Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may still be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability of adopting configurations with worse objective-function values), the algorithm can converge on the globally optimal configuration.
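The conventional SA loop described above can be sketched in a few lines. This is a minimal illustration, not the disclosed implementation; the objective function, linear cooling schedule, and shrink rate are all assumptions:

```python
import math
import random

def simulated_annealing(objective, lower, upper, n_steps=5000, t0=1.0, seed=0):
    """Minimize `objective` over [lower, upper] with a shrinking search region."""
    rng = random.Random(seed)
    x = rng.uniform(lower, upper)            # random starting configuration
    f = objective(x)
    best_x, best_f = x, f
    for step in range(n_steps):
        frac = step / n_steps
        temp = t0 * (1.0 - frac)             # annealing temperature lowers over time
        radius = 0.5 * (upper - lower) * (1.0 - frac)   # search region shrinks
        cand = min(upper, max(lower, x + rng.uniform(-radius, radius)))
        f_cand = objective(cand)
        # Accept improvements outright; accept worse configurations with a
        # temperature-dependent probability (Metropolis criterion).
        if f_cand < f or rng.random() < math.exp(-(f_cand - f) / max(temp, 1e-12)):
            x, f = cand, f_cand
            if f < best_f:
                best_x, best_f = x, f
    return best_x, best_f

x, fx = simulated_annealing(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
```

With a simple quadratic objective the walk settles near the minimum at x = 2 as the temperature and search radius go to zero.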
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably the basic principles that a starting configuration is randomly selected from within the parameter space, that the algorithm tests other configurations with the goal of finding the globally optimal solution, and that the region from which new configurations can be selected shrinks as the search continues. The key difference is that the SA algorithm follows a single path, or trajectory, through parameter space from the starting point to the globally optimal solution, while the RBSA algorithm follows many trajectories; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space, to improve search efficiency by allowing fast fine-tuning of the continuous variables within the trust region at that configuration point.
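The branching structure and shrinking trust region can be sketched as follows. The branch counts, shrink factor, and greedy selection rule (no temperature acceptance) are simplifications for illustration, not details from the disclosure:

```python
import random

def rbsa(objective, lower, upper, n_branches=4, n_candidates=8, n_rounds=30, seed=1):
    """Recursive-branching search: each round, every surviving branch spawns
    candidate configurations inside its trust region; the best candidates
    overall become the next round's branches, and the trust region shrinks
    so each trajectory converges while several regions are explored at once."""
    rng = random.Random(seed)
    radius = (upper - lower) / 2.0
    branches = [rng.uniform(lower, upper) for _ in range(n_branches)]
    for _ in range(n_rounds):
        candidates = []
        for b in branches:
            for _ in range(n_candidates):
                # Trust region naturally enforces the bound constraints.
                c = min(upper, max(lower, b + rng.uniform(-radius, radius)))
                candidates.append(c)
        candidates.sort(key=objective)
        branches = candidates[:n_branches]   # keep the most promising regions
        radius *= 0.8                        # shrink each branch's trust region
    return min(branches, key=objective)

best = rbsa(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

A constrained gradient search on the continuous variables around each branch, as item 3 describes, could be slotted in after the candidate selection step.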
NASA Astrophysics Data System (ADS)
Jia, Bing
2014-03-01
A comb-shaped chaotic region has been simulated in multiple two-dimensional parameter spaces of the Hindmarsh-Rose (HR) neuron model in many recent studies, and it can account for almost all of the previously simulated bifurcation processes with chaos in neural firing patterns. In the present paper, a comb-shaped chaotic region in a two-dimensional parameter space was reproduced; it exhibited different period-adding bifurcation scenarios with chaos as one parameter was varied while the other was fixed at different levels. In biological experiments, different period-adding bifurcation scenarios with chaos, obtained by decreasing the extracellular calcium concentration, were observed in some neural pacemakers at different levels of extracellular 4-aminopyridine concentration and in other pacemakers at different levels of extracellular caesium concentration. The deterministic dynamics of the experimental chaotic firings were investigated using nonlinear time series analysis. The period-adding bifurcations with chaos observed in the experiments resembled those simulated in the comb-shaped chaotic region using the HR model. The experimental results show that period-adding bifurcations with chaos are preserved in different two-dimensional parameter spaces, which provides evidence for the existence of the comb-shaped chaotic region and a demonstration of the simulation results in different two-dimensional parameter spaces of the HR neuron model. The results also reveal relationships between different firing patterns in two-dimensional parameter spaces.
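The HR model behind these simulations is a three-variable ODE system; a minimal forward-Euler integration with commonly used textbook parameter values (the paper's exact parameter ranges are not reproduced here) can be sketched as:

```python
def hindmarsh_rose(I=3.0, dt=0.01, n_steps=20000,
                   a=1.0, b=3.0, c=1.0, d=5.0, r=0.006, s=4.0, x_r=-1.6):
    """Forward-Euler integration of the Hindmarsh-Rose neuron model.
    x: membrane potential, y: fast recovery variable, z: slow adaptation."""
    x, y, z = -1.0, 0.0, 2.0
    xs = []
    for _ in range(n_steps):
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_r) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x)
    return xs

trace = hindmarsh_rose()   # bursting membrane-potential trace for I = 3.0
```

Sweeping two of the parameters (e.g. I and r) over a grid and classifying the resulting firing pattern is how two-dimensional parameter-space diagrams like the comb-shaped region are produced.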
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from the literature, often from a single published value, and are then "tuned" using somewhat arbitrary trial-and-error methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFT and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the literature and, where those were not available, by using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration.
We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts of NEP, C storage, ET, and runoff, thereby identifying a highly important source of DGVM uncertainty.
Exploring Replica-Exchange Wang-Landau sampling in higher-dimensional parameter space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valentim, Alexandra; Rocha, Julio C. S.; Tsai, Shan-Ho
We considered a higher-dimensional extension of the replica-exchange Wang-Landau algorithm to perform a random walk in the energy and magnetization space of the two-dimensional Ising model. This hybrid scheme combines the advantages of the Wang-Landau and replica-exchange algorithms, and the one-dimensional version of this approach has been shown to be very efficient and to scale well, up to several thousand computing cores. The approach allows us to split the parameter space of the simulated system into several pieces and still perform a random walk over the entire parameter range, ensuring the ergodicity of the simulation. Previous work, in which a similar scheme of parallel simulation was implemented without replica exchange and with a different way of combining the results from the pieces, led to discontinuities in the final density of states over the entire range of parameters. From our simulations, it appears that the replica-exchange Wang-Landau algorithm is able to overcome this difficulty, allowing exploration of a higher-dimensional parameter phase space by keeping track of the joint density of states.
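A single-walker Wang-Landau random walk (the building block that the replica-exchange scheme parallelizes over pieces of parameter space) can be sketched on a toy model whose density of states is known exactly: n non-interacting spins, with the "energy" taken as the number of up spins, so g(E) is the binomial coefficient. The flatness threshold and modification-factor schedule below are conventional choices, not values from the paper:

```python
import math
import random

def wang_landau_coins(n_spins=10, flatness=0.8, ln_f_final=1e-4, seed=2):
    """Wang-Landau estimate of ln g(E) for n_spins non-interacting spins,
    with E = number of up spins; the exact g(E) is C(n_spins, E)."""
    rng = random.Random(seed)
    spins = [0] * n_spins
    E = 0
    ln_g = [0.0] * (n_spins + 1)   # running estimate of ln g(E)
    hist = [0] * (n_spins + 1)     # visit histogram for the flatness check
    ln_f = 1.0                     # modification factor, halved when flat
    while ln_f > ln_f_final:
        for _ in range(10000):
            i = rng.randrange(n_spins)
            E_new = E + (1 if spins[i] == 0 else -1)
            # Accept with probability min(1, g(E)/g(E_new)) so the walk
            # visits all energies roughly uniformly.
            if rng.random() < math.exp(min(0.0, ln_g[E] - ln_g[E_new])):
                spins[i] ^= 1
                E = E_new
            ln_g[E] += ln_f
            hist[E] += 1
        if min(hist) > flatness * sum(hist) / len(hist):
            hist = [0] * (n_spins + 1)   # histogram is flat: refine f
            ln_f /= 2.0
    return ln_g

ln_g = wang_landau_coins()
ratio = math.exp(ln_g[5] - ln_g[0])   # estimates C(10,5)/C(10,0) = 252
```

The replica-exchange variant runs one such walker per overlapping energy (or energy-magnetization) window and swaps configurations between neighboring windows, which is what removes the discontinuities mentioned above.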
An open-source job management framework for parameter-space exploration: OACIS
NASA Astrophysics Data System (ADS)
Murase, Y.; Uchitane, T.; Ito, N.
2017-11-01
We present an open-source software framework for parameter-space exploration, named OACIS, which is useful for managing vast numbers of simulation jobs and results in a systematic way. Recent development of high-performance computers has enabled us to explore parameter spaces comprehensively; however, in such cases, manual management of the workflow is practically impossible. OACIS was developed to reduce the cost of these repetitive tasks by automating job submission and data management. In this article, an overview of OACIS as well as a getting-started guide are presented.
Variations of cosmic large-scale structure covariance matrices across parameter space
NASA Astrophysics Data System (ADS)
Reischke, Robert; Kiessling, Alina; Schäfer, Björn Malte
2017-03-01
The likelihood function for cosmological parameters, given by e.g. weak lensing shear measurements, depends on contributions to the covariance induced by the non-linear evolution of the cosmic web. As highly non-linear clustering to date has only been described by numerical N-body simulations in a reliable and sufficiently precise way, the necessary computational costs for estimating those covariances at different points in parameter space are tremendous. In this work, we describe the change of the matter covariance and the weak lensing covariance matrix as a function of cosmological parameters by constructing a suitable basis, where we model the contribution to the covariance from non-linear structure formation using Eulerian perturbation theory at third order. We show that our formalism is capable of dealing with large matrices and reproduces expected degeneracies and scaling with cosmological parameters in a reliable way. Comparing our analytical results to numerical simulations, we find that the method describes the variation of the covariance matrix found in the SUNGLASS weak lensing simulation pipeline within the errors at one-loop and tree-level for the spectrum and the trispectrum, respectively, for multipoles up to ℓ ≤ 1300. We show that it is possible to optimize the sampling of parameter space where numerical simulations should be carried out by minimizing interpolation errors and propose a corresponding method to distribute points in parameter space in an economical way.
Planetary and Space Simulation Facilities PSI at DLR for Astrobiology
NASA Astrophysics Data System (ADS)
Rabbow, E.; Rettberg, P.; Panitz, C.; Reitz, G.
2008-09-01
Ground-based experiments, conducted in the controlled planetary and space environment simulation facilities PSI at DLR, are used to investigate astrobiological questions and to complement the corresponding experiments in LEO, for example on free-flying satellites or on space exposure platforms on the ISS. In-orbit exposure facilities can only accommodate a limited number of experiments for exposure to space parameters like high vacuum, intense radiation of galactic and solar origin and microgravity, sometimes also technically adapted to simulate extraterrestrial planetary conditions like those on Mars. Ground-based experiments in carefully equipped and monitored simulation facilities allow the investigation of the effects of simulated single environmental parameters, and selected combinations of them, on a much wider variety of samples. In PSI at DLR, international science consortia performed astrobiological investigations and space experiment preparations, exposing organic compounds and a wide range of microorganisms, from bacterial spores to complex microbial communities, lichens and even animals like tardigrades, to simulated planetary or space environment parameters, in pursuit of exobiological questions on the resistance to extreme environments and the origin and distribution of life. The Planetary and Space Simulation Facilities PSI of the Institute of Aerospace Medicine at DLR in Köln, Germany, provide high vacuum of controlled residual composition, ionizing radiation from an X-ray tube, polychromatic UV radiation in the range of 170-400 nm, VIS and IR or individual monochromatic UV wavelengths, and temperature regulation from -20°C to +80°C at the sample site, individually or in selected combinations, in 9 modular facilities of varying sizes; the facilities are presented together with selected experiments performed in them.
Nagasaki, Masao; Yamaguchi, Rui; Yoshida, Ryo; Imoto, Seiya; Doi, Atsushi; Tamada, Yoshinori; Matsuno, Hiroshi; Miyano, Satoru; Higuchi, Tomoyuki
2006-01-01
We propose an automatic construction method for the hybrid functional Petri net as a simulation model of biological pathways. The problems we consider are how to choose the values of parameters and how to set the network structure. Usually, these unknown factors are tuned empirically so that the simulation results are consistent with biological knowledge. Obviously, this approach is limited by the size of the network of interest. To extend the capability of the simulation model, we propose the use of a data assimilation approach that was originally established in the field of geophysical simulation science. We provide a genomic data assimilation framework that establishes a link between our simulation model and observed data, such as microarray gene expression data, by using a nonlinear state space model. A key idea of our genomic data assimilation is that the unknown parameters in the simulation model are treated as parameters of the state space model, and the estimates are obtained as maximum a posteriori estimators. In the parameter estimation process, the simulation model is used to generate the system model in the state space model. Such a formulation enables us to handle both model construction and parameter tuning within a framework of Bayesian statistical inference. In particular, the Bayesian approach provides a way of controlling overfitting during parameter estimation, which is essential for constructing a reliable biological pathway. We demonstrate the effectiveness of our approach using synthetic data. As a result, parameter estimation using genomic data assimilation works very well, and the network structure is suitably selected.
Naden, Levi N; Shirts, Michael R
2016-04-12
We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost to estimate thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. The existence of regions of poor configuration space overlap is detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water.
We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
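The core of the linear basis function trick can be illustrated with a toy potential: tabulate each basis term once per sampled configuration, then re-evaluate the total energy for any parameter combination as a dot product instead of a new simulation. The basis terms and parameter sets below are invented for illustration; the actual MBAR reweighting step (available e.g. in the pymbar package) is omitted:

```python
import random

# Potential energy written as a linear combination of basis functions:
#   U(x; lam) = lam[0] * u0(x) + lam[1] * u1(x)
# Once u0, u1 are tabulated for the sampled configurations, U can be
# re-evaluated for ANY parameter set without re-running the simulation.

def basis_energies(xs):
    u0 = [x * x for x in xs]      # hypothetical harmonic-like basis term
    u1 = [x ** 4 for x in xs]     # hypothetical quartic basis term
    return u0, u1

def energies_for(lam, u0, u1):
    """Total energy of every stored configuration at parameters `lam`."""
    return [lam[0] * a + lam[1] * b for a, b in zip(u0, u1)]

rng = random.Random(3)
samples = [rng.gauss(0.0, 1.0) for _ in range(1000)]  # stand-in configurations
u0, u1 = basis_energies(samples)                      # tabulated once
U_a = energies_for((1.0, 0.0), u0, u1)                # parameter set A
U_b = energies_for((0.5, 0.1), u0, u1)                # set B, no new sampling
```

This is what makes evaluating 130,000 parameter combinations tractable: the expensive sampling is done once, and each new parameter set costs only a linear combination.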
Planetary and Space Simulation Facilities (PSI) at DLR
NASA Astrophysics Data System (ADS)
Panitz, Corinna; Rabbow, E.; Rettberg, P.; Kloss, M.; Reitz, G.; Horneck, G.
2010-05-01
The Planetary and Space Simulation facilities at DLR offer the possibility to expose biological and physical samples individually or integrated into space hardware to defined and controlled space conditions like ultra high vacuum, low temperature and extraterrestrial UV radiation. An x-ray facility stands for the simulation of the ionizing component at the disposal. All of the simulation facilities are required for the preparation of space experiments: - for testing of the newly developed space hardware - for investigating the effect of different space parameters on biological systems as a preparation for the flight experiment - for performing the 'Experiment Verification Tests' (EVT) for the specification of the test parameters - and 'Experiment Sequence Tests' (EST) by simulating sample assemblies, exposure to selected space parameters, and sample disassembly. To test the compatibility of the different biological and chemical systems and their adaptation to the opportunities and constraints of space conditions a profound ground support program has been developed among many others for the ESA facilities of the ongoing missions EXPOSE-R and EXPOSE-E on board of the International Space Station ISS . Several experiment verification tests EVTs and an experiment sequence test EST have been conducted in the carefully equipped and monitored planetary and space simulation facilities PSI of the Institute of Aerospace Medicine at DLR in Cologne, Germany. These ground based pre-flight studies allowed the investigation of a much wider variety of samples and the selection of the most promising organisms for the flight experiment. EXPOSE-E had been attached to the outer balcony of the European Columbus module of the ISS in February 2008 and stayed for 1,5 years in space; EXPOSE-R has been attached to the Russian Svezda module of the ISS in spring 2009 and mission duration will be approx. 1,5 years. 
The missions will give new insights into the survivability of terrestrial organisms in space and will contribute to the understanding of the organic chemistry processes in space, the biological adaptation strategies to extreme conditions, e.g. on early Earth and Mars, and the distribution of life beyond its planet of origin The results gained during the simulation experiments demonstrated mission preparation as a basic requirement for successful and significant results of every space flight experiment. Hence, the Mission preparation program that was performed in the context of the space missions EXPOSE-E and EXPOSE-R proofed the outstanding importance and accentuated need for ground based experiments before and during a space mission. The facilities are also necessary for the performance of the ground control experiment during the mission, the so-called Mission Simulation Test (MST) under simulated space conditions, by parallel exposure of samples to simulated space parameters according to flight data received by telemetry. Finally the facilities also provide the possibility to simulate the surface and climate conditions of the planet Mars. In this way they offer the possibility to investigate under simulated Mars conditions the chances for development of life on Mars and to gain previous knowledge for the search for life on today's Mars and in this context especially the parameters for a manned mission to Mars. References [1] Rabbow E, Rettberg P, Panitz C, Drescher J, Horneck G, Reitz G (2005) SSIOUX - Space Simulation for Investigating Organics, Evolution and Exobiology, Adv. Space Res. 36 (2) 297-302, doi:10.1016/j.asr.2005.08.040Aman, A. and Bman, B. (1997) JGR, 90,1151-1154. [2] Fekete A, Modos K, Hegedüs M, Kovacs G, Ronto Gy, Peter A, Lammer H, Panitz C (2005) DNA Damage under simulated extraterrestrial conditions in bacteriophage T7 Adv. Space Res. 305-310Aman, A. et al. (1997) Meteoritics & Planet. Sci., 32,A74. 
[3] Cockell Ch, Schuerger AC, Billi D., Friedmann EI, Panitz C (2005) Effects of a Simulated Martian UV Flux on the Cyanobacterium, Chroococcidiopsis sp. 029, Astrobiology, 5/2 127-140Aman, A. (1996) LPS XXVII, 1344-1 [4] de la Torre Noetzel, R.; Sancho, L.G.; Pintado,A.; Rettberg, Petra; Rabbow, Elke; Panitz,Corinna; Deutschmann, U.; Reina, M.; Horneck, Gerda (2007): BIOPAN experiment LICHENS on the Foton M2 mission Pre-flight verification tests of the Rhizocarpon geographicum-granite ecosystem. COSPAR [Hrsg.]: Advances in Space Research, 40, Elsevier, S. 1665 - 1671, DOI 10.1016/j.asr.2007.02.022
pypet: A Python Toolkit for Data Management of Parameter Explorations
Meyer, Robert; Obermayer, Klaus
2016-01-01
pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines. PMID:27610080
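The distinction between a grid search and the arbitrary trajectory sampling pypet supports can be shown with a plain-Python sketch. pypet's actual API (its `Environment`, trajectory objects, and HDF5 storage) is not reproduced here; `simulate` is a stand-in for a user's model:

```python
import itertools

def simulate(rate, size):
    """Stand-in for a user's numerical experiment."""
    return rate * size

# Grid search: the cartesian product of per-parameter value lists.
grid = list(itertools.product([0.1, 0.2], [1, 2, 3]))

# Arbitrary trajectory: an explicit list of parameter points that need not
# lie on any grid -- the kind of exploration pypet's trajectories allow.
trajectory = [(0.1, 1), (0.15, 2), (0.3, 3), (0.12, 1)]

# Keep parameters and results tightly linked, as pypet does in its HDF5 file.
results = {params: simulate(*params) for params in trajectory}
```

Storing each result keyed by its exact parameter point is the "tight link between parameters and results" that the abstract credits with promoting reproducibility.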
A Tool for Parameter-space Explorations
NASA Astrophysics Data System (ADS)
Murase, Yohsuke; Uchitane, Takeshi; Ito, Nobuyasu
Software for managing simulation jobs and results, named "OACIS", is presented. It controls a large number of simulation jobs executed on various remote servers, keeps the results organized, and manages the analyses performed on them. The software has a web-browser front end, and users can easily submit various jobs to appropriate remote hosts from a web browser. After the jobs finish, all result files are automatically downloaded from the computational hosts and stored in a traceable way, together with logs of the date, host, and elapsed time of each job. Some visualization functions are also provided so that users can easily grasp an overview of results distributed in a high-dimensional parameter space. OACIS is thus especially beneficial for complex simulation models with many parameters, for which extensive parameter searches are required. Using the OACIS API, it is easy to write code that automates parameter selection depending on previous simulation results. A few examples of automated parameter selection are also demonstrated.
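Automated parameter selection driven by previous results can be as simple as a feedback loop; the sketch below bisects a one-dimensional parameter to locate where a stubbed simulation response crosses zero. The response function is hypothetical, and with OACIS the `run_simulation` call would go through its job-submission API rather than a local function:

```python
def run_simulation(x):
    """Stand-in for submitting a job with parameter x and reading its result."""
    return x * x - 2.0   # hypothetical response curve

# Choose each new parameter value from the sign of earlier results (bisection).
lo, hi = 0.0, 2.0
for _ in range(40):
    mid = (lo + hi) / 2.0
    if run_simulation(mid) < 0.0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2.0   # converges toward sqrt(2)
```

More elaborate strategies (e.g. refining a grid around the best result so far) follow the same pattern: query stored results, pick the next point, submit.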
NASA Astrophysics Data System (ADS)
Ying, Shen; Li, Lin; Gao, Yurong
2009-10-01
Spatial visibility analysis is an important approach to studying pedestrian behavior, because visual perception of space is the most direct way to acquire environmental information and guide one's movements. Based on agent modeling and a top-down method, this paper develops a framework for analyzing pedestrian flow as a function of visibility. We use viewsheds in the visibility analysis and impose the resulting parameters on agent simulation to direct agent motion in urban space. We analyze pedestrian behavior at the micro-scale and macro-scale of urban open space. An individual agent uses visual affordance to determine its direction of motion at the micro-scale of an urban street or district. At the macro-scale, we compare the distribution of pedestrian flow with the spatial configuration of the urban environment, and mine the relationship between pedestrian flow and the distribution of urban facilities and urban functions. The paper first computes the visibility conditions at vantage points in urban open space, such as a street network, and quantifies the visibility parameters. The multiple agents use these visibility parameters to decide their directions of motion, and pedestrian flow finally reaches a stable state in the urban environment through multi-agent simulation. The paper then compares the morphology of the visibility parameters and the pedestrian distribution with the urban function and facility layout to confirm the consistency between them, which can be used for decision support in urban design.
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...
2017-11-20
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Although simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters, following a variance-based approach.
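A space-filling Latin hypercube sample can be generated with a few lines of standard-library Python; the three parameter bounds below are placeholders for illustration, not the VIC parameter ranges used in the study:

```python
import random

def latin_hypercube(n_samples, bounds, seed=4):
    """Latin hypercube sample: each parameter's range is cut into n_samples
    strata, and each stratum contributes exactly one point per parameter."""
    rng = random.Random(seed)
    columns = []
    for lower, upper in bounds:
        # One random point per stratum, then shuffle strata across samples.
        pts = [lower + (upper - lower) * (i + rng.random()) / n_samples
               for i in range(n_samples)]
        rng.shuffle(pts)
        columns.append(pts)
    return list(zip(*columns))

# e.g. 10 samples of 3 hypothetical parameters (bounds assumed)
sample = latin_hypercube(10, [(0.0, 1.0), (0.1, 0.9), (50.0, 500.0)])
```

Compared with plain random sampling, the stratification guarantees coverage of every slice of each parameter's range, which is why it pairs well with statistical emulation over a 46-dimensional space.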
Observability of ionospheric space-time structure with ISR: A simulation study
NASA Astrophysics Data System (ADS)
Swoboda, John; Semeter, Joshua; Zettergren, Matthew; Erickson, Philip J.
2017-02-01
The sources of error from electronically steerable array (ESA) incoherent scatter radar (ISR) systems are investigated both theoretically and with use of an open-source ISR simulator, developed by the authors, called Simulator for ISR (SimISR). The main sources of error incorporated in the simulator include statistical uncertainty, which arises due to the nature of the measurement mechanism, and the inherent space-time ambiguity of the sensor. SimISR can take a field of plasma parameters, parameterized by time and space, and create simulated ISR data at the scattered electric field (i.e., complex receiver voltage) level, subsequently processing these data to show possible reconstructions of the original parameter field. To demonstrate general utility, we show a number of simulation examples, with two cases using data from a self-consistent multifluid transport model. Results highlight the significant influence of the forward model of the ISR process and the resulting statistical uncertainty on plasma parameter measurements and the core experiment design trade-offs that must be made when planning observations. These conclusions further underscore the utility of this class of measurement simulator as a design tool for more optimal experiment design efforts using flexible ESA class ISR systems.
Development and testing of a mouse simulated space flight model
NASA Technical Reports Server (NTRS)
Sonnenfeld, Gerald
1987-01-01
The development and testing of a mouse model for simulating some aspects of the weightlessness that occurs during space flight, and the carrying out of immunological experiments on animals undergoing space flight, are examined. The mouse model developed was an antiorthostatic, hypokinetic, hypodynamic suspension model similar to one used with rats. The study was divided into two parts. The first involved determining which immunological parameters should be observed in animals flown during space flight or studied in the suspension model. The second involved suspending mice and determining which of those immunological parameters were altered by the suspension. Rats that were actually flown on Space Shuttle mission SL-3 were used to test the hypotheses.
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models
Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
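The half-normal detection function at the core of such spatial capture-recapture simulations can be sketched in a few lines. This is an illustrative simplification, not the authors' code; the names `p0` (baseline detection probability) and `sigma` (spatial scale) follow common SCR notation, and the toy simulator below draws binomial capture counts from known activity centers.

```python
import math
import random

def detection_prob(p0, sigma, dist):
    """Half-normal detection model used in many SCR studies: p0 is the
    baseline detection probability at a trap on the activity center;
    sigma is the spatial scale parameter related to home range size."""
    return p0 * math.exp(-dist ** 2 / (2 * sigma ** 2))

def simulate_captures(centers, traps, p0=0.2, sigma=2.0, occasions=5, seed=1):
    """Simulate capture counts (individuals x traps) over several
    sampling occasions, given known activity centers."""
    rng = random.Random(seed)
    history = []
    for cx, cy in centers:
        row = []
        for tx, ty in traps:
            p = detection_prob(p0, sigma, math.hypot(cx - tx, cy - ty))
            row.append(sum(rng.random() < p for _ in range(occasions)))
        history.append(row)
    return history
```

The paper's guidance, traps spaced no more than about twice the spatial scale parameter, can be explored directly by varying the trap grid against a fixed `sigma`.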
Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J
2013-04-21
Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of the spiral CT scan: scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angle. Parameters such as pitch, slice thickness and translation per rotation were also incorporated to make the new phase-space source model specific to spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' routine in the srcxyznrc.mortran and dosxyznrc.mortran files of the DOSXYZnrc user code. To verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator, for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. The acquired 2D and 3D dose distributions were then analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were measured for a patient CT scan protocol using radiochromic film and compared with the MC simulations. The new phase-space source model was found to simulate spiral CT scanning accurately in a single simulation run. It also reproduced the dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles matched the film measurements overall to within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system.
This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.
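The table-movement idea described above, shifting the isocenter longitudinally as a function of accumulated beam angle, can be sketched as follows. The relation "translation per rotation = pitch x collimation" is a common CT convention assumed here; the exact parameterization used in DOSXYZnrc may differ.

```python
import math

def spiral_isocenter_z(z0_mm, pitch, collimation_mm, beam_angle_rad):
    """Longitudinal isocenter position as a function of accumulated beam
    angle: the table advances by (pitch * collimation) per full rotation,
    so z grows linearly with angle."""
    return z0_mm + pitch * collimation_mm * beam_angle_rad / (2 * math.pi)
```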
NASA Astrophysics Data System (ADS)
Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.
2013-04-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Xiexiaomen; Tutuncu, Azra; Eustes, Alfred
Enhanced Geothermal Systems (EGS) could potentially use technological advancements in the coupled implementation of horizontal drilling and multistage hydraulic fracturing techniques from tight oil and shale gas reservoirs, along with improvements in reservoir simulation techniques, to design and create EGS reservoirs. In this study, a commercial hydraulic fracture simulation package, Mangrove by Schlumberger, was used in an EGS model with widely distributed pre-existing natural fractures to model fracture propagation during the creation of a complex fracture network. The main goal of this study is to investigate optimum treatment parameters for creating multiple large, planar fractures to hydraulically connect a horizontal injection well and a horizontal production well that are 10,000 ft. deep and spaced 500 ft. apart. A matrix of simulations was carried out to determine the influence of reservoir and treatment parameters on preventing (or aiding) the creation of large planar fractures. The reservoir parameters investigated include the in-situ stress state and properties of the natural fracture set, such as the primary and secondary fracture orientation, average fracture length, and average fracture spacing. The treatment parameters investigated were fluid viscosity, proppant concentration, pump rate, and pump volume. A final simulation with optimized design parameters was performed. It indicated that high fluid viscosity, high proppant concentration, and large pump volume and pump rate tend to minimize the complexity of the created fracture network. Additionally, a reservoir with 'friendly' formation characteristics, such as large stress anisotropy, a natural fracture set parallel to the maximum horizontal principal stress (SHmax), and large natural fracture spacing, also promotes the creation of large planar fractures while minimizing fracture complexity.
Hierarchical optimization for neutron scattering problems
Bao, Feng; Archibald, Rick; Bansal, Dipanshu; ...
2016-03-14
In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.
Hierarchical optimization for neutron scattering problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Feng; Archibald, Rick; Bansal, Dipanshu
Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.
Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang
2017-01-01
Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.
Experimental study and simulation of space charge stimulated discharge
NASA Astrophysics Data System (ADS)
Noskov, M. D.; Malinovski, A. S.; Cooke, C. M.; Wright, K. A.; Schwab, A. J.
2002-11-01
The electrical discharge of volume distributed space charge in poly(methylmethacrylate) (PMMA) has been investigated both experimentally and by computer simulation. The experimental space charge was implanted in dielectric samples by exposure to a monoenergetic electron beam of 3 MeV. Electrical breakdown through the implanted space charge region within the sample was initiated by a local electric field enhancement applied to the sample surface. A stochastic-deterministic dynamic model for electrical discharge was developed and used in a computer simulation of these breakdowns. The model employs stochastic rules to describe the physical growth of the discharge channels, and deterministic laws to describe the electric field, the charge, and energy dynamics within the discharge channels and the dielectric. Simulated spatial-temporal and current characteristics of the expanding discharge structure during physical growth are quantitatively compared with the experimental data to confirm the discharge model. It was found that a single fixed set of physically based dielectric parameter values was adequate to simulate the complete family of experimental space charge discharges in PMMA. It is proposed that such a set of parameters also provides a useful means to quantify the breakdown properties of other dielectrics.
ATTIRE (analytical tools for thermal infrared engineering): A sensor simulation and modeling package
NASA Astrophysics Data System (ADS)
Jaggi, S.
1993-02-01
The Advanced Sensor Development Laboratory (ASDL) at the Stennis Space Center develops, maintains and calibrates remote sensing instruments for the National Aeronautics & Space Administration (NASA). To perform system design trade-offs, analysis, and establish system parameters, ASDL has developed a software package for analytical simulation of sensor systems. This package called 'Analytical Tools for Thermal InfraRed Engineering' - ATTIRE, simulates the various components of a sensor system. The software allows each subsystem of the sensor to be analyzed independently for its performance. These performance parameters are then integrated to obtain system level information such as Signal-to-Noise Ratio (SNR), Noise Equivalent Radiance (NER), Noise Equivalent Temperature Difference (NETD) etc. This paper describes the uses of the package and the physics that were used to derive the performance parameters.
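The system-level figures of merit named above follow from simple ratios, which a minimal sketch can make concrete. The assumptions here are that NER is expressed in the same radiance units as the signal, and that NETD is approximated as NER divided by the derivative of scene radiance with respect to temperature; ATTIRE's actual subsystem-level derivations are more detailed.

```python
def system_snr(signal_radiance, ner):
    """Signal-to-noise ratio given the scene radiance and the sensor's
    noise-equivalent radiance (NER), in the same units."""
    return signal_radiance / ner

def netd(ner, dradiance_dtemp):
    """Noise-equivalent temperature difference: the scene temperature
    change whose radiance change just equals the noise floor."""
    return ner / dradiance_dtemp
```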
Expanding the catalog of binary black-hole simulations: aligned-spin configurations
NASA Astrophysics Data System (ADS)
Chu, Tony; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; SXS Collaboration
2015-04-01
A major goal of numerical relativity is to model the inspiral and merger of binary black holes through sufficiently accurate and long simulations, to enable the successful detection of gravitational waves. However, covering the full parameter space of binary configurations is a computationally daunting task. The SXS Collaboration has made important progress in this direction recently, with a catalog of 174 publicly available binary black-hole simulations [black-holes.org/waveforms]. Nevertheless, the parameter-space coverage remains sparse, even for non-precessing binaries. In this talk, I will describe an addition to the SXS catalog to improve its coverage, consisting of 95 new simulations of aligned-spin binaries with moderate mass ratios and dimensionless spins as high as 0.9. Some applications of these new simulations will also be mentioned.
A SLAM II simulation model for analyzing space station mission processing requirements
NASA Technical Reports Server (NTRS)
Linton, D. G.
1985-01-01
Space station mission processing is modeled via the SLAM 2 simulation language on an IBM 4381 mainframe and an IBM PC microcomputer with 620K RAM, two double-sided disk drives and an 8087 coprocessor chip. Using a time phased mission (payload) schedule and parameters associated with the mission, orbiter (space shuttle) and ground facility databases, estimates for ground facility utilization are computed. Simulation output associated with the science and applications database is used to assess alternative mission schedules.
EVA/ORU model architecture using RAMCOST
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.; Park, Eui H.; Wang, Y. M.; Bretoi, R.
1990-01-01
A parametrically driven simulation model is presented to provide detailed insight into the effects of various input parameters on the life testing of a modular space suit. The RAMCOST model employed is a user-oriented simulation model for studying the life-cycle costs of designs under conditions of uncertainty. The results obtained from the simulated EVA model are used to assess various mission life-testing parameters, such as the number of joint motions per EVA cycle time, part availability, and the number of inspection requirements. RAMCOST first simulates EVA completion for NASA applications using a probabilistic PERT-like network. With the mission time heuristically determined, RAMCOST then models different orbital-replacement-unit policies, with special application to the astronaut's space suit functional designs.
NASA Astrophysics Data System (ADS)
Donà, G.; Faletra, M.
2015-09-01
This paper presents the TT&C performance simulator toolkit developed internally at Thales Alenia Space Italia (TAS-I) to support the design of TT&C subsystems for space exploration and scientific satellites. The simulator has a modular architecture and has been designed with a model-based approach using standard engineering tools such as MATLAB/SIMULINK and mission analysis tools (e.g. STK). The simulator is easily reconfigurable to fit different types of satellites, different mission requirements and different scenario parameters. This paper provides a brief description of the simulator architecture together with two examples of applications used to demonstrate some of the simulator's capabilities.
Burgette, Lane F; Reiter, Jerome P
2013-06-01
Multinomial outcomes with many levels can be challenging to model. Information typically accrues slowly with increasing sample size, yet the parameter space expands rapidly with additional covariates. Shrinking all regression parameters towards zero, as often done in models of continuous or binary response variables, is unsatisfactory, since setting parameters equal to zero in multinomial models does not necessarily imply "no effect." We propose an approach to modeling multinomial outcomes with many levels based on a Bayesian multinomial probit (MNP) model and a multiple shrinkage prior distribution for the regression parameters. The prior distribution encourages the MNP regression parameters to shrink toward a number of learned locations, thereby substantially reducing the dimension of the parameter space. Using simulated data, we compare the predictive performance of this model against two other recently-proposed methods for big multinomial models. The results suggest that the fully Bayesian, multiple shrinkage approach can outperform these other methods. We apply the multiple shrinkage MNP to simulating replacement values for areal identifiers, e.g., census tract indicators, in order to protect data confidentiality in public use datasets.
Implementation of an open-scenario, long-term space debris simulation approach
NASA Astrophysics Data System (ADS)
Stupl, J.; Nelson, B.; Faber, N.; Perez, A.; Carlino, R.; Yang, F.; Henze, C.; Karacalioglu, A.; O'Toole, C.; Swenson, J.
This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris remediation, including the LightForce space debris collision avoidance scheme. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several (5-15) days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions, space object parameters, and the orbital parameters of the conjunctions, and because such maneuvers take place on much smaller timescales than 5-15 days. For example, LightForce only changes the orbit of a particular object (aiming to reduce the probability of collision); it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods with regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in LEO, propagates all objects with high precision, and advances with variable-sized time steps as small as one second. It allows the assessment of the (potential) impact of changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space-track object catalog in LEO.
We then use a high-precision propagator to propagate all objects over the entire simulation duration. If collisions are detected, the appropriate number of debris objects is created and inserted into the simulation framework. Depending on the scenario, further objects, e.g. due to new launches, can be added. At the end of the simulation, the total number of objects above a cut-off size and the number of detected collisions provide benchmark parameters for the comparison between scenarios. The simulation approach is computationally intensive as it involves tens of thousands of objects; hence we use a highly parallel approach employing up to a thousand cores on the NASA Pleiades supercomputer for a single run. This paper describes our simulation approach, the status of its implementation, the approach to developing scenarios, and examples of first test runs.
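A toy version of the pairwise conjunction screen and the variable-size time step can illustrate the approach. The distance threshold, step bounds, and the rule tying step size to the closest approach distance are illustrative assumptions for this sketch, not the paper's actual algorithm.

```python
import math

def screen_conjunctions(states, threshold_km=5.0):
    """Pairwise screen: flag object pairs closer than the threshold.
    `states` maps object id -> (x, y, z) position in km."""
    flagged = []
    ids = sorted(states)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            d = math.dist(states[a], states[b])
            if d < threshold_km:
                flagged.append((a, b, d))
    return flagged

def adapt_step(closest_km, coarse_s=300.0, fine_s=1.0):
    """Variable-size time step: shrink toward a one-second step as the
    closest pair approaches; coarsen when everything is far apart."""
    return max(fine_s, min(coarse_s, closest_km))
```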
Just-in-time connectivity for large spiking networks.
Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L
2008-11-01
The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
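The core just-in-time idea, regenerating a cell's outgoing connectivity deterministically instead of storing it, can be sketched with a seeded random stream per presynaptic cell. The fan-out, delay, and weight ranges below are arbitrary illustrative values, not NEURON's JitCon implementation; the essential property is that reseeding from the presynaptic id reproduces the same draws every time the cell fires.

```python
import random

FAN_OUT = 3        # illustrative: outgoing synapses per cell
SEED_BASE = 12345  # global seed offset making regeneration reproducible

def connectivity(pre_id, n_post):
    """Regenerate this cell's outgoing connections on demand rather than
    storing them: seeding the generator with the presynaptic id makes the
    same (target, delay, weight) triples reappear on every call."""
    rng = random.Random(SEED_BASE + pre_id)
    targets = rng.sample(range(n_post), FAN_OUT)
    # (postsynaptic id, delay in ms, weight) -- ranges are arbitrary
    return [(t, 1.0 + rng.random(), 0.5 + 2.0 * rng.random()) for t in targets]
```

Memory then scales with the number of cells rather than the square of it, at the run-time cost of recomputing the triples on each spike, which is exactly the space/time trade-off the abstract reports.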
Just in time connectivity for large spiking networks
Lytton, William W.; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L
2008-01-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K
2015-01-01
Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics, based on a low-dimensional mapping of ensembles of single-cycle simulations that map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions, selected according to a sparse-grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similarly to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel that can be used to mimic the dynamical behavior of corresponding high-dimensional simulations.
Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space will be discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
Simulation analysis of photometric data for attitude estimation of unresolved space objects
NASA Astrophysics Data System (ADS)
Du, Xiaoping; Gou, Ruixin; Liu, Hao; Hu, Heng; Wang, Yang
2017-10-01
The acquisition of attitude information for unresolved space objects, such as micro/nano satellites and GEO objects observed with ground-based optical telescopes, is a challenge for space surveillance. In this paper, a method is proposed to estimate the attitude state of a space object from simulation analysis of photometric data in different attitude states. The object shape model was established and the parameters of the BRDF model were determined; then the space object photometric model was established. Furthermore, the photometric data of space objects in different states are analyzed by simulation and the regular characteristics of the photometric curves are summarized. The simulation results show that the photometric characteristics are useful for attitude inversion in a unique way. Thus, a new idea is provided for space object identification.
NASA Astrophysics Data System (ADS)
Li, F.; Nie, Z.; Wu, Y. P.; Guo, B.; Zhang, X. H.; Huang, S.; Zhang, J.; Cheng, Z.; Ma, Y.; Fang, Y.; Zhang, C. J.; Wan, Y.; Xu, X. L.; Hua, J. F.; Pai, C. H.; Lu, W.; Mori, W. B.
2018-04-01
We report transverse phase space diagnostics for electron beams generated through ionization injection in a laser-plasma accelerator. Single-shot measurements of both the ultimate emittance and the Twiss parameters are achieved by means of a permanent magnetic quadrupole. Beams with emittance at the μm rad level are obtained in a typical ionization injection scheme, and the dependence on nitrogen concentration and charge density is studied experimentally and confirmed by simulations. A key feature of the transverse phase space, matched beams with Twiss parameter αT ≃ 0, is identified from the measurement. Numerical simulations that are in qualitative agreement with the experimental results reveal that sufficient phase mixing, induced by an overlong injection length, leads to the matched phase space distribution.
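The quoted quantities can be computed from sampled phase-space coordinates with the standard rms definitions, eps = sqrt(<x^2><x'^2> - <x x'>^2) and alpha = -<x x'> / eps; this is a generic sketch of the textbook formulas, not the authors' analysis code.

```python
import math

def twiss_from_samples(xs, xps):
    """RMS emittance and Twiss alpha from phase-space samples (x, x').
    A matched beam shows alpha close to zero."""
    n = len(xs)
    mx = sum(xs) / n
    mxp = sum(xps) / n
    xx = sum((x - mx) ** 2 for x in xs) / n          # <x^2>
    pp = sum((p - mxp) ** 2 for p in xps) / n        # <x'^2>
    xp = sum((x - mx) * (p - mxp)
             for x, p in zip(xs, xps)) / n           # <x x'>
    eps = math.sqrt(max(xx * pp - xp * xp, 0.0))
    alpha = -xp / eps if eps > 0 else 0.0
    return eps, alpha
```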
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, F.; Nie, Z.; Wu, Y. P.
Li, F.; Nie, Z.; Wu, Y. P.; ...
2018-02-22
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that also preserve the corresponding physics being simulated in the model. In order to effectively simulate real-world processes, the model's output data must be close to the observed measurements. To achieve this optimal simulation, input parameters are tuned until the objective function, the error between the simulation model outputs and the observed measurements, is minimized. We developed an auxiliary package which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for the heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced DAKOTA methods, such as genetic optimization and mesh-based convergence, to find the optimal input parameters. We were able to recover six initially unknown thermal conductivity parameters to within 2% of their known values.
Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a `black box' scientific model more efficiently than using Dakota alone.
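The calibration loop this abstract describes, tuning a parameter until the squared error between model output and observations is minimized, can be sketched as follows. The toy linear "heat-flow model", the parameter name `k`, and all numbers are hypothetical stand-ins, not the authors' package; golden-section search stands in for the gradient-based optimizer, which the abstract notes is adequate when the objective has a single minimum.

```python
# Minimal sketch of model calibration by objective-function minimization.
# All model details here are hypothetical stand-ins.

def model(k, depths):
    # Toy stand-in for the permafrost heat-flow model: temperature ~ k * depth.
    return [k * d for d in depths]

def objective(k, depths, observed):
    # Sum of squared errors between simulated and observed values.
    return sum((m - o) ** 2 for m, o in zip(model(k, depths), observed))

def calibrate(depths, observed, k_lo=0.0, k_hi=5.0, iters=60):
    # Golden-section search: reliable when the objective has one minimum.
    phi = (5 ** 0.5 - 1) / 2
    a, b = k_lo, k_hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if objective(c, depths, observed) < objective(d, depths, observed):
            b = d  # minimum lies in [a, d]
        else:
            a = c  # minimum lies in [c, b]
    return (a + b) / 2

depths = [0.5, 1.0, 2.0, 4.0]
true_k = 1.8
observed = [true_k * d for d in depths]   # synthetic "measurements"
k_est = calibrate(depths, observed)
print(round(k_est, 3))  # → 1.8, recovering the hidden conductivity
```

The same loop generalizes to several parameters by nesting or by switching to a multivariate optimizer when the objective is known to be unimodal.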
Simulation study of interactions of Space Shuttle-generated electron beams with ambient plasmas
NASA Technical Reports Server (NTRS)
Lin, Chin S.
1992-01-01
This report summarizes results obtained through the support of NASA Grant NAGW-1936. The objective of this work was to conduct large-scale simulations of electron beams injected into space. The topics covered include the following: (1) simulation of the radial expansion of an injected electron beam; (2) simulations of active injections of electron beams; (3) a parameter study of electron beam injection into an ionospheric plasma; and (4) magnetosheath-ionospheric plasma interactions in the cusp.
Implementation of Hydrodynamic Simulation Code in Shock Experiment Design for Alkali Metals
NASA Astrophysics Data System (ADS)
Coleman, A. L.; Briggs, R.; Gorman, M. G.; Ali, S.; Lazicki, A.; Swift, D. C.; Stubley, P. G.; McBride, E. E.; Collins, G.; Wark, J. S.; McMahon, M. I.
2017-10-01
Shock compression techniques enable the investigation of extreme P-T states. In order to probe off-Hugoniot regions of P-T space, target makeup and laser pulse parameters must be carefully designed. HYADES is a hydrodynamic simulation code which has been successfully utilised to simulate shock compression events and refine the experimental parameters required in order to explore new P-T states in alkali metals. Here we describe simulations and experiments on potassium, along with the techniques required to access off-Hugoniot states.
Creating Simulated Microgravity Patient Models
NASA Technical Reports Server (NTRS)
Hurst, Victor; Doerr, Harold K.; Bacal, Kira
2004-01-01
The Medical Operational Support Team (MOST) has been tasked by the Space and Life Sciences Directorate (SLSD) at the NASA Johnson Space Center (JSC) to integrate medical simulation into 1) medical training for ground and flight crews and into 2) evaluations of medical procedures and equipment for the International Space Station (ISS). To do this, the MOST requires patient models that represent the physiological changes observed during spaceflight. Despite the presence of physiological data collected during spaceflight, there is no defined set of parameters that illustrate or mimic a 'space normal' patient. Methods: The MOST culled space-relevant medical literature and data from clinical studies performed in microgravity environments. The areas of focus for data collection were in the fields of cardiovascular, respiratory and renal physiology. Results: The MOST developed evidence-based patient models that mimic the physiology believed to be induced by human exposure to a microgravity environment. These models have been integrated into space-relevant scenarios using a human patient simulator and ISS medical resources. Discussion: Despite the lack of a set of physiological parameters representing 'space normal,' the MOST developed space-relevant patient models that mimic microgravity-induced changes in terrestrial physiology. These models are used in clinical scenarios that will medically train flight surgeons, biomedical flight controllers (biomedical engineers; BME) and, eventually, astronaut-crew medical officers (CMO).
NASA Astrophysics Data System (ADS)
Kang, Jai Young
2005-12-01
The objectives of this study are to perform extensive analysis of the internal mass motion over a wider parameter space and to provide suitable design criteria with broader applicability for this class of spinning space vehicles. In order to examine the stability criterion determined by a perturbation method, numerical simulations are performed and compared at various parameter points. In this paper, the Ince-Strutt diagram for determining the stable and unstable regions of the internal mass motion of a spinning, thrusting space vehicle in terms of design parameters is obtained by an analytical method. Phase trajectories of the motion are also obtained for various parameter values, and their characteristics are compared.
Space-flight simulations of calcium metabolism using a mathematical model of calcium regulation
NASA Technical Reports Server (NTRS)
Brand, S. N.
1985-01-01
The results of a series of simulation studies of the calcium metabolic changes recorded during human exposure to bed rest and space flight are presented. Space flight and bed rest data demonstrate losses of total body calcium during exposure to hypogravic environments. These losses are evidenced by higher-than-normal rates of urine calcium excretion and by negative calcium balances. In addition, intestinal absorption rates and bone mineral content are assumed to decrease. The bed rest and space flight simulations were executed on a mathematical model of the calcium metabolic system. The purpose of the simulations is to theoretically test hypotheses and predict system responses occurring during a given experimental stress, in this case hypogravity. This is done through the comparison of simulation and experimental data and through the analysis of model structure and system responses. The model reliably simulates the responses of selected bed rest and space flight parameters. When experimental data are available, the simulated skeletal responses and the regulatory factors involved in those responses agree with space flight data collected on rodents. In addition, areas within the model that need improvement are identified.
NASA Astrophysics Data System (ADS)
Nelson, Hunter Barton
A simplified second-order transfer-function actuator model, used in most flight dynamics applications, cannot easily capture the effects of different actuator parameters. The present work integrates a nonlinear actuator model into a nonlinear state-space rotorcraft model to determine the effect of actuator parameters on key flight dynamics. The completed actuator model was integrated with swashplate kinematics, and step responses were generated over a range of key hydraulic parameters. The actuator-swashplate system was then introduced into a nonlinear state-space rotorcraft simulation, where flight dynamics quantities such as bandwidth and phase delay were analyzed. Frequency sweeps were simulated for unique actuator configurations using the coupled nonlinear actuator-rotorcraft system. The software package CIFER was used for system identification, and the results were compared directly to the linearized models. As the actuator became rate saturated, the effects on bandwidth and phase delay were apparent in the predicted handling qualities specifications.
Modeling the long-term evolution of space debris
Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.
2017-03-07
A space object modeling system that models the evolution of space debris is provided. The modeling system simulates interaction of space objects at simulation times throughout a simulation period. The modeling system includes a propagator that calculates the position of each object at each simulation time based on orbital parameters. The modeling system also includes a collision detector that, for each pair of objects at each simulation time, performs a collision analysis. When the distance between objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between the pair of objects based on a curve fitting to identify a time of closest approach at the simulation times and calculating the position of the objects at the identified time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
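The closest-approach refinement described above can be sketched as follows; the toy straight-line relative motion and sample times are illustrative assumptions, not the patented propagator. For straight-line relative motion the squared distance is exactly quadratic in time, so a parabola fitted through three coarse-grid samples recovers the time of closest approach analytically.

```python
# Sketch of refining a time of closest approach by curve fitting:
# sample the squared inter-object distance on a coarse simulation grid,
# fit a parabola through three samples, and solve for its vertex.
# The relative motion below is a hypothetical stand-in.

def dist_sq(t):
    # Toy relative motion of two objects; closest approach is at t = 2.0.
    dx, dy = t - 2.0, 0.5 * (t - 2.0)
    return dx * dx + dy * dy + 1.0

def time_of_closest_approach(t0, t1, t2):
    # Lagrange fit of d^2(t) = a*t^2 + b*t + c; the minimum is at -b / (2a).
    d0, d1, d2 = dist_sq(t0), dist_sq(t1), dist_sq(t2)
    denom = (t0 - t1) * (t0 - t2) * (t1 - t2)
    a = (t2 * (d1 - d0) + t1 * (d0 - d2) + t0 * (d2 - d1)) / denom
    b = (t2 ** 2 * (d0 - d1) + t1 ** 2 * (d2 - d0) + t0 ** 2 * (d1 - d2)) / denom
    return -b / (2 * a)

# Coarse simulation times merely bracket the minimum; the fit pinpoints it.
t_min = time_of_closest_approach(1.0, 2.5, 4.0)
print(round(t_min, 6))  # → 2.0
```

In a full debris model this refinement would run only for object pairs that already satisfy the conjunction criterion, keeping the pairwise collision analysis cheap.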
Development of space simulation / net-laboratory system
NASA Astrophysics Data System (ADS)
Usui, H.; Matsumoto, H.; Ogino, T.; Fujimoto, M.; Omura, Y.; Okada, M.; Ueda, H. O.; Murata, T.; Kamide, Y.; Shinagawa, H.; Watanabe, S.; Machida, S.; Hada, T.
A research project for the development of a space simulation / net-laboratory system was approved by the Japan Science and Technology Corporation (JST) in the category of Research and Development for Applying Advanced Computational Science and Technology (ACT-JST) in 2000. This research project, which continues for three years, is a collaboration with an astrophysical simulation group as well as other space simulation groups which use MHD and hybrid models. In this project, we develop a prototype of a unique simulation system which enables us to perform simulation runs by providing or selecting plasma parameters through a Web-based interface on the Internet. We are also developing an on-line database system for space simulation from which we will be able to search and extract various information, such as simulation methods and programs, manuals, and typical simulation results in graphic or ASCII format. This unique system will help simulation beginners to start simulation study without much difficulty or effort, and contribute to the promotion of simulation studies in the STP field. In this presentation, we will report the overview and current status of the project.
Ascent trajectory dispersion analysis for WTR heads-up space shuttle trajectory
NASA Technical Reports Server (NTRS)
1986-01-01
The results of a Space Transportation System ascent trajectory dispersion analysis are discussed. The purpose is to provide critical trajectory parameter values for assessing the Space Shuttle in a heads-up configuration launched from the Western Test Range (WTR). This analysis was conducted using a trajectory profile based on a launch from the WTR in December. The analysis consisted of the following steps: (1) nominal trajectories were simulated under the conditions specified by baseline reference mission guidelines; (2) dispersion trajectories were simulated using predetermined parametric variations; (3) requirements for a system-related composite trajectory were determined by a root-sum-square (RSS) analysis of the positive deviations between values of the aerodynamic heating indicator (AHI) generated by the dispersion and nominal trajectories; (4) using the RSS assessment as a guideline, the system-related composite trajectory was simulated by combinations of dispersion parameters which represented major contributors; (5) an assessment of environmental perturbations via an RSS analysis was made by the combination of plus or minus 2 sigma atmospheric density variations and 95% directional design wind dispersions; (6) maximum aerodynamic heating trajectories were simulated by variation of dispersion parameters which would emulate the summation of the system-related RSS and environmental RSS values of AHI. The maximum aerodynamic heating trajectories were simulated consistent with the directional winds used in the environmental analysis.
A hybrid method of estimating pulsating flow parameters in the space-time domain
NASA Astrophysics Data System (ADS)
Pałczyński, Tomasz
2017-05-01
This paper presents a method for estimating pulsating flow parameters in partially open pipes, such as pipelines, internal combustion engine inlets, exhaust pipes, and piston compressors. The procedure is based on the method of characteristics and employs a combination of measurements and simulations. An experimental test rig is described, which enables pressure, temperature, and mass flow rate to be measured within a defined cross section. The second part of the paper discusses the main assumptions of a simulation algorithm developed in the Matlab/Simulink environment. The simulation results are shown as 3D plots in the space-time domain and compared with proposed models of phenomena relating to wave propagation, boundary conditions, acoustics, and fluid mechanics. The simulation results are finally compared with acoustic phenomena, with an emphasis on the identification of resonant frequencies.
Population Synthesis of Radio & Gamma-Ray Millisecond Pulsars
NASA Astrophysics Data System (ADS)
Frederick, Sara; Gonthier, P. L.; Harding, A. K.
2014-01-01
In recent years, the number of known gamma-ray millisecond pulsars (MSPs) in the Galactic disk has risen substantially thanks to confirmed detections by the Fermi Gamma-ray Space Telescope (Fermi). We have developed a new population synthesis of gamma-ray and radio MSPs in the Galaxy which uses Markov chain Monte Carlo techniques to explore the large and small worlds of the model parameter space and allows for comparisons of the simulated and detected MSP distributions. The simulation employs empirical radio and gamma-ray luminosity models that are dependent upon the pulsar period and period derivative, with freely varying exponents. Parameters associated with the birth distributions are also free to vary. The computer code adjusts the magnitudes of the model luminosities to reproduce the number of MSPs detected by a group of ten radio surveys, thus normalizing the simulation and predicting the MSP birth rates in the Galaxy. Computing many Markov chains leads to preferred sets of model parameters that are further explored through two statistical methods. Marginalized plots define confidence regions in the model parameter space using maximum likelihood methods, and a secondary set of confidence regions is determined in parallel using Kuiper statistics calculated from comparisons of cumulative distributions. These two techniques provide feedback to affirm the results and to check for consistency. Radio flux and dispersion measure constraints have been imposed on the simulated gamma-ray distributions in order to reproduce realistic detection conditions. The simulated and detected distributions agree well for both sets of radio and gamma-ray pulsar characteristics, as evidenced by our various comparisons.
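The Markov chain Monte Carlo exploration this abstract relies on can be illustrated with a minimal random-walk Metropolis sampler. The one-dimensional Gaussian "likelihood", the parameter `theta`, and all numbers below are hypothetical stand-ins for the radio-survey comparison, not the authors' code.

```python
import math
import random

# Random-walk Metropolis sketch: propose a step, accept it with
# probability min(1, L(proposal)/L(current)), and record the chain.

def log_likelihood(theta):
    # Toy target: a luminosity-model exponent that fits "data" best at
    # theta = 1.2 with width 0.3 (hypothetical numbers).
    return -0.5 * ((theta - 1.2) / 0.3) ** 2

def metropolis(n_steps, step=0.2, seed=1):
    random.seed(seed)
    theta, chain = 0.0, []
    for _ in range(n_steps):
        proposal = theta + random.gauss(0.0, step)
        delta = log_likelihood(proposal) - log_likelihood(theta)
        # Always accept uphill moves; accept downhill moves with prob e^delta.
        if delta >= 0 or random.random() < math.exp(delta):
            theta = proposal
        chain.append(theta)
    return chain

chain = metropolis(20000)
burn = chain[5000:]                 # discard burn-in before summarizing
mean = sum(burn) / len(burn)
print(round(mean, 2))  # settles near the likelihood peak at 1.2
```

A population synthesis replaces `log_likelihood` with a full simulation-versus-survey comparison, and the marginalized chain histograms become the confidence regions described above.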
NASA Technical Reports Server (NTRS)
Jones, L. D.
1979-01-01
The Space Environment Test Division Post-Test Data Reduction Program processes data from test history tapes generated on the Flexible Data System in the Space Environment Simulation Laboratory at the National Aeronautics and Space Administration/Lyndon B. Johnson Space Center. The program reads the tape's data base records to retrieve the item directory conversion file, the item capture file, and the process link file to determine the active parameters. The desired parameter names are read in from lead cards, after which the periodic data records are read to determine parameter data level changes. The data are treated as compressed rather than at the full sample rate. Tabulations and/or a tape for generating plots may be output.
Siksik, May; Krishnamurthy, Vikram
2017-09-01
This paper proposes a multi-dielectric Brownian dynamics simulation framework for design-space-exploration (DSE) studies of ion-channel permeation. The goal of such DSE studies is to estimate the channel modeling-parameters that minimize the mean-squared error between the simulated and expected "permeation characteristics." To address this computational challenge, we use a methodology based on statistical inference that utilizes the knowledge of channel structure to prune the design space. We demonstrate the proposed framework and DSE methodology using a case study based on the KcsA ion channel, in which the design space is successfully reduced from a 6-D space to a 2-D space. Our results show that the channel dielectric map computed using the framework matches with that computed directly using molecular dynamics with an error of 7%. Finally, the scalability and resolution of the model used are explored, and it is shown that the memory requirements needed for DSE remain constant as the number of parameters (degree of heterogeneity) increases.
NASA Astrophysics Data System (ADS)
Cihangir Çamur, Kübra; Roshani, Mehdi; Pirouzi, Sania
2017-10-01
In studying complex urban issues, simulation and modelling of public space use helps considerably in determining and measuring factors such as urban safety. In this study, Depthmap software was used to determine the parameters of the spatial layout techniques, and the Statistical Package for the Social Sciences (SPSS) was used to analyse and evaluate the views of pedestrians on public safety. Connectivity, integration, and depth of the area in the Tarbiat city blocks were measured using the space syntax method, and these parameters are presented as graphical and mathematical data. Combining the results of the questionnaire and statistical analysis with the results of the spatial arrangement technique identifies the spaces that are appropriate or inappropriate for pedestrians. This method provides a useful and effective instrument for decision makers, planners, urban designers, and programmers to evaluate public spaces in the city. Prior to physical modification of urban public spaces, space syntax simulates pedestrian safety and can be used as an analytical tool by city management. Finally, with regard to the modelled parameters and the identification of different characteristics of the case, this study presents strategies and policies to increase the safety of pedestrians in Tarbiat, Tabriz.
NASA Technical Reports Server (NTRS)
Stevens, N. J.
1979-01-01
Cases where the charged-particle environment acts on the spacecraft (e.g., spacecraft charging phenomena) and cases where a system on the spacecraft causes the interaction (e.g., high-voltage space power systems) are considered. Both categories were studied in ground simulation facilities to understand the processes involved and to measure the pertinent parameters. Computer simulations are based on the NASA Charging Analyzer Program (NASCAP) code. Analytical models are developed in this code and verified against the experimental data. Extrapolations from the small test samples to space conditions are made with this code. Typical results from laboratory and computer simulations are presented for both types of interactions. Extrapolations from these simulations to performance in space environments are discussed.
Space-based infrared sensors of space target imaging effect analysis
NASA Astrophysics Data System (ADS)
Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang
2018-02-01
The target identification problem is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source imaging model for space-based infrared sensors observing ballistic targets above the planet's atmosphere. It then simulates the infrared imaging of exo-atmospheric ballistic targets from two aspects, the space-based sensor's camera parameters and the target characteristics, and analyzes the effects of camera line-of-sight jitter, camera system noise, and different wavebands on the imaging of the target.
Transport regimes spanning magnetization-coupling phase space
NASA Astrophysics Data System (ADS)
Baalrud, Scott D.; Daligault, Jérôme
2017-10-01
The manner in which transport properties vary over the entire parameter-space of coupling and magnetization strength is explored. Four regimes are identified based on the relative size of the gyroradius compared to other fundamental length scales: the collision mean free path, Debye length, distance of closest approach, and interparticle spacing. Molecular dynamics simulations of self-diffusion and temperature anisotropy relaxation spanning the parameter space are found to agree well with the predicted boundaries. Comparison with existing theories reveals regimes where they succeed, where they fail, and where no theory has yet been developed.
Systematic simulations of modified gravity: chameleon models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brax, Philippe; Davis, Anne-Christine; Li, Baojiu
2013-04-01
In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.
Numerical simulations of merging black holes for gravitational-wave astronomy
NASA Astrophysics Data System (ADS)
Lovelace, Geoffrey
2014-03-01
Gravitational waves from merging binary black holes (BBHs) are among the most promising sources for current and future gravitational-wave detectors. Accurate models of these waves are necessary to maximize the number of detections and our knowledge of the waves' sources; near the time of merger, the waves can only be computed using numerical-relativity simulations. For optimal application to gravitational-wave astronomy, BBH simulations must achieve sufficient accuracy and length, and all relevant regions of the BBH parameter space must be covered. While great progress toward these goals has been made in the almost nine years since BBH simulations became possible, considerable challenges remain. In this talk, I will discuss current efforts to meet these challenges, and I will present recent BBH simulations produced using the Spectral Einstein Code, including a catalog of publicly available gravitational waveforms [black-holes.org/waveforms]. I will also discuss simulations of merging black holes with high mass ratios and with spins nearly as fast as possible, the most challenging regions of the BBH parameter space.
NASA Astrophysics Data System (ADS)
Kern, Nicholas S.; Liu, Adrian; Parsons, Aaron R.; Mesinger, Andrei; Greig, Bradley
2017-10-01
Current and upcoming radio interferometric experiments are aiming to make a statistical characterization of the high-redshift 21 cm fluctuation signal spanning the hydrogen reionization and X-ray heating epochs of the universe. However, connecting 21 cm statistics to the underlying physical parameters is complicated by the theoretical challenge of modeling the relevant physics at computational speeds quick enough to enable exploration of the high-dimensional and weakly constrained parameter space. In this work, we use machine learning algorithms to build a fast emulator that can accurately mimic an expensive simulation of the 21 cm signal across a wide parameter space. We embed our emulator within a Markov Chain Monte Carlo framework in order to perform Bayesian parameter constraints over a large number of model parameters, including those that govern the Epoch of Reionization, the Epoch of X-ray Heating, and cosmology. As a worked example, we use our emulator to present an updated parameter constraint forecast for the Hydrogen Epoch of Reionization Array experiment, showing that its characterization of a fiducial 21 cm power spectrum will considerably narrow the allowed parameter space of reionization and heating parameters, and could help strengthen Planck's constraints on σ8. We provide both our generalized emulator code and its implementation specifically for 21 cm parameter constraints as publicly available software.
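The emulator idea above, replacing an expensive simulation with a cheap surrogate trained on a sparse set of evaluations, can be sketched in one dimension. The "expensive" function and the grid below are toy assumptions, and linear interpolation stands in for the machine-learning regressor the authors actually use.

```python
import bisect
import math

# Emulator sketch: evaluate the expensive simulation once on a sparse grid,
# then answer the many queries (e.g., inside an MCMC loop) by interpolation.

def expensive_simulation(x):
    # Hypothetical stand-in for the costly 21 cm signal computation.
    return math.sin(3 * x) + 0.5 * x

# "Training": 21 expensive evaluations on [0, 1].
grid = [i / 20 for i in range(21)]
values = [expensive_simulation(x) for x in grid]

def emulator(x):
    # Linear interpolation between the two nearest grid points.
    i = min(max(bisect.bisect(grid, x), 1), len(grid) - 1)
    x0, x1 = grid[i - 1], grid[i]
    y0, y1 = values[i - 1], values[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# The cheap emulator tracks the expensive simulation at unseen inputs.
err = max(abs(emulator(x) - expensive_simulation(x))
          for x in [0.013, 0.27, 0.555, 0.91])
print(err < 0.01)  # → True
```

In the high-dimensional setting the grid becomes a designed set of training simulations and the interpolator a learned regressor, but the division of labor, expensive evaluations up front and cheap queries during sampling, is the same.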
Koski, Jason P; Riggleman, Robert A
2017-04-28
Block copolymers, due to their ability to self-assemble into periodic structures with long range order, are appealing candidates to control the ordering of functionalized nanoparticles where it is well-accepted that the spatial distribution of nanoparticles in a polymer matrix dictates the resulting material properties. The large parameter space associated with block copolymer nanocomposites makes theory and simulation tools appealing to guide experiments and effectively isolate parameters of interest. We demonstrate a method for performing field-theoretic simulations in a constant volume-constant interfacial tension ensemble (nVγT) that enables the determination of the equilibrium properties of block copolymer nanocomposites, including when the composites are placed under tensile or compressive loads. Our approach is compatible with the complex Langevin simulation framework, which allows us to go beyond the mean-field approximation. We validate our approach by comparing our nVγT approach with free energy calculations to determine the ideal domain spacing and modulus of a symmetric block copolymer melt. We analyze the effect of numerical and thermodynamic parameters on the efficiency of the nVγT ensemble and subsequently use our method to investigate the ideal domain spacing, modulus, and nanoparticle distribution of a lamellar forming block copolymer nanocomposite. We find that the nanoparticle distribution is directly linked to the resultant domain spacing and is dependent on polymer chain density, nanoparticle size, and nanoparticle chemistry. Furthermore, placing the system under tension or compression can qualitatively alter the nanoparticle distribution within the block copolymer.
Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates
NASA Astrophysics Data System (ADS)
Ashton, G.; Prix, R.
2018-05-01
Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.
NASA Astrophysics Data System (ADS)
Carrico, T.; Langster, T.; Carrico, J.; Alfano, S.; Loucks, M.; Vallado, D.
The authors present several spacecraft rendezvous and close-proximity maneuvering techniques modeled with a high-precision numerical integrator using full force models and closed-loop control, with a fuzzy-logic intelligent controller commanding the engines. The authors document and compare the maneuvers, fuel use, and other parameters. This paper presents an innovative application of an existing capability to design, simulate, and analyze proximity maneuvers, already in use for operational satellites performing other maneuvers. The system has been extended to demonstrate the capability to develop closed-loop control laws to maneuver one spacecraft in close proximity to another, including stand-off, docking, lunar landing, and other operations applicable to space situational awareness, space-based surveillance, and operational satellite modeling. The fully integrated end-to-end trajectory ephemerides are available from the authors in electronic ASCII text by request. The benefits of this system include: a realistic physics-based simulation for the development and validation of control laws; a collaborative engineering environment for the design, development, and tuning of spacecraft control-law parameters, the sizing of actuators (i.e., rocket engines), and sensor suite selection; an accurate simulation and visualization to communicate the complexity, criticality, and risk of spacecraft operations; a precise mathematical environment for research and development of future spacecraft maneuvering engineering tasks, operational planning, and forensic analysis; and a closed-loop, knowledge-based control example for proximity operations. This proximity-operations modeling and simulation environment will provide a valuable adjunct to programs in military space control, space situational awareness, and civil space exploration engineering and decision-making processes.
Evolutionary Design and Simulation of a Tube Crawling Inspection Robot
NASA Technical Reports Server (NTRS)
Craft, Michael; Howsman, Tom; ONeil, Daniel; Howell, Joe T. (Technical Monitor)
2002-01-01
The Space Robotics Assembly Team Simulation (SpaceRATS) is an expansive concept that will hopefully lead to a space flight demonstration of a robotic team cooperatively assembling a system from its constituent parts. A primary objective of the SpaceRATS project is to develop a generalized evolutionary design approach for multiple classes of robots. The portion of the overall SpaceRATS program associated with the evolutionary design and simulation of an inspection robot's morphology is the subject of this paper. The vast majority of this effort has concentrated on the use and modification of Darwin2K, a robotic design and simulation software package, to analyze the design of a tube-crawling robot. This robot is designed to carry out inspection duties in relatively inaccessible locations within a liquid rocket engine similar to the SSME. A preliminary design of the tube-crawler robot was completed, and the mechanical dynamics of the system were simulated. An evolutionary approach to optimizing a few parameters of the system was utilized, resulting in a more optimal design.
Parametric Analysis of a Hover Test Vehicle using Advanced Test Generation and Data Analysis
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen; Schumann, Johann; Menzies, Tim; Barrett, Tony
2009-01-01
Large complex aerospace systems are generally validated in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. This is due to the large parameter space, and complex, highly coupled nonlinear nature of the different systems that contribute to the performance of the aerospace system. We have addressed the factors deterring such an analysis by applying a combination of technologies to the area of flight envelope assessment. We utilize n-factor (2,3) combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. The data generated is automatically analyzed through a combination of unsupervised learning using a Bayesian multivariate clustering technique (AutoBayes) and supervised learning of critical parameter ranges using the machine-learning tool TAR3, a treatment learner. Covariance analysis with scatter plots and likelihood contours is used to visualize correlations between simulation parameters and simulation results, a task that requires tool support, especially for large and complex models. We present results of simulation experiments for a cold-gas-powered hover test vehicle.
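The n-factor combinatorial idea above can be sketched with a greedy pairwise (n = 2) generator. This is a generic illustration, not the tool used in the study; the three parameters and their three levels are hypothetical:

```python
from itertools import combinations, product

def all_pairs(levels):
    """Every (parameter pair, value pair) a 2-factor suite must cover."""
    pairs = set()
    for i, j in combinations(range(len(levels)), 2):
        for vi in levels[i]:
            for vj in levels[j]:
                pairs.add((i, j, vi, vj))
    return pairs

def pairwise_suite(levels):
    """Greedy 2-factor covering suite: repeatedly pick the candidate case
    that covers the most still-uncovered value pairs."""
    uncovered = all_pairs(levels)
    candidates = list(product(*levels))
    suite = []
    while uncovered:
        def gain(case):
            return sum((i, j, case[i], case[j]) in uncovered
                       for i, j in combinations(range(len(case)), 2))
        best = max(candidates, key=gain)
        suite.append(best)
        for i, j in combinations(range(len(best)), 2):
            uncovered.discard((i, j, best[i], best[j]))
    return suite

# Three hypothetical 3-level parameters (e.g. thrust bias, sensor noise,
# controller gain); names and levels are illustrative only.
levels = [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
suite = pairwise_suite(levels)
print(len(suite), "cases instead of", 3 ** 3)
```

The suite covers every pairwise interaction with far fewer cases than the full factorial, which is the point of the n-factor approach.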
Impact of Ice Morphology on Design Space of Pharmaceutical Freeze-Drying.
Goshima, Hiroshika; Do, Gabsoo; Nakagawa, Kyuya
2016-06-01
It is known that the sublimation kinetics of a product during freeze-drying are affected by its internal ice crystal microstructure. This article demonstrates the impact of the ice morphologies of a frozen formulation in a vial on the design space for the primary drying of a pharmaceutical freeze-drying process. Cross-sectional images of frozen sucrose-bovine serum albumin aqueous solutions were optically observed and digital pictures were acquired. Binary images were obtained from the optical data to extract the geometrical parameters (i.e., ice crystal size and tortuosity) that relate to the mass-transfer resistance of water vapor during the primary drying step. A mathematical model was used to simulate the primary drying kinetics and provided the design space for the process. The simulation results predicted that the geometrical parameters of frozen solutions significantly affect the design space, with large and less tortuous ice morphologies resulting in wide design spaces and vice versa. The optimal applicable drying conditions are influenced by the ice morphologies. Therefore, owing to the spatial distributions of the geometrical parameters of a product, the boundary curves of the design space are variable and could be tuned by controlling the ice morphologies.
An integrated system for dynamic control of auditory perspective in a multichannel sound field
NASA Astrophysics Data System (ADS)
Corey, Jason Andrew
An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. 
All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.
Locating and defining underground goaf caused by coal mining from space-borne SAR interferometry
NASA Astrophysics Data System (ADS)
Yang, Zefa; Li, Zhiwei; Zhu, Jianjun; Yi, Huiwei; Feng, Guangcai; Hu, Jun; Wu, Lixin; Preusse, Alex; Wang, Yunjia; Papst, Markus
2018-01-01
It is crucial to locate underground goafs (i.e., mined-out areas) resulting from coal mining and define their spatial dimensions for effectively controlling the induced damage and geohazards. Traditional geophysical techniques for locating and defining underground goafs, however, are ground-based, labour-consuming and costly. This paper presents a novel space-based method for locating and defining the underground goaf caused by coal extraction using Interferometric Synthetic Aperture Radar (InSAR) techniques. As the coal mining-induced goaf is often a cuboid-shaped void and eight critical geometric parameters (i.e., length, width, height, inclined angle, azimuth angle, mining depth, and two central geodetic coordinates) are capable of locating and defining this underground space, the proposed method reduces to determining the eight geometric parameters from InSAR observations. To this end, it first applies the Probability Integral Method (PIM), a widely used model for mining-induced deformation prediction, to construct a functional relationship between the eight geometric parameters and the InSAR-derived surface deformation. Next, the method estimates these geometric parameters from the InSAR-derived deformation observations using a hybrid simulated annealing and genetic algorithm. Finally, the proposed method was tested with both simulated and two real data sets. The results demonstrate that the estimated geometric parameters of the goafs are accurate and consistent overall, with averaged relative errors of approximately 2.1% and 8.1% being observed for the simulated and the real data experiments, respectively. Owing to the advantages of the InSAR observations, the proposed method provides a non-contact, convenient and practical method for economically locating and defining underground goafs in a large spatial area from space.
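The inversion step above can be illustrated with a plain simulated-annealing fit. The paper uses a hybrid simulated-annealing/genetic algorithm and the full eight-parameter PIM model; the two-parameter Gaussian trough below is only a toy stand-in for the forward model:

```python
import math
import random

def forward(params, xs):
    """Toy stand-in for the PIM forward model: a Gaussian subsidence trough
    of depth H centred at c (the real model has eight parameters)."""
    H, c = params
    return [H * math.exp(-0.5 * ((x - c) / 50.0) ** 2) for x in xs]

def misfit(params, xs, obs):
    return sum((m - o) ** 2 for m, o in zip(forward(params, xs), obs))

def anneal(xs, obs, start, steps=20000, t0=1.0, seed=1):
    """Basic simulated annealing over the (H, c) parameter space."""
    rng = random.Random(seed)
    cur, cur_e = start, misfit(start, xs, obs)
    best, best_e = cur, cur_e
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9                 # linear cooling
        cand = (cur[0] + rng.gauss(0.0, 0.5), cur[1] + rng.gauss(0.0, 2.0))
        e = misfit(cand, xs, obs)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if e < cur_e or rng.random() < math.exp((cur_e - e) / t):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand, e
    return best

xs = list(range(-200, 201, 10))
obs = forward((1.8, 30.0), xs)        # synthetic "observed" deformation
est = anneal(xs, obs, start=(1.0, 0.0))
print(est)
```

With noiseless synthetic data the annealer recovers the generating parameters; real InSAR observations would add noise and the additional six PIM parameters.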
NASA Astrophysics Data System (ADS)
Scolini, C.; Verbeke, C.; Gopalswamy, N.; Wijsen, N.; Poedts, S.; Mierla, M.; Rodriguez, L.; Pomoell, J.; Cramer, W. D.; Raeder, J.
2017-12-01
Coronal Mass Ejections (CMEs) and their interplanetary counterparts are considered to be the major space weather drivers. An accurate modelling of their onset and propagation up to 1 AU represents a key issue for more reliable space weather forecasts, and predictions about their actual geo-effectiveness can only be performed by coupling global heliospheric models to 3D models describing the terrestrial environment, e.g. magnetospheric and ionospheric codes in the first place. In this work we perform a Sun-to-Earth comprehensive analysis of the July 12, 2012 CME with the aim of testing the space weather predictive capabilities of the newly developed EUHFORIA heliospheric model integrated with the Gibson-Low (GL) flux rope model. In order to achieve this goal, we make use of a model chain approach by using EUHFORIA outputs at Earth as input parameters for the OpenGGCM magnetospheric model. We first reconstruct the CME kinematic parameters by means of single- and multi-spacecraft reconstruction methods based on coronagraphic and heliospheric CME observations. The magnetic field-related parameters of the flux rope are estimated based on imaging observations of the photospheric and low coronal source regions of the eruption. We then simulate the event with EUHFORIA, testing the effect of the different CME kinematic input parameters on simulation results at L1. We compare simulation outputs with in-situ measurements of the Interplanetary CME and we use them as input for the OpenGGCM model, so as to investigate the magnetospheric response to solar perturbations. From simulation outputs we extract some global geomagnetic activity indices and compare them with actual data records and with results obtained by the use of empirical relations. Finally, we discuss the forecasting capabilities of this kind of approach and its future improvements.
Transport regimes spanning magnetization-coupling phase space
Baalrud, Scott D.; Daligault, Jérôme
2017-10-06
The manner in which transport properties vary over the entire parameter-space of coupling and magnetization strength is explored in this paper. Four regimes are identified based on the relative size of the gyroradius compared to other fundamental length scales: the collision mean free path, Debye length, distance of closest approach, and interparticle spacing. Molecular dynamics simulations of self-diffusion and temperature anisotropy relaxation spanning the parameter space are found to agree well with the predicted boundaries. Finally, comparison with existing theories reveals regimes where they succeed, where they fail, and where no theory has yet been developed.
NASA Technical Reports Server (NTRS)
Funk, Joan G.; Sykes, George F., Jr.
1989-01-01
The effects of simulated space environmental parameters on microdamage induced by the environment in a series of commercially available graphite-fiber-reinforced composite materials were determined. Composites with both thermoset and thermoplastic resin systems were studied. Low-Earth-Orbit (LEO) exposures were simulated by thermal cycling; geosynchronous-orbit (GEO) exposures were simulated by electron irradiation plus thermal cycling. The thermal cycling temperature range was -250 F to either 200 F or 150 F. The upper limits of the thermal cycles were different to ensure that an individual composite material was not cycled above its glass transition temperature. Material response was characterized through assessment of the induced microcracking and its influence on mechanical property changes at both room temperature and -250 F. Microdamage was induced in both thermoset and thermoplastic advanced composite materials exposed to the simulated LEO environment. However, a 350 F cure single-phase toughened epoxy composite was not damaged during exposure to the LEO environment. The simulated GEO environment produced microdamage in all materials tested.
Modeling Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Pitts, Marvin; Sager, John; Loader, Coleen; Drysdale, Alan
1996-01-01
Activities this summer consisted of two projects that involved computer simulation of bioregenerative life support systems for space habitats. Students in the Space Life Science Training Program (SLSTP) used the simulation, Space Station, to learn about relationships between humans, fish, plants, and microorganisms in a closed environment. One student completed a six-week project to modify the simulation by converting the microbes from anaerobic to aerobic and then balancing the simulation's life support system. A detailed computer simulation of a closed lunar station using bioregenerative life support was attempted, but not enough was known about system constraints and constants in plant growth, bioreactor design for space habitats, and food preparation to develop an integrated model with any confidence. Instead of a complete detailed model with broad assumptions concerning the unknown system parameters, a framework for an integrated model was outlined and work was begun on plant and bioreactor simulations. The NASA sponsors and the summer Fellow were satisfied with the progress made during the 10 weeks, and we have planned future cooperative work.
The Mission Planning Lab: A Visualization and Analysis Tool
NASA Technical Reports Server (NTRS)
Daugherty, Sarah C.; Cervantes, Benjamin W.
2009-01-01
Simulation and visualization are powerful decision-making tools that are time-saving and cost-effective. Space missions pose testing and evaluation challenges that can be overcome through modeling, simulation, and visualization of mission parameters. The National Aeronautics and Space Administration's (NASA) Wallops Flight Facility (WFF) capitalizes on the benefits of modeling, simulation, and visualization tools through a project initiative called the Mission Planning Lab (MPL).
Tool Support for Parametric Analysis of Large Software Simulation Systems
NASA Technical Reports Server (NTRS)
Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony
2008-01-01
The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
Gholami, Somayeh; Nedaie, Hassan Ali; Longo, Francesco; Ay, Mohammad Reza; Dini, Sharifeh A.; Meigooni, Ali S.
2017-01-01
Purpose: The clinical efficacy of Grid therapy has been examined by several investigators. In this project, the hole diameter and hole spacing in Grid blocks were examined to determine the optimum parameters that give a therapeutic advantage. Methods: The evaluations were performed using Monte Carlo (MC) simulation and commonly used radiobiological models. The Geant4 MC code was used to simulate the dose distributions for 25 different Grid blocks with different hole diameters and center-to-center spacing. The therapeutic parameters of these blocks, namely, the therapeutic ratio (TR) and geometrical sparing factor (GSF) were calculated using two different radiobiological models, including the linear quadratic and Hug–Kellerer models. In addition, the ratio of the open to blocked area (ROTBA) is also used as a geometrical parameter for each block design. Comparisons of the TR, GSF, and ROTBA for all of the blocks were used to derive the parameters for an optimum Grid block with the maximum TR, minimum GSF, and optimal ROTBA. A sample of the optimum Grid block was fabricated at our institution. Dosimetric characteristics of this Grid block were measured using an ionization chamber in water phantom, Gafchromic film, and thermoluminescent dosimeters in Solid Water™ phantom materials. Results: The results of these investigations indicated that Grid blocks with hole diameters between 1.00 and 1.25 cm and spacing of 1.7 or 1.8 cm have optimal therapeutic parameters (TR > 1.3 and GSF~0.90). The measured dosimetric characteristics of the optimum Grid blocks including dose profiles, percentage depth dose, dose output factor (cGy/MU), and valley-to-peak ratio were in good agreement (±5%) with the simulated data. Conclusion: In summary, using MC-based dosimetry, two radiobiological models, and previously published clinical data, we have introduced a method to design a Grid block with optimum therapeutic response. The simulated data were reproduced by experimental data. 
PMID:29296035
Remote control circuit breaker evaluation testing. [for space shuttles]
NASA Technical Reports Server (NTRS)
Bemko, L. M.
1974-01-01
Engineering evaluation tests were performed on several models/types of remote control circuit breakers marketed in an attempt to gain some insight into their potential suitability for use on the space shuttle vehicle. Tests included the measurement of several electrical and operational performance parameters under laboratory ambient, space simulation, acceleration and vibration environmental conditions.
Extra Solar Planet Science With a Non Redundant Mask
NASA Astrophysics Data System (ADS)
Minto, Stefenie Nicolet; Sivaramakrishnan, Anand; Greenbaum, Alexandra; St. Laurent, Kathryn; Thatte, Deeparshi
2017-01-01
To detect faint planetary companions near a much brighter star at the resolution limit of the James Webb Space Telescope (JWST), the Near-Infrared Imager and Slitless Spectrograph (NIRISS) will use a non-redundant aperture mask (NRM) for high-contrast imaging. I simulated NIRISS data of stars with and without planets and ran these through the code that measures interferometric image properties to determine how sensitive planetary detection is to our knowledge of instrumental parameters, starting with the pixel scale. I measured the position angle, distance, and contrast ratio of the planet (with respect to the star) to characterize the binary pair. To organize these data I am creating programs that will automatically and systematically explore multi-dimensional instrument parameter spaces and binary characteristics. In the future my code will also be applied to explore any other parameters we can simulate.
Exploring theory space with Monte Carlo reweighting
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
Systematic parameter inference in stochastic mesoscopic modeling
NASA Astrophysics Data System (ADS)
Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows accuracy comparable to the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of the simulation samples, especially for systems with high-dimensional parameter spaces. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
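The compressive-sensing step can be sketched with a basic iterative soft-thresholding (ISTA) solver: recover sparse surrogate coefficients from fewer samples than a dense fit would comfortably allow. The 8-term basis, random design matrix, and sparsity pattern below are illustrative, not the gPC basis of the paper:

```python
import random

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def ista(A, b, lam=0.05, eta=0.05, iters=3000):
    """Sparse least squares, min ||Ax - b||^2/2 + lam*||x||_1,
    by iterative soft-thresholding (a basic compressive-sensing solver)."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [bi - yi for bi, yi in zip(b, matvec(A, x))]   # residual b - Ax
        g = [sum(A[i][j] * r[i] for i in range(len(A)))    # gradient A^T r
             for j in range(n)]
        x = [xj + eta * gj for xj, gj in zip(x, g)]        # gradient step
        x = [(abs(xj) - eta * lam) * (1.0 if xj > 0 else -1.0)
             if abs(xj) > eta * lam else 0.0 for xj in x]  # soft threshold
    return x

# Hypothetical setup: a target property depends on only 2 of 8 basis terms;
# recover the sparse coefficients from 12 sampled parameter sets.
rng = random.Random(0)
true_c = [0.0] * 8
true_c[1], true_c[4] = 2.0, -1.5
A = [[rng.uniform(-1.0, 1.0) for _ in range(8)] for _ in range(12)]
b = matvec(A, true_c)
c = ista(A, b)
print([round(ci, 2) for ci in c])
```

The L1 penalty drives the irrelevant coefficients to exactly zero, which is how the dominant gPC terms are identified with few samples.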
HiPEP Ion Optics System Evaluation Using Gridlets
NASA Technical Reports Server (NTRS)
Williams, John D.; Farnell, Cody C.; Laufer, D. Mark; Martinez, Rafael A.
2004-01-01
Experimental measurements are presented for sub-scale ion optics systems comprised of 7 and 19 aperture pairs with geometrical features that are similar to the HiPEP ion optics system. Effects of hole diameter and grid-to-grid spacing are presented as functions of applied voltage and beamlet current. Recommendations are made for the beamlet current range where the ion optics system can be safely operated without experiencing direct impingement of high energy ions on the accelerator grid surface. Measurements are also presented of the accelerator grid voltage where beam plasma electrons backstream through the ion optics system. Results of numerical simulations obtained with the ffx code are compared to both the impingement limit and backstreaming measurements. An emphasis is placed on identifying differences between measurements and simulation predictions to highlight areas where more research is needed. Relatively large effects are observed in simulations when the discharge chamber plasma properties and ion optics geometry are varied. Parameters investigated using simulations include the applied voltages, grid spacing, hole-to-hole spacing, doubles-to-singles ratio, plasma potential, and electron temperature; and estimates are provided for the sensitivity of impingement limits on these parameters.
NASA Astrophysics Data System (ADS)
Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.
2018-01-01
The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
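The space-filling sampling step above can be sketched in a few lines. Below is a minimal Latin hypercube on the unit cube; in practice each column would be rescaled to the actual bounds of the corresponding VIC parameter:

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Latin hypercube on the unit cube: each parameter's range is split into
    n_samples equal bins, and each bin is used exactly once per parameter."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_params):
        bins = list(range(n_samples))
        rng.shuffle(bins)                       # random bin order per parameter
        cols.append([(b + rng.random()) / n_samples for b in bins])
    # transpose columns into per-sample rows
    return [[cols[p][s] for p in range(n_params)] for s in range(n_samples)]

# e.g. 10 space-filling samples of 46 normalized parameters in [0, 1)
X = latin_hypercube(10, 46)
print(len(X), len(X[0]))
```

Unlike plain random sampling, every one-dimensional projection is stratified, which is what makes the design efficient for training a statistical emulator.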
Use of the Marshall Space Flight Center solar simulator in collector performance evaluation
NASA Technical Reports Server (NTRS)
Humphries, W. R.
1978-01-01
Actual measured values from simulator checkout tests are detailed. Problems encountered during initial startup are discussed and solutions described. Techniques utilized to evaluate collector performance from simulator test data are given. Performance data generated in the simulator are compared to equivalent data generated during natural outdoor testing. Finally, a summary of collector performance parameters generated to date as a result of simulator testing is given.
NASA Technical Reports Server (NTRS)
1973-01-01
The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.
Paliwal, Himanshu; Shirts, Michael R
2013-11-12
Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
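The single-state limit of this reweighting idea is the Zwanzig exponential-averaging estimator: samples from one state predict the free energy of another from energy reevaluations alone. The harmonic toy below is an illustration of that limit, not the MBAR estimator of the paper (which combines several sampled states self-consistently):

```python
import math
import random

def zwanzig_dF(dU, beta=1.0):
    """Free-energy difference from samples of one state only:
    dF = -(1/beta) * ln < exp(-beta*dU) >_0 (exponential reweighting)."""
    m = max(-beta * du for du in dU)            # log-sum-exp for stability
    s = sum(math.exp(-beta * du - m) for du in dU)
    return -(m + math.log(s / len(dU))) / beta

# Harmonic toy: samples from U0 = x^2/2 reweighted to U1 = U0 + 0.1*x.
# The analytic answer is dF = -0.1**2/2 = -0.005 for beta = 1.
rng = random.Random(42)
dU = [0.1 * rng.gauss(0.0, 1.0) for _ in range(200000)]
dF = zwanzig_dF(dU)
print(dF)
```

The same machinery, applied to per-sample energy differences between cutoff settings, is what lets one scan a grid of nonbonded parameters without rerunning the simulation at each one.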
Population Synthesis of Radio and γ-ray Millisecond Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Gonthier, Peter L.; Billman, C.; Harding, A. K.
2013-04-01
We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and γ-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of ten radio surveys and by Fermi, predicting the MSP birth rate in the Galaxy. We follow a similar set of assumptions that we have used in previous, more constrained Monte Carlo simulations. The parameters associated with the birth distributions such as those for the accretion rate, magnetic field and period distributions are also free to vary. With the large set of free parameters, we employ Markov Chain Monte Carlo simulations to explore the large and small worlds of the parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and γ-ray pulsar characteristics. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
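The core MCMC move behind such a parameter-space scan is random-walk Metropolis. This one-parameter toy, with a Gaussian stand-in "posterior" for a single luminosity-model exponent, is illustrative only (the actual study samples many parameters against survey detection counts):

```python
import math
import random

def metropolis(logpost, x0, step, n, seed=7):
    """Random-walk Metropolis sampler, the basic move of an MCMC scan."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lq = logpost(cand)
        # accept with probability min(1, exp(lq - lp))
        if lq - lp > math.log(rng.random() + 1e-300):
            x, lp = cand, lq
        chain.append(x)
    return chain

# Toy posterior: Gaussian with mean 1.2 and sd 0.3 (illustrative numbers).
chain = metropolis(lambda x: -0.5 * ((x - 1.2) / 0.3) ** 2, 0.0, 0.5, 50000)
mean = sum(chain[5000:]) / len(chain[5000:])   # discard burn-in
print(round(mean, 2))
```

After burn-in, the chain's histogram approximates the posterior, so marginal distributions of the birth-distribution parameters fall out of the same samples.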
Determination of Thermal State of Charge in Solar Heat Receivers
NASA Technical Reports Server (NTRS)
Glakpe, E. K.; Cannon, J. N.; Hall, C. A., III; Grimmett, I. W.
1996-01-01
The research project at Howard University seeks to develop analytical and numerical capabilities to study heat transfer and fluid flow characteristics, and the prediction of the performance of solar heat receivers for space applications. Specifically, the study seeks to elucidate the effects of internal and external thermal radiation, geometrical and applicable dimensionless parameters on the overall heat transfer in space solar heat receivers. Over the last year, a procedure for the characterization of the state-of-charge (SOC) in solar heat receivers for space applications has been developed. By identifying the various factors that affect the SOC, a dimensional analysis is performed, resulting in a number of dimensionless groups of parameters. Although not accomplished during the first phase of the research, data generated from a thermal simulation program can be used to determine values of the dimensionless parameters and the state-of-charge and thereby obtain a correlation for the SOC. The simulation program selected for the purpose is HOTTube, a thermal numerical computer code based on a transient time-explicit, axisymmetric model of the total solar heat receiver. Simulation results obtained with the computer program are presented for the minimum and maximum insolation orbits. In the absence of any validation of the code with experimental data, results from HOTTube appear reasonable qualitatively in representing the physical situations modeled.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matysiak, W; Yeung, D; Hsi, W
2014-06-01
Purpose: We present a study of dosimetric consequences on doses in water in modeling in-air proton fluence independently along principal axes for rotated elliptical spots. Methods: Phase-space parameters for modeling in-air fluence are the position sigma for the spatial distribution, the angle sigma for the angular distribution, and the correlation between position and angle distributions. Proton spots of the McLaren proton therapy system were measured at five locations near the isocenter for the energies of 180 MeV and 250 MeV. An elongated elliptical spot rotated with respect to the principal axes was observed for the 180 MeV, while a circular-like spot was observed for the 250 MeV. In the first approach, the phase-space parameters were derived in the principal axes without rotation. In the second approach, the phase-space parameters were derived in the reference frame with axes rotated to coincide with the major axes of the elliptical spot. Monte-Carlo simulations with derived phase-space parameters using both approaches to tally doses in water were performed and analyzed. Results: For the rotated elliptical 180 MeV spots, the position sigmas were 3.6 mm and 3.2 mm in the principal axes, but were 4.3 mm and 2.0 mm when the reference frame was rotated. Measured spots fitted poorly the uncorrelated 2D Gaussian, but the quality of fit was significantly improved after the reference frame was rotated. As a result, phase-space parameters in the rotated frame were more appropriate for modeling in-air proton fluence of 180 MeV protons. Considerable differences were observed in Monte Carlo simulated dose distributions in water with phase-space parameters obtained with the two approaches. Conclusion: For rotated elliptical proton spots, phase-space parameters obtained in the rotated reference frame are better for modeling in-air proton fluence, and can be introduced into treatment planning systems.
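The rotated-frame spot parameters can be sketched without an explicit 2-D Gaussian fit: the eigen-decomposition of the 2x2 sample covariance yields the sigmas along the principal axes and the tilt angle. The 4.3 mm / 2.0 mm values echo the abstract, but the 30-degree tilt and the synthetic sample data are made up for illustration:

```python
import math
import random

def principal_sigmas(points):
    """Sigmas along the rotated (principal) axes of a 2-D spot, from the
    eigenvalues of the 2x2 sample covariance; theta is the major-axis tilt."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc   # eigenvalues = variances
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return math.sqrt(l1), math.sqrt(l2), theta

# Synthetic elliptical spot with sigmas 4.3 and 2.0 (as quoted for the
# 180 MeV beam), tilted 30 degrees from the principal axes.
rng = random.Random(3)
c, s = math.cos(math.radians(30.0)), math.sin(math.radians(30.0))
pts = [(c * u - s * v, s * u + c * v)
       for u, v in ((rng.gauss(0, 4.3), rng.gauss(0, 2.0))
                    for _ in range(100000))]
s1, s2, th = principal_sigmas(pts)
print(round(s1, 1), round(s2, 1), round(math.degrees(th), 1))
```

Measuring the same spot in the unrotated frame mixes the two variances, which is exactly why the fixed-axis fit in the abstract reported 3.6 mm and 3.2 mm instead of 4.3 mm and 2.0 mm.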
Extending the modeling of the anisotropic galaxy power spectrum to k = 0.4 hMpc-1
NASA Astrophysics Data System (ADS)
Hand, Nick; Seljak, Uroš; Beutler, Florian; Vlah, Zvonimir
2017-10-01
We present a model for the redshift-space power spectrum of galaxies and demonstrate its accuracy in describing the monopole, quadrupole, and hexadecapole of the galaxy density field down to scales of k = 0.4 hMpc-1. The model describes the clustering of galaxies in the context of a halo model and the clustering of the underlying halos in redshift space using a combination of Eulerian perturbation theory and N-body simulations. The modeling of redshift-space distortions is done using the so-called distribution function approach. The final model has 13 free parameters, and each parameter is physically motivated rather than a nuisance parameter, which allows the use of well-motivated priors. We account for the Finger-of-God effect from centrals and both isolated and non-isolated satellites rather than using a single velocity dispersion to describe the combined effect. We test and validate the accuracy of the model on several sets of high-fidelity N-body simulations, as well as realistic mock catalogs designed to simulate the BOSS DR12 CMASS data set. The suite of simulations covers a range of cosmologies and galaxy bias models, providing a rigorous test of the level of theoretical systematics present in the model. The level of bias in the recovered values of f σ8 is found to be small. When including scales to k = 0.4 hMpc-1, we find 15-30% gains in the statistical precision of f σ8 relative to k = 0.2 hMpc-1 and a roughly 10-15% improvement for the perpendicular Alcock-Paczynski parameter α⊥. Using the BOSS DR12 CMASS mocks as a benchmark for comparison, we estimate an uncertainty on f σ8 that is ~10-20% larger than other similar Fourier-space RSD models in the literature that use k <= 0.2 hMpc-1, suggesting that these models likely have a too-limited parametrization.
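For reference, the monopole, quadrupole, and hexadecapole mentioned above are the standard Legendre moments of the anisotropic redshift-space power spectrum P(k, μ), where μ is the cosine of the angle to the line of sight:

```latex
P_\ell(k) \;=\; \frac{2\ell + 1}{2} \int_{-1}^{1} \mathrm{d}\mu \, P(k,\mu)\, \mathcal{L}_\ell(\mu),
\qquad \ell = 0,\, 2,\, 4 .
```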
Simulation of optimum parameters for GaN MSM UV photodetector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alhelfi, Mohanad A., E-mail: mhad12344@gmail.com; Ahmed, Naser M., E-mail: nas-tiji@yahoo.com; Hashim, M. R., E-mail: roslan@usm.my
2016-07-06
In this study the optimum parameters of a GaN M-S-M photodetector are discussed. The performance of the photodetector depends on many parameters, the most important of which are the quality of the GaN film and the geometry of the interdigitated electrodes. In this simulation work, using MATLAB software and accounting for reflection and absorption at the metal contacts, a detailed study involving various electrode spacings (S) and widths (W) yields conclusive results for device design. The optimum interelectrode design for the interdigitated MSM-PD has been specified and evaluated by its effect on quantum efficiency and responsivity.
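The basic geometric trade-off behind the electrode-spacing study can be sketched with two textbook relations: for fully opaque metal fingers, the fraction of light reaching the semiconductor scales as S/(S+W), and responsivity follows R = η q λ / (h c). This is a simplified assumption, not the paper's MATLAB model, which additionally treats reflection and absorption at the contacts; the numerical values below (internal QE of 0.5, 360 nm illumination) are hypothetical.

```python
Q = 1.602e-19   # elementary charge [C]
H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]

def responsivity(internal_qe, wavelength_m, spacing_m, width_m):
    """Responsivity [A/W] of an interdigitated MSM photodetector under
    the simplifying assumption of fully opaque electrode fingers."""
    fill = spacing_m / (spacing_m + width_m)   # fraction of light reaching GaN
    eta = internal_qe * fill                   # external quantum efficiency
    return eta * Q * wavelength_m / (H * C)

# Wider gaps admit more light, but also lengthen the carrier transit
# path, slowing the device -- hence the optimization over S and W.
r_narrow = responsivity(0.5, 360e-9, 2e-6, 2e-6)   # S = W = 2 um
r_wide   = responsivity(0.5, 360e-9, 8e-6, 2e-6)   # S = 8 um, W = 2 um
```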
Parameter estimating state reconstruction
NASA Technical Reports Server (NTRS)
George, E. B.
1976-01-01
Parameter estimation is considered for systems whose entire state cannot be measured. Linear observers are designed to recover the unmeasured states to a sufficient accuracy to permit the estimation process. There are three distinct dynamics that must be accommodated in the system design: the dynamics of the plant, the dynamics of the observer, and the system updating of the parameter estimation. The latter two are designed to minimize interaction of the involved systems. These techniques are extended to weakly nonlinear systems. The application to a simulation of a space shuttle POGO system test is of particular interest. A nonlinear simulation of the system is developed, observers designed, and the parameters estimated.
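The core mechanism above, reconstructing unmeasured states with a linear observer before estimating parameters, can be sketched with a minimal discrete-time Luenberger observer. The plant matrices, observer gain, and initial conditions below are illustrative, not taken from the paper's POGO model; the point is only that the estimation error, governed by A − LC, decays even though only one state is measured.

```python
import numpy as np

A = np.array([[1.0, 0.1], [-0.5, 0.9]])   # illustrative discrete-time plant
C = np.array([[1.0, 0.0]])                # only the first state is measured
L = np.array([[0.8], [0.5]])              # observer gain; A - L C is stable

x = np.array([[1.0], [-1.0]])             # true state
xhat = np.zeros((2, 1))                   # observer starts with no knowledge

for _ in range(100):
    y = C @ x                             # measurement of the plant
    xhat = A @ xhat + L @ (y - C @ xhat)  # observer update: copy plant, correct by output error
    x = A @ x                             # plant update (unforced)

err = float(np.linalg.norm(x - xhat))     # estimation error after 100 steps
```

Once the observer has converged, the estimated full state can be handed to the parameter-estimation loop in place of the (unavailable) true state.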
NASA Technical Reports Server (NTRS)
Howell, L. W.
2001-01-01
A simple power law model consisting of a single spectral index alpha_1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy E_k to a steeper spectral index alpha_2 > alpha_1 above E_k. The maximum likelihood procedure is developed for estimating these three spectral parameters of the broken power law energy spectrum from simulated detector responses. These estimates and their surrounding statistical uncertainty are being used to derive the requirements in energy resolution, calorimeter size, and energy response of a proposed sampling calorimeter for the Advanced Cosmic-ray Composition Experiment for the Space Station (ACCESS). This study thereby permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
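The simplest instance of the maximum-likelihood idea above has a closed form: for a pure power law dN/dE ∝ E^(−α) above E_min, the ML estimate of the index is α̂ = 1 + n / Σ ln(E_i / E_min). The sketch below demonstrates that estimator on synthetic events; it is not the paper's three-parameter procedure, which maximizes the likelihood numerically over (alpha_1, alpha_2, E_k) and folds in the detector response.

```python
import math
import random

def sample_power_law(n, alpha, e_min, rng):
    """Draw energies from dN/dE ~ E^(-alpha) above e_min via the
    inverse CDF: E = e_min * u**(-1/(alpha-1))."""
    return [e_min * rng.random() ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def ml_index(energies, e_min):
    """Closed-form maximum-likelihood spectral index for a pure power law."""
    return 1.0 + len(energies) / sum(math.log(e / e_min) for e in energies)

rng = random.Random(0)
events = sample_power_law(100_000, 2.7, 1.0, rng)
alpha_hat = ml_index(events, 1.0)   # should land close to the true index 2.7
```

The statistical scatter of α̂ around the true index, here roughly (α − 1)/√n, is the kind of uncertainty that drives the calorimeter sizing trade studies described above.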
Nowke, Christian; Diaz-Pier, Sandra; Weyers, Benjamin; Hentschel, Bernd; Morrison, Abigail; Kuhlen, Torsten W.; Peyser, Alexander
2018-01-01
Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable interactive exploration of parameter spaces, promote a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is computed. PMID:29937723
NASA Astrophysics Data System (ADS)
Benninghoff, Heike; Rems, Florian; Risse, Eicke; Brunner, Bernhard; Stelzer, Martin; Krenn, Rainer; Reiner, Matthias; Stangl, Christian; Gnat, Marcin
2018-01-01
In the framework of a project called on-orbit servicing end-to-end simulation, the final approach and capture of a tumbling client satellite in an on-orbit servicing mission are simulated. The necessary components are developed, and the entire end-to-end chain is tested and verified. This involves both on-board and on-ground systems. The space segment comprises a passive client satellite and an active service satellite with its rendezvous and berthing payload. The space segment is simulated using a software satellite simulator and two robotic, hardware-in-the-loop test beds, the European Proximity Operations Simulator (EPOS) 2.0 and the OOS-Sim. The ground segment is established as for a real servicing mission, such that realistic operations can be performed from the different consoles in the control room. During the simulation of the telerobotic operation, it is important to provide a realistic communication environment, with parameters such as delay and jitter as they occur in the real world.
QUANTIFYING OBSERVATIONAL PROJECTION EFFECTS USING MOLECULAR CLOUD SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaumont, Christopher N.; Offner, Stella S.R.; Shetty, Rahul
2013-11-10
The physical properties of molecular clouds are often measured using spectral-line observations, which provide the only probes of the clouds' velocity structure. It is hard, though, to assess whether and to what extent intensity features in position-position-velocity (PPV) space correspond to 'real' density structures in position-position-position (PPP) space. In this paper, we create synthetic molecular cloud spectral-line maps of simulated molecular clouds, and present a new technique for measuring the reality of individual PPV structures. Using a dendrogram algorithm, we identify hierarchical structures in both PPP and PPV space. Our procedure projects density structures identified in PPP space into corresponding intensity structures in PPV space and then measures the geometric overlap of the projected structures with structures identified from the synthetic observation. The fractional overlap between a PPP and PPV structure quantifies how well the synthetic observation recovers information about the three-dimensional structure. Applying this machinery to a set of synthetic observations of CO isotopes, we measure how well spectral-line measurements recover mass, size, velocity dispersion, and virial parameter for a simulated star-forming region. By disabling various steps of our analysis, we investigate how much opacity, chemistry, and gravity affect measurements of physical properties extracted from PPV cubes. For the simulations used here, which offer a decent, but not perfect, match to the properties of a star-forming region like Perseus, our results suggest that superposition induces a ∼40% uncertainty in masses, sizes, and velocity dispersions derived from 13CO (J = 1-0). As would be expected, superposition and confusion are worst in regions where the filling factor of emitting material is large.
The virial parameter is most affected by superposition, such that estimates of the virial parameter derived from PPV and PPP information typically disagree by a factor of ∼2. This uncertainty makes it particularly difficult to judge whether gravitational or kinetic energy dominates a given region, since the majority of virial parameter measurements fall within a factor of two of the equipartition level α ∼ 2.
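The fractional-overlap measurement described above can be sketched on boolean voxel masks. The exact statistic used in the paper is not specified here, so the intersection-over-union form below is an assumption; the point is only that the overlap of a projected PPP structure with a PPV-identified structure is a single number between 0 (no correspondence) and 1 (perfect recovery).

```python
import numpy as np

def fractional_overlap(projected, observed):
    """Intersection-over-union of two voxel masks in the shared PPV cube:
    'projected' is a PPP density structure projected into PPV space,
    'observed' is a structure identified in the synthetic observation."""
    projected = projected.astype(bool)
    observed = observed.astype(bool)
    inter = np.logical_and(projected, observed).sum()
    union = np.logical_or(projected, observed).sum()
    return inter / union if union else 0.0

a = np.zeros((4, 4, 4)); a[:2] = 1     # toy projected PPP structure (32 voxels)
b = np.zeros((4, 4, 4)); b[1:3] = 1    # toy structure found in the PPV cube (32 voxels)
f = fractional_overlap(a, b)           # 16 shared voxels / 48 in the union
```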
Neutrino oscillation parameter sampling with MonteCUBES
NASA Astrophysics Data System (ADS)
Blennow, Mattias; Fernandez-Martinez, Enrique
2010-01-01
We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling. Program summary: Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator). Catalogue identifier: AEFJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public Licence. No. of lines in distributed program, including test data, etc.: 69 634. No. of bytes in distributed program, including test data, etc.: 3 980 776. Distribution format: tar.gz. Programming language: C. Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed. Operating system: 32 bit and 64 bit Linux. RAM: Typically a few MBs. Classification: 11.1. External routines: GLoBES [1,2] and routines/libraries used by GLoBES. Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439. Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments.
In general, these new physics imply high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those used in GLoBES [1,2]. Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space and has a complexity which does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software makes sure that the experimental definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES. Additional comments: A Matlab GUI for interpretation of results is included in the distribution. Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours. References:P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333. P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187. S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
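The component-wise Metropolis sampling that MonteCUBES performs can be illustrated generically. The target density below is a toy two-dimensional Gaussian, not a GLoBES oscillation likelihood, and the step size and chain length are arbitrary; the structure (propose a change to one parameter at a time, accept with the Metropolis rule) is the part that carries over, and its cost does not grow exponentially with dimension.

```python
import math
import random

def log_post(theta):
    """Toy log-posterior: independent Gaussians on x and y (stand-in for
    a real oscillation likelihood)."""
    x, y = theta
    return -0.5 * (x * x + (y - 1.0) ** 2 / 0.25)

def metropolis(log_p, start, step, n, rng):
    """Component-wise Metropolis sampler: one sweep updates each
    parameter in turn with a Gaussian proposal."""
    chain, cur, lp = [], list(start), log_p(start)
    for _ in range(n):
        for i in range(len(cur)):
            prop = cur.copy()
            prop[i] += rng.gauss(0.0, step)
            lp_new = log_p(prop)
            if math.log(rng.random()) < lp_new - lp:   # Metropolis accept
                cur, lp = prop, lp_new
        chain.append(cur.copy())
    return chain

rng = random.Random(1)
chain = metropolis(log_post, [5.0, 5.0], 0.5, 20_000, rng)
burn = chain[5_000:]                               # discard burn-in
mean_y = sum(t[1] for t in burn) / len(burn)       # posterior mean of y
```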
NASA Technical Reports Server (NTRS)
Montgomery, Raymond C.; Ghosh, Dave; Kenny, Sean
1991-01-01
This paper presents results of analytic and simulation studies to determine the effectiveness of torque-wheel actuators in suppressing the vibrations of two-link telerobotic arms with attached payloads. The simulations use a planar generic model of a two-link arm with a torque wheel at the free end. Parameters of the arm model are selected to be representative of a large space-based robotic arm of the same class as the Space Shuttle Remote Manipulator, whereas parameters of the torque wheel are selected to be similar to those of the Mini-Mast facility at the Langley Research Center. Results show that this class of torque-wheel can produce an oscillation of 2.5 cm peak-to-peak in the end point of the arm and that the wheel produces significantly less overshoot when the arm is issued an abrupt stop command from the telerobotic input station.
Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach
NASA Technical Reports Server (NTRS)
Nelson, Bron; Yang Yang, Fan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Henze, Chris; Karacalioglu, Arif Goktug; O'Toole, Conor; Swenson, Jason; Stupl, Jan
2015-01-01
This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris-remediation, including the LightForce space debris collision avoidance concept that diverts objects using photon pressure [9]. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions and orbital and physical parameters of the objects involved in close encounters (conjunctions). Furthermore, maneuvers take place on timescales much smaller than days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in Low Earth Orbit (LEO) and propagates all objects with high precision and variable time-steps as small as one second. It allows the assessment of the (potential) impact of physical or orbital changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. 
To populate the initial simulation, we use the entire space-track object catalog in LEO. We then use a high precision propagator to propagate all objects over the entire simulation duration. If collisions are detected, the appropriate number of debris objects are created and inserted into the simulation framework. Depending on the scenario, further objects, e.g. due to new launches, can be added. At the end of the simulation, the total number of objects above a cut-off size and the number of detected collisions provide benchmark parameters for the comparison between scenarios. The simulation approach is computationally intensive as it involves tens of thousands of objects; hence we use a highly parallel approach employing up to a thousand cores on the NASA Pleiades supercomputer for a single run. This paper describes our simulation approach, the status of its implementation, the approach to developing scenarios and examples of first test runs.
NASA Astrophysics Data System (ADS)
Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.
2011-12-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and to the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated positive impacts on the other output variables, such as temperature and wind. By using the optimal parameters obtained at the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North American monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
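The annealing idea behind MVFSA can be illustrated on a synthetic one-dimensional error surface: always accept a candidate that lowers the error, accept worse candidates with a probability that shrinks as the temperature is lowered, and run several independent chains (the "multiple" in MVFSA), keeping the best result. Everything below (the error surface, cooling schedule, step size, number of chains) is a toy assumption; the real application scores WRF precipitation against observations over the five KF-scheme parameters.

```python
import math
import random

def error(p):
    """Synthetic multimodal skill score: global minimum near p = 1.57."""
    return math.sin(3.0 * p) + 0.1 * (p - 2.0) ** 2

def anneal(rng, t0=2.0, cooling=0.995, steps=4000):
    """Single simulated-annealing chain over a 1-D parameter."""
    cur = rng.uniform(-5.0, 5.0)
    e_cur, t = error(cur), t0
    best, e_best = cur, e_cur
    for _ in range(steps):
        cand = cur + rng.gauss(0.0, 0.5)
        e_new = error(cand)
        # accept improvements always; accept worse moves with prob e^(-dE/t)
        if e_new < e_cur or rng.random() < math.exp((e_cur - e_new) / t):
            cur, e_cur = cand, e_new
            if e_cur < e_best:
                best, e_best = cur, e_cur
        t *= cooling
    return best, e_best

# Multiple independent chains; keep the overall best parameter value.
results = [anneal(random.Random(seed)) for seed in range(8)]
best_p, best_e = min(results, key=lambda r: r[1])
```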
NASA Astrophysics Data System (ADS)
Qian, Y.; Yang, B.; Lin, G.; Leung, R.; Zhang, Y.
2012-04-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and to the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated positive impacts on the other output variables, such as temperature and wind. By using the optimal parameters obtained at the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North American monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
NASA Astrophysics Data System (ADS)
Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.
2012-03-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and to the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated positive impacts on the other output variables, such as temperature and wind. By using the optimal parameters obtained at the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North American monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
Radiation environment study of near space in China area
NASA Astrophysics Data System (ADS)
Fan, Dongdong; Chen, Xingfeng; Li, Zhengqiang; Mei, Xiaodong
2015-10-01
Near-space aerospace activity has become a research hotspot for major aviation countries worldwide. Solar radiation studies are a prerequisite for such activity, but the lack of observations in the near-space layer remains a barrier. Based on reanalysis data, key input parameters are determined, and separate simulation experiments are carried out to model the downward solar-radiation and ultraviolet-radiation transfer processes of near space over China. Results show that the atmospheric influence on these radiation transfer processes has regional characteristics. Because key factors such as ozone are affected by atmospheric processes in their density and in their horizontal and vertical distributions, stratospheric meteorological data need to be considered, and near space over China is divided according to its activity features. Simulated results show that the solar and ultraviolet radiation varies with time, latitude, and ozone density, and exhibits complicated variation characteristics.
NASA Technical Reports Server (NTRS)
Srivastava, Priyaka; Kraus, Jeff; Murawski, Robert; Golden, Bertsel, Jr.
2015-01-01
NASA's Space Communications and Navigation (SCaN) program manages three active networks: the Near Earth Network, the Space Network, and the Deep Space Network. These networks simultaneously support NASA missions and provide communications services to customers worldwide. To efficiently manage these resources and their capabilities, a team of student interns at the NASA Glenn Research Center is developing a distributed system to model the SCaN networks. Once complete, the system shall provide a platform that enables users to perform capacity modeling of current and prospective missions with finer-grained control of information between several simulation and modeling tools. This will enable the SCaN program to access a holistic view of its networks and simulate the effects of modifications in order to provide NASA with decisional information. The development of this capacity modeling system is managed by NASA's Strategic Center for Education, Networking, Integration, and Communication (SCENIC). Three primary third-party software tools offer their unique abilities in different stages of the simulation process. MagicDraw provides UML/SysML modeling, AGI's Systems Tool Kit simulates the physical transmission parameters and de-conflicts scheduled communication, and Riverbed Modeler (formerly OPNET) simulates communication protocols and packet-based networking. SCENIC developers are building custom software extensions to integrate these components in an end-to-end space communications modeling platform. A central control module acts as the hub for report-based messaging between client wrappers. Backend databases provide information related to mission parameters and ground station configurations, while the end user defines scenario-specific attributes for the model. The eight SCENIC interns are working under the direction of their mentors to complete an initial version of this capacity modeling system during the summer of 2015.
The intern team is composed of four students in Computer Science, two in Computer Engineering, one in Electrical Engineering, and one studying Space Systems Engineering.
Airborne Precision Spacing for Dependent Parallel Operations Interface Study
NASA Technical Reports Server (NTRS)
Volk, Paul M.; Takallu, M. A.; Hoffler, Keith D.; Weiser, Jarold; Turner, Dexter
2012-01-01
This paper describes a usability study of proposed cockpit interfaces to support Airborne Precision Spacing (APS) operations for aircraft performing dependent parallel approaches (DPA). NASA has proposed an airborne system called Pair Dependent Speed (PDS) which uses their Airborne Spacing for Terminal Arrival Routes (ASTAR) algorithm to manage spacing intervals. Interface elements were designed to facilitate the input of APS-DPA spacing parameters to ASTAR, and to convey PDS system information to the crew deemed necessary and/or helpful to conduct the operation, including: target speed, guidance mode, target aircraft depiction, and spacing trend indication. In the study, subject pilots observed recorded simulations using the proposed interface elements in which the ownship managed assigned spacing intervals from two other arriving aircraft. Simulations were recorded using the Aircraft Simulation for Traffic Operations Research (ASTOR) platform, a medium-fidelity simulator based on a modern Boeing commercial glass cockpit. Various combinations of the interface elements were presented to subject pilots, and feedback was collected via structured questionnaires. The results of subject pilot evaluations show that the proposed design elements were acceptable, and that preferable combinations exist within this set of elements. The results also point to potential improvements to be considered for implementation in future experiments.
Estimability of geodetic parameters from space VLBI observables
NASA Technical Reports Server (NTRS)
Adam, Jozsef
1990-01-01
The feasibility of space very long baseline interferometry (VLBI) observables for geodesy and geodynamics is investigated. A brief review of space VLBI systems from the point of view of potential geodetic application is given. A selected notational convention is used to jointly treat the VLBI observables of different types of baselines within a combined ground/space VLBI network. The basic equations of the space VLBI observables appropriate for covariance analysis are derived and included. The corresponding equations for the ground-to-ground baseline VLBI observables are also given for comparison. The simplified expressions of the mathematical models for both space VLBI observables (time delay and delay rate) include the ground station coordinates, the satellite orbital elements, the earth rotation parameters, the radio source coordinates, and clock parameters. The observation equations with these parameters were examined in order to determine which of them are separable or nonseparable. Singularity problems arising from coordinate system definition and critical configuration are studied. Linear dependencies between partials are analytically derived. The mathematical models for ground-space baseline VLBI observables were tested with simulation data in the frame of numerical experiments. Singularity due to datum defect is confirmed.
Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models
NASA Astrophysics Data System (ADS)
Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas
2017-02-01
A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector in this output space. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge to the observed output vector and the approximate posterior distributions, which are conditional on the data-approximating region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level.
We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally, locally and un-identifiable model classes, and then to model updating of a two degree-of-freedom nonlinear structure with Duffing nonlinearities in its interstory force-deflection relationship.
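The level-wise mechanics of ABC-SubSim can be illustrated with a toy sketch (the forward model, prior, and tuning values below are illustrative stand-ins, not the paper's hierarchical state-space setup): a nested sequence of shrinking discrepancy tolerances, each re-populated by component-wise Metropolis moves constrained to the current data-approximating region.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta):
    # Hypothetical forward model: a noisy-free linear map (stand-in for a dynamical model).
    return theta[0] + theta[1] * np.linspace(0, 1, 20)

y_obs = model(np.array([1.0, 2.0]))  # "observed" output from true parameters (1, 2)

def abc_subsim(n=1000, levels=5, p0=0.2):
    """Minimal ABC-SubSim: nested data-approximating regions, component-wise Metropolis."""
    theta = rng.uniform(-5, 5, size=(n, 2))                 # samples from a uniform prior
    rho = np.array([np.linalg.norm(model(t) - y_obs) for t in theta])
    keep, moves = round(p0 * n), round(1 / p0)
    for _ in range(levels):
        # Keep the p0 fraction with smallest discrepancy: defines the next tolerance level.
        idx = np.argsort(rho)[:keep]
        eps = rho[idx[-1]]
        seeds, seed_rho = theta[idx], rho[idx]
        # Re-populate by component-wise Metropolis moves that stay inside the region.
        new_theta, new_rho = [], []
        for s, r in zip(seeds, seed_rho):
            cur, cur_rho = s.copy(), r
            for _ in range(moves):
                for j in range(2):
                    prop = cur.copy()
                    prop[j] += rng.normal(0, 0.5)
                    pr = np.linalg.norm(model(prop) - y_obs)
                    if pr <= eps and abs(prop[j]) <= 5:      # region + prior-support check
                        cur, cur_rho = prop, pr
                new_theta.append(cur.copy())
                new_rho.append(cur_rho)
        theta, rho = np.array(new_theta), np.array(new_rho)
    return theta

post = abc_subsim()  # approximate posterior samples concentrating near (1, 2)
```

Each level tightens the tolerance to the empirical p0-quantile of the current discrepancies, which is what makes the sequence of regions adaptive rather than fixed in advance.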
Demonstration of a 3D vision algorithm for space applications
NASA Technical Reports Server (NTRS)
Defigueiredo, Rui J. P. (Editor)
1987-01-01
This paper reports an extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using movement invariants as features of object representation. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.
Fractional Transport in Strongly Turbulent Plasmas.
Isliker, Heinz; Vlahos, Loukas; Constantinescu, Dana
2017-07-28
We analyze statistically the energization of particles in a large scale environment of strong turbulence that is fragmented into a large number of distributed current filaments. The turbulent environment is generated through strongly perturbed, 3D, resistive magnetohydrodynamics simulations, and it emerges naturally from the nonlinear evolution, without a specific reconnection geometry being set up. Based on test-particle simulations, we estimate the transport coefficients in energy space for use in the classical Fokker-Planck (FP) equation, and we show that the latter fails to reproduce the simulation results. The reason is that transport in energy space is highly anomalous (strange), the particles perform Lévy flights, and the energy distributions show extended power-law tails. We then motivate the use of a fractional transport equation (FTE) and derive its specific form, determine its parameters and the order of the fractional derivatives from the simulation data, and show that the FTE reproduces the high-energy part of the simulation data very well. The procedure for determining the FTE parameters also makes clear that it is the analysis of the simulation data that allows us to decide whether a classical FP equation or a FTE is appropriate.
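The anomalous (Lévy-flight) scaling the authors detect can be reproduced in a minimal random-walk experiment (illustrative parameters, not the MHD test-particle data): heavy-tailed step distributions give a spread exponent well above the classical diffusive value of 1/2.

```python
import numpy as np

rng = np.random.default_rng(1)

def spread_exponent(alpha, n_particles=2000, n_steps=400):
    """Estimate the growth exponent H of median |x| ~ t^H for random walks with
    symmetric power-law step tails, P(|step| > s) ~ s^(-alpha)."""
    # Pareto-tailed magnitudes with random sign; alpha < 2 gives Levy flights.
    steps = rng.pareto(alpha, size=(n_particles, n_steps))
    steps *= rng.choice([-1.0, 1.0], size=steps.shape)
    x = np.cumsum(steps, axis=1)
    t = np.arange(1, n_steps + 1)
    width = np.median(np.abs(x), axis=0)
    # Fit width ~ t^H in log-log space, skipping the earliest transient steps.
    return np.polyfit(np.log(t[10:]), np.log(width[10:]), 1)[0]

H_levy = spread_exponent(1.5)    # infinite-variance steps: superdiffusive, H ~ 1/1.5
H_gauss = spread_exponent(10.0)  # effectively finite variance: classical H ~ 0.5
```

The heavy-tailed walk spreads like t^(1/alpha), which is the kind of behavior a classical Fokker-Planck description with finite diffusion coefficients cannot capture.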
Sun, Xiaodian; Jin, Li; Xiong, Momiao
2008-01-01
It is system dynamics that determines the function of cells, tissues and organisms. Developing mathematical models and estimating their parameters are essential for studying the dynamic behaviors of biological systems, including metabolic networks, genetic regulatory networks and signal transduction pathways, under perturbation by external stimuli. In general, biological dynamic systems are partially observed. Therefore, a natural way to model dynamic biological systems is to employ nonlinear state-space equations. Although statistical methods for parameter estimation of linear models in biological dynamic systems have been developed intensively in recent years, the estimation of both states and parameters of nonlinear dynamic systems remains a challenging task. In this report, we apply the extended Kalman filter (EKF) to the estimation of both states and parameters of nonlinear state-space models. To evaluate the performance of the EKF for parameter estimation, we apply it to a simulation dataset and two real datasets: the JAK-STAT and Ras/Raf/MEK/ERK signal transduction pathway datasets. The preliminary results show that the EKF can accurately estimate the parameters and predict states in nonlinear state-space equations for modeling dynamic biochemical networks. PMID:19018286
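A minimal joint state-and-parameter EKF of the kind described can be sketched on a hypothetical logistic-growth system (not one of the paper's pathways): the unknown rate parameter is appended to the state vector and estimated alongside it.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps = 0.1, 300
r_true, rmeas = 1.5, 1e-4            # true rate and measurement-noise variance

# Simulate the nonlinear system x' = r x (1 - x) with noisy observations of x.
x, ys = 0.1, []
for _ in range(n_steps):
    x = x + dt * r_true * x * (1 - x)
    ys.append(x + rng.normal(0, np.sqrt(rmeas)))

# EKF on the augmented state z = [x, r]: the parameter r is estimated jointly.
z = np.array([0.1, 0.5])             # initial guesses for state and parameter
P = np.diag([0.1, 1.0])
Q = np.diag([1e-6, 1e-6])            # small process noise keeps the filter adaptive
H = np.array([[1.0, 0.0]])           # we observe x only
for y in ys:
    xk, rk = z
    # Predict through the nonlinear dynamics and its Jacobian F = df/dz.
    z = np.array([xk + dt * rk * xk * (1 - xk), rk])
    F = np.array([[1 + dt * rk * (1 - 2 * xk), dt * xk * (1 - xk)],
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    # Update with the scalar measurement.
    S = H @ P @ H.T + rmeas
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

r_est = z[1]   # should approach r_true once the growth transient is observed
```

Note that the parameter is only identifiable while the transient carries information (x(1 - x) far from zero); once the state saturates, the estimate is simply retained.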
Systematic parameter inference in stochastic mesoscopic modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Yang, Xiu; Li, Zhen
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy-conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are “sparse”. The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desired values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived at the microscopic level in a straightforward way.
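The compressive-sensing step can be sketched generically (toy Legendre basis and sparsity pattern, not the DPD response surfaces): recover sparse gPC coefficients from fewer samples than basis terms with a basic l1 (iterative soft-thresholding) solver.

```python
from itertools import product

import numpy as np

rng = np.random.default_rng(9)

def legendre(n, xv):
    # Legendre polynomials via the standard three-term recurrence.
    if n == 0:
        return np.ones_like(xv)
    if n == 1:
        return xv
    return ((2 * n - 1) * xv * legendre(n - 1, xv) - (n - 1) * legendre(n - 2, xv)) / n

# Total-degree-3 gPC basis in 4 parameters: 35 tensor-product terms.
multi = [m for m in product(range(4), repeat=4) if sum(m) <= 3]

def design(X):
    return np.column_stack(
        [np.prod([legendre(mi, X[:, j]) for j, mi in enumerate(m)], axis=0) for m in multi])

# A "response surface" that is sparse in the basis by construction.
c_true = np.zeros(len(multi))
c_true[[0, 2, 7, 20]] = [1.0, 0.8, -0.5, 0.3]
X = rng.uniform(-1, 1, size=(30, 4))     # fewer samples (30) than basis terms (35)
A = design(X)
y = A @ c_true

# Compressive sensing via iterative soft-thresholding (ISTA), a basic l1 solver.
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
c, lam = np.zeros(len(multi)), 0.01
for _ in range(20000):
    c = c + A.T @ (y - A @ c) / L        # gradient step on the least-squares term
    c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0)   # soft threshold
```

With an underdetermined design matrix, ordinary least squares has no unique solution; the l1 penalty singles out the sparse coefficient vector, which is the essence of the compressive-sensing shortcut described above.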
NASA Technical Reports Server (NTRS)
Kibler, J. F.; Suttles, J. T.
1977-01-01
One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield parameter estimates which best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content of the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade-offs available between estimation accuracy and data acquisition strategy.
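A toy version of this estimation loop (hypothetical plume geometry, grids, and noise level, not the paper's setup): simulate noisy LIDAR-style concentration data from a Gaussian plume and recover the parameters that best fit it.

```python
import numpy as np

rng = np.random.default_rng(3)

def plume(Y, Z, q_over_u, sy, sz, h=20.0):
    # Hypothetical Gaussian plume slice at fixed downwind distance:
    # crosswind coordinate Y, height Z, effective release height h.
    return (q_over_u / (2 * np.pi * sy * sz)
            * np.exp(-0.5 * (Y / sy) ** 2)
            * np.exp(-0.5 * ((Z - h) / sz) ** 2))

# Simulated "LIDAR" samples on a crosswind/vertical grid, with 5% measurement noise.
Y, Z = np.meshgrid(np.linspace(-100, 100, 21), np.linspace(0, 60, 13))
true = (50.0, 30.0, 12.0)                       # (Q/u, sigma_y, sigma_z)
data = plume(Y, Z, *true) * (1 + 0.05 * rng.normal(size=Y.shape))

# Crude grid-search estimator standing in for the paper's numerical estimation step.
best, best_err = None, np.inf
for q in np.linspace(20, 80, 25):
    for sy in np.linspace(10, 60, 26):
        for sz in np.linspace(4, 30, 27):
            err = np.sum((plume(Y, Z, q, sy, sz) - data) ** 2)
            if err < best_err:
                best, best_err = (q, sy, sz), err
```

Repeating this with different simulated sampling grids and noise levels is exactly the kind of measurement-strategy trade study the abstract describes.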
Sensitivity analysis of the space shuttle to ascent wind profiles
NASA Technical Reports Server (NTRS)
Smith, O. E.; Austin, L. D., Jr.
1982-01-01
A parametric sensitivity analysis of the space shuttle ascent flight to the wind profile is presented. Engineering systems parameters are obtained by flight simulations using wind profile models and samples of detailed (Jimsphere) wind profile measurements. The wind models used are the synthetic vector wind model, with and without the design gust, and a model of the vector wind change with respect to time. These comparative analyses give insight into the contribution of winds to ascent subsystem flight parameters.
Time-delayed chameleon: Analysis, synchronization and FPGA implementation
NASA Astrophysics Data System (ADS)
Rajagopal, Karthikeyan; Jafari, Sajad; Laarem, Guessas
2017-12-01
In this paper we report a time-delayed chameleon-like chaotic system which can belong to different families of chaotic attractors depending on the choice of parameters. Such a characteristic of self-excited and hidden chaotic flows in a simple 3D system with time delay has not been reported earlier. The dynamics of the proposed time-delayed system are analysed in both time-delay space and parameter space. A novel adaptive modified functional projective lag synchronization algorithm is derived for synchronizing identical time-delayed chameleon systems with uncertain parameters. The proposed time-delayed systems and the synchronization algorithm with controllers and parameter estimates are then implemented in FPGA using hardware-software co-simulation, and the results are presented.
Evolution of axis ratios from phase space dynamics of triaxial collapse
NASA Astrophysics Data System (ADS)
Nadkarni-Ghosh, Sharvari; Arya, Bhaskar
2018-04-01
We investigate the evolution of axis ratios of triaxial haloes using the phase space description of triaxial collapse. In this formulation, the evolution of the triaxial ellipsoid is described in terms of the dynamics of eigenvalues of three important tensors: the Hessian of the gravitational potential, the tensor of velocity derivatives, and the deformation tensor. The eigenvalues of the deformation tensor are directly related to the parameters that describe triaxiality, namely, the minor-to-major and intermediate-to-major axes ratios (s and q) and the triaxiality parameter T. Using the phase space equations, we evolve the eigenvalues and examine the evolution of the probability distribution function (PDF) of the axes ratios as a function of mass scale and redshift for Gaussian initial conditions. We find that the ellipticity and prolateness increase with decreasing mass scale and decreasing redshift. These trends agree with previous analytic studies but differ from numerical simulations. However, the PDF of the scaled parameter q̃ = (q - s)/(1 - s) follows a universal distribution over two decades in mass range and redshifts which is in qualitative agreement with the universality for conditional PDF reported in simulations. We further show using the phase space dynamics that, in fact, q̃ is a phase space invariant and is conserved individually for each halo. These results demonstrate that the phase space analysis is a useful tool that provides a different perspective on the evolution of perturbations and can be applied to more sophisticated models in the future.
An Integrated Approach to Parameter Learning in Infinite-Dimensional Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, Zachary M.; Wendelberger, Joanne Roth
The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated processes, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, being able to navigate parameter-space becomes highly non-trivial, especially considering that accurate simulations can be expensive both in terms of time and money. Existing solutions include batch-parallel simulations, high-dimensional, derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling together all three of the techniques just described by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way as well as viewing the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next.
Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and more quickly arrive at the desired parameter set.
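The combination of a batch of greedy random walkers with PCA-based progress tracking can be sketched as follows (a stand-in 1-D function-matching problem, not the neutron-diffusion PDE):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 50)
target = np.sin(2 * np.pi * t)                # stand-in "observed" control function

def loss(f):
    return np.sum((f - target) ** 2)

# Batch-parallel random walk: several walkers explore the 50-dimensional
# function space, each accepting only improving moves.
walkers = rng.normal(0, 0.5, size=(8, t.size))
init_losses = np.array([loss(w) for w in walkers])
history = []
for _ in range(400):
    for i in range(len(walkers)):
        prop = walkers[i] + rng.normal(0, 0.05, size=t.size)
        if loss(prop) < loss(walkers[i]):
            walkers[i] = prop
        history.append(walkers[i].copy())
history = np.array(history)

# fPCA-style reduction: PCA of the visited functions gives a low-dimensional
# picture of the search, suitable for human-in-the-loop steering.
mean = history.mean(axis=0)
_, _, Vt = np.linalg.svd(history - mean, full_matrices=False)
coords = (history - mean) @ Vt[:2].T           # 2-D trajectory of the search
```

Plotting `coords` colored by loss is the visually simple progress view the abstract mentions; a human could then re-seed walkers toward promising regions of the reduced space.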
Optimal design of wind barriers using 3D computational fluid dynamics simulations
NASA Astrophysics Data System (ADS)
Fang, H.; Wu, X.; Yang, X.
2017-12-01
Desertification is a significant global environmental and ecological problem that requires human-regulated control and management. Wind barriers are commonly used to reduce wind velocity or trap drifting sand in arid or semi-arid areas. Therefore, optimal design of wind barriers becomes critical in Aeolian engineering. In the current study, we perform 3D computational fluid dynamics (CFD) simulations for flow passing through wind barriers with different structural parameters. To validate the simulation results, we first inter-compare the simulated flow field results with those from both wind-tunnel experiments and field measurements. Quantitative analyses of the shelter effect are then conducted based on a series of simulations with different structural parameters (such as wind barrier porosity, row numbers, inter-row spacing and belt schemes). The results show that wind barriers with porosity of 0.35 could provide the longest shelter distance (i.e., where the wind velocity reduction is more than 50%) and are therefore recommended in engineering designs. To determine the optimal row number and belt scheme, we introduce a cost function that takes both wind-velocity reduction effects and economical expense into account. The calculated cost function shows that a 3-row-belt scheme with inter-row spacing of 6h (h being the height of the wind barriers) and inter-belt spacing of 12h is the most effective.
NASA Technical Reports Server (NTRS)
Hartley, Craig S.
1990-01-01
To augment the capabilities of the Space Transportation System, NASA has funded studies and developed programs aimed at developing reusable, remotely piloted spacecraft and satellite servicing systems capable of delivering, retrieving, and servicing payloads at altitudes and inclinations beyond the reach of the present Shuttle Orbiters. Since the mid-1970s, researchers at the Martin Marietta Astronautics Group Space Operations Simulation (SOS) Laboratory have been engaged in investigations of remotely piloted and supervised autonomous spacecraft operations. These investigations were based on high fidelity, real-time simulations and have covered a wide range of human factors issues related to controllability. Among these are: (1) mission conditions, including thruster plume impingements and signal time delays; (2) vehicle performance variables, including control authority, control harmony, minimum impulse, and cross coupling of accelerations; (3) maneuvering task requirements such as target distance and dynamics; (4) control parameters including various control modes and rate/displacement deadbands; and (5) display parameters involving camera placement and function, visual aids, and presentation of operational feedback from the spacecraft. This presentation includes a brief description of the capabilities of the SOS Lab to simulate real-time free-flyer operations using live video, advanced technology ground and on-orbit workstations, and sophisticated computer models of on-orbit spacecraft behavior. Sample results from human factors studies in the five categories cited above are provided.
NASA Astrophysics Data System (ADS)
Saleem, M.; Resmi, L.; Misra, Kuntal; Pai, Archana; Arun, K. G.
2018-03-01
Short duration Gamma Ray Bursts (SGRB) and their afterglows are among the most promising electromagnetic (EM) counterparts of Neutron Star (NS) mergers. The afterglow emission is broad-band, visible across the entire electromagnetic window from γ-ray to radio frequencies. The flux evolution in these frequencies is sensitive to the multidimensional afterglow physical parameter space. Observations of gravitational waves (GW) from binary neutron star (BNS) mergers in spatial and temporal coincidence with SGRB and associated afterglows can provide valuable constraints on afterglow physics. We run simulations of GW-detected BNS events and, assuming that each is associated with a GRB jet that also produces an afterglow, investigate how detections or non-detections in X-ray, optical and radio frequencies can be influenced by the parameter space. We narrow down the regions of afterglow parameter space for a uniform top-hat jet model which would result in different detection scenarios. We list inferences which can be drawn on the physics of GRB afterglows from multimessenger astronomy with coincident GW-EM observations.
NASA Astrophysics Data System (ADS)
Dang, Van Tuan; Lafon, Pascal; Labergere, Carl
2017-10-01
In this work, a combination of Proper Orthogonal Decomposition (POD) and Radial Basis Functions (RBF) is proposed to build a surrogate model based on the Benchmark Springback 3D bending problem from the Numisheet 2011 congress. The influence of two design parameters, the geometrical parameter of the die radius and the process parameter of the blank holder force, on the springback of the sheet after a stamping operation is analyzed. A classical full-factorial Design of Experiments (DoE) samples the parameter space, and the sample points serve as input data for finite element method (FEM) numerical simulations of the sheet metal stamping process. The basic idea is to consider the design parameters as additional dimensions of the displacement field solution. The order of the resulting high-fidelity model is reduced with the POD method, which performs model space reduction and yields the basis functions of the low-order model. Specifically, the snapshot method is used, in which the basis functions are derived from the deviation matrix of the final displacement fields of the FEM simulations. The obtained basis functions are then used to determine the POD coefficients, and RBF interpolation of these coefficients is performed over the parameter space. Finally, the presented POD-RBF approach, used here for shape optimization, achieves high accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Sang Beom; Dsilva, Carmeline J.; Debenedetti, Pablo G., E-mail: pdebene@princeton.edu
Understanding the mechanisms by which proteins fold from disordered amino-acid chains to spatially ordered structures remains an area of active inquiry. Molecular simulations can provide atomistic details of the folding dynamics which complement experimental findings. Conventional order parameters, such as root-mean-square deviation and radius of gyration, provide structural information but fail to capture the underlying dynamics of the protein folding process. It is therefore advantageous to adopt a method that can systematically analyze simulation data to extract relevant structural as well as dynamical information. The nonlinear dimensionality reduction technique known as diffusion maps automatically embeds the high-dimensional folding trajectories in a lower-dimensional space from which one can more easily visualize folding pathways, assuming the data lie approximately on a lower-dimensional manifold. The eigenvectors that parametrize the low-dimensional space, furthermore, are determined systematically, rather than chosen heuristically, as is done with phenomenological order parameters. We demonstrate that diffusion maps can effectively characterize the folding process of a Trp-cage miniprotein. By embedding molecular dynamics simulation trajectories of Trp-cage folding in diffusion map space, we identify two folding pathways and intermediate structures that are consistent with previous studies, demonstrating that this technique can be employed as an effective way of analyzing and constructing protein folding pathways from molecular simulations.
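A self-contained diffusion-maps sketch (synthetic points on a noisy curve in 10 dimensions, not Trp-cage trajectories) shows how the leading nontrivial eigenvector of the diffusion operator recovers an intrinsic coordinate:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "trajectory" data lying near a 1-D curve: a noisy helix in 10-D,
# standing in for high-dimensional simulation snapshots.
t = np.sort(rng.uniform(0, 4 * np.pi, 400))
X = np.zeros((t.size, 10))
X[:, 0], X[:, 1], X[:, 2] = np.cos(t), np.sin(t), 0.5 * t
X += 0.02 * rng.normal(size=X.shape)

# Diffusion map: Gaussian kernel, density normalization (alpha = 1), then the
# eigenvectors of the Markov matrix parametrize the intrinsic coordinates.
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
eps = 0.1 * np.median(d2)                 # kernel scale: a common heuristic choice
K = np.exp(-d2 / eps)
q = K.sum(axis=1)
K = K / np.outer(q, q)                    # remove sampling-density effects
P = K / K.sum(axis=1, keepdims=True)      # row-stochastic Markov matrix
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
psi1 = vecs[:, order[1]].real             # first nontrivial diffusion coordinate

# For a 1-D manifold, psi1 should be monotone in the curve parameter.
corr = abs(np.corrcoef(psi1, t)[0, 1])
```

Here the single diffusion coordinate recovers the ordering along the curve; for folding trajectories the analogous coordinates separate pathways and intermediates, as the abstract describes.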
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-07-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Second, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng
2013-06-05
In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power to fit the target data. Here, we developed a novel partition function-based weighting scheme, which not only fits the target potential energies exponentially, like the general Boltzmann weighting method, but also reduces the effect of fitting errors that lead to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit the high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties over a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than to partial atomic charge parameters in these systems, although the electrostatic interactions are still important for energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied in developing any type of force field parameters. Copyright © 2013 Wiley Periodicals, Inc.
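The idea of energy-dependent weighting in PES fitting can be sketched with a toy Lennard-Jones target (the paper's partition-function scheme is more elaborate than the plain Boltzmann-style weights used here, and all numerical values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical target PES: Lennard-Jones energies along a distance scan, with
# small "ab initio" noise standing in for the quantum-chemistry target data.
r = np.linspace(0.95, 2.5, 60)

def lj(eps_, sig):
    return 4 * eps_ * ((sig / r) ** 12 - (sig / r) ** 6)

E_target = lj(0.3, 1.0) + 0.01 * rng.normal(size=r.size)

def fit(weights):
    """Weighted least-squares fit of (eps, sigma) over a parameter grid."""
    best, best_err = None, np.inf
    for e in np.linspace(0.1, 0.6, 51):
        for s in np.linspace(0.9, 1.1, 41):
            err = np.sum(weights * (lj(e, s) - E_target) ** 2)
            if err < best_err:
                best, best_err = (e, s), err
    return best

# Boltzmann-type weights emphasize the low-energy (attractive-well) region,
# which dominates thermodynamic behavior; normalization bounds the weight sum.
kT = 0.1
w_boltz = np.exp(-E_target / kT)
w_boltz /= w_boltz.sum()
fit_uniform = fit(np.full(r.size, 1.0 / r.size))
fit_boltz = fit(w_boltz)
```

In this clean toy problem both weightings recover the true parameters; the weighting choice matters when the target data carry region-dependent errors, which is the situation the paper addresses.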
Minerva exoplanet detection sensitivity from simulated observations
NASA Astrophysics Data System (ADS)
McCrady, Nate; Nava, C.
2014-01-01
Small rocky planets induce radial velocity signals that are difficult to detect in the presence of stellar noise sources of comparable or larger amplitude. Minerva is a dedicated, robotic observatory that will attain 1 meter per second precision to detect these rocky planets in the habitable zone around nearby stars. We present results of an ongoing project investigating Minerva’s planet detection sensitivity as a function of observational cadence, planet mass, and orbital parameters (period, eccentricity, and argument of periastron). Radial velocity data is simulated with realistic observing cadence, accounting for weather patterns at Mt. Hopkins, Arizona. Instrumental and stellar noise are added to the simulated observations, including effects of oscillation, jitter, starspots and rotation. We extract orbital parameters from the simulated RV data using the RVLIN code. A Monte Carlo analysis is used to explore the parameter space and evaluate planet detection completeness. Our results will inform the Minerva observing strategy by providing a quantitative measure of planet detection sensitivity as a function of orbital parameters and cadence.
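The injection-recovery logic behind such a sensitivity study can be sketched as follows (simplified circular orbits, white noise, and a known period; the study's cadence model, noise sources, and orbit fitting are richer):

```python
import numpy as np

rng = np.random.default_rng(8)

def detect_prob(K, P, n_obs=60, baseline=120.0, sigma=1.0, n_trials=200):
    """Injection-recovery: fraction of simulated RV datasets in which a circular
    signal of semi-amplitude K (m/s) and period P (days) is recovered above noise."""
    found = 0
    for _ in range(n_trials):
        t = np.sort(rng.uniform(0, baseline, n_obs))   # irregular observing cadence
        phi = rng.uniform(0, 2 * np.pi)
        rv = K * np.sin(2 * np.pi * t / P + phi) + rng.normal(0, sigma, n_obs)
        # Least-squares amplitude at the known period (a stand-in for a full
        # Keplerian fit such as RVLIN performs).
        A = np.column_stack([np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)])
        coef, *_ = np.linalg.lstsq(A, rv, rcond=None)
        K_hat = np.hypot(*coef)
        if K_hat > 4 * sigma / np.sqrt(n_obs / 2):     # ~4-sigma amplitude threshold
            found += 1
    return found / n_trials

completeness_strong = detect_prob(K=3.0, P=10.0)   # well above the noise floor
completeness_weak = detect_prob(K=0.2, P=10.0)     # buried in the noise
```

Sweeping K, P, and the cadence parameters in such a Monte Carlo produces exactly the completeness maps the abstract describes.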
Parametric behaviors of CLUBB in simulations of low clouds in the Community Atmosphere Model (CAM)
Guo, Zhun; Wang, Minghuai; Qian, Yun; ...
2015-07-03
In this study, we investigate the sensitivity of simulated low clouds to 14 selected tunable parameters of Cloud Layers Unified By Binormals (CLUBB), a higher-order closure (HOC) scheme, and 4 parameters of the Zhang-McFarlane (ZM) deep convection scheme in the Community Atmosphere Model version 5 (CAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space, and a generalized linear model is applied to study the responses of simulated cloud fields to the tunable parameters. Our results show that the variance in simulated low-cloud properties (cloud fraction and liquid water path) can be explained by the selected tunable parameters in two different ways: macrophysics itself and its interaction with microphysics. First, the parameters related to the dynamic and thermodynamic turbulent structure and the double-Gaussian closure are found to be the most influential for simulating low clouds. The spatial distributions of the parameter contributions show clear cloud-regime dependence. Second, because of the coupling between cloud macrophysics and cloud microphysics, the coefficient of the dissipation term in the total water variance equation is influential. This parameter affects the variance of in-cloud cloud water, which further influences microphysical process rates, such as autoconversion, and eventually low-cloud fraction. Furthermore, this study improves understanding of HOC behavior associated with parameter uncertainties and provides valuable insights into the interaction of macrophysics and microphysics.
Numerical simulation of the geodynamo reaches Earth's core dynamical regime
NASA Astrophysics Data System (ADS)
Aubert, J.; Gastine, T.; Fournier, A.
2016-12-01
Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e. whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E = 10^-8. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth. The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action.
We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.
Caruso, Geoffrey; Cavailhès, Jean; Peeters, Dominique; Thomas, Isabelle; Frankhauser, Pierre; Vuidel, Gilles
2015-01-01
This paper describes a dataset of 6,284 land transaction prices and plot surfaces in 3 medium-sized French cities (Besançon, Dijon and Brest). The dataset includes road accessibility, as obtained from a minimization algorithm, and the amount of green space available to households in the neighborhood of each transaction, as evaluated from a land cover dataset. Beyond the data presentation, the paper describes how these variables can be used to estimate the non-observable parameters of a residential choice function explicitly derived from a microeconomic model. The estimates are used by Caruso et al. (2015) to run a calibrated microeconomic urban growth simulation model in which households are assumed to trade off accessibility and local green space amenities. PMID:26958606
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
This manual describes how to use the Emulation Simulation Computer Model (ESCM). Based on G189A, ESCM computes the transient performance of a Space Station atmospheric revitalization subsystem (ARS) with CO2 removal provided by a solid amine, water-desorbed subsystem called SAWD. Many performance parameters are computed, including cabin CO2 partial pressure, relative humidity, temperature, O2 partial pressure, and dew point. The program allows the user to simulate various combinations of man loading, metabolic profiles, and cabin volumes, as well as certain hypothesized failures that could occur.
Experimental simulation of space plasma interactions with high voltage solar arrays
NASA Technical Reports Server (NTRS)
Stillwell, R. P.; Kaufman, H. R.; Robinson, R. S.
1981-01-01
Operating high voltage solar arrays in the space environment can result in anomalously large currents being collected through small insulation defects. Tests of simulated defects have been conducted in a 45-cm vacuum chamber with plasma densities of 100,000 to 1,000,000/cu cm. Plasmas were generated using an argon hollow cathode. The solar array elements were simulated by placing a thin sheet of polyimide (Kapton) insulation with a small hole in it over a conductor. The parameters tested were hole size, adhesive, surface roughening, sample temperature, insulator thickness, and insulator area. These results are discussed along with some preliminary empirical correlations.
NASA Astrophysics Data System (ADS)
Guerrero, J.; Halldin, S.; Xu, C.; Lundin, L.
2011-12-01
Distributed hydrological models are important tools in water management, as they account for the spatial variability of hydrological data and can produce spatially distributed outputs. They can directly incorporate and assess potential changes in the characteristics of our basins. A recognized problem for models in general is equifinality, which is only exacerbated for distributed models, which tend to have a large number of parameters. We need to deal with the fundamentally ill-posed nature of the problem that such models force us to face: a large number of parameters and very few variables that can be used to constrain them, often only the catchment discharge. There is a growing but still limited literature showing how the internal states of a distributed model can be used to calibrate and validate its predictions. In this paper, a distributed version of WASMOD, a conceptual rainfall-runoff model with only three parameters, combined with a routing algorithm based on the high-resolution HydroSHEDS data, was used to simulate the discharge in the Paso La Ceiba basin in Honduras. The parameter space was explored using Monte Carlo simulations, and the region of space containing the parameter sets considered behavioral according to two different criteria was delimited using the geometric concept of alpha-shapes. Discharge data from five internal sub-basins were used to aid in the calibration of the model and to answer the following questions: Can this information improve the simulations at the outlet of the catchment, or decrease their uncertainty? And, after reducing the number of model parameters needing calibration through sensitivity analysis: Is it possible to relate them to basin characteristics? The analysis revealed that in most cases the internal discharge data can be used to reduce the uncertainty in the discharge at the outlet, albeit with little improvement in the overall simulation results.
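The Monte Carlo screening for behavioral parameter sets described above can be sketched as follows. The three-parameter runoff model here is a deliberately crude stand-in for WASMOD (whose equations are not reproduced), and the Nash-Sutcliffe threshold of 0.8 is an illustrative behavioral criterion:

```python
import random

def toy_runoff_model(rain, p):
    """Toy 3-parameter conceptual runoff model (an illustrative stand-in
    for WASMOD, not its actual equations)."""
    a, b, c = p          # quick-flow coefficient, evaporation factor, baseflow
    storage, flows = 10.0, []
    for r in rain:
        storage += r
        evap = b * storage          # simplistic evaporation loss
        q = a * storage + c         # quick flow plus baseflow
        storage = max(storage - evap - q, 0.0)
        flows.append(q)
    return flows

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

random.seed(42)
rain = [random.uniform(0.0, 20.0) for _ in range(100)]
true_p = (0.30, 0.10, 0.50)
observed = toy_runoff_model(rain, true_p)   # synthetic "observations"

# Monte Carlo exploration: sample parameter sets uniformly and keep the
# "behavioral" ones whose efficiency exceeds the chosen threshold.
behavioral = []
for _ in range(5000):
    p = (random.uniform(0.01, 0.9),
         random.uniform(0.01, 0.5),
         random.uniform(0.0, 2.0))
    if nash_sutcliffe(observed, toy_runoff_model(rain, p)) > 0.8:
        behavioral.append(p)
```

In the paper's setting, the behavioral region collected this way would then be delimited geometrically with alpha-shapes; that step is omitted here.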
Predicting Instability Timescales in Closely-Packed Planetary Systems
NASA Astrophysics Data System (ADS)
Tamayo, Daniel; Hadden, Samuel; Hussain, Naireen; Silburt, Ari; Gilbertson, Christian; Rein, Hanno; Menou, Kristen
2018-04-01
Many of the multi-planet systems discovered around other stars are maximally packed. This implies that simulations with masses or orbital parameters too far from the actual values will destabilize on short timescales; thus, long-term dynamics allows one to constrain the orbital architectures of many closely packed multi-planet systems. A central challenge in such efforts is the large computational cost of N-body simulations, which precludes a full survey of the high-dimensional parameter space of orbital architectures allowed by observations. I will present our recent successes in training machine learning models capable of reliably predicting orbital stability a million times faster than N-body simulations. By engineering dynamically relevant features that we feed to a gradient-boosted decision tree algorithm (XGBoost), we are able to achieve a precision and recall of 90% on a holdout test set of N-body simulations. This opens a wide discovery space for characterizing new exoplanet discoveries and for elucidating how orbital architectures evolve through time as the next generation of spaceborne exoplanet surveys prepares for launch this year.
A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components
NASA Technical Reports Server (NTRS)
Abernethy, K.
1986-01-01
The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo computer simulation study to evaluate the usefulness of Weibull methods for samples with a very small number of failures and extensive censoring are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. The previously documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, as are the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
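A minimal sketch of the random-censoring model described above: Weibull failure times are censored by independent uniform censoring times, and only the smaller of the two is observed. The shape, scale, and sample-size values are illustrative, not SSME data:

```python
import math
import random

def weibull_sample(shape, scale, rng):
    """Draw one Weibull(shape, scale) variate by inverting the CDF."""
    u = rng.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def censored_sample(shape, scale, n, censor_max, rng):
    """Failure times censored by independent Uniform(0, censor_max)
    censoring times, mimicking the random-censoring model: each unit
    yields (observed time, failure flag), where the flag is False when
    the censoring time arrived first."""
    data = []
    for _ in range(n):
        t = weibull_sample(shape, scale, rng)
        c = rng.random() * censor_max
        data.append((min(t, c), t <= c))
    return data

rng = random.Random(7)
sample = censored_sample(shape=2.0, scale=100.0, n=50, censor_max=150.0, rng=rng)
failures = sum(1 for _, failed in sample if failed)
```

A likelihood-ratio confidence interval, as in the study, would then be built from the censored likelihood of `sample`; that estimation step is not shown here.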
Key parameters design of an aerial target detection system on a space-based platform
NASA Astrophysics Data System (ADS)
Zhu, Hanlu; Li, Yejin; Hu, Tingliang; Rao, Peng
2018-02-01
To ensure the flight safety of aerial aircraft and avoid a recurrence of aircraft collisions, a method of multi-information fusion is proposed to design the key parameters for aircraft target detection on a space-based platform. The key parameters of the detection waveband and spatial resolution were determined using the target-background absolute contrast, target-background relative contrast, and signal-to-clutter ratio. This study also presents the signal-to-interference ratio for analyzing system performance. Key parameters are obtained through the simulation of a specific aircraft. The simulation results show that the boundary ground sampling distances are 30 and 35 m in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for most aircraft detection, and that the most reasonable detection wavebands are 3.4 to 4.2 μm and 4.35 to 4.5 μm in the MWIR band, and 9.2 to 9.8 μm in the LWIR band. We also found that the direction of detection has a great impact on detection efficiency, especially in the MWIR bands.
NASA Technical Reports Server (NTRS)
Fletcher, Lauren E.; Aldridge, Ann M.; Wheelwright, Charles; Maida, James
1997-01-01
Task illumination has a major impact on human performance: What a person can perceive in his environment significantly affects his ability to perform tasks, especially in space's harsh environment. Training for lighting conditions in space has long depended on physical models and simulations to emulate the effect of lighting, but such tests are expensive and time-consuming. To evaluate lighting conditions not easily simulated on Earth, personnel at NASA Johnson Space Center's (JSC) Graphics Research and Analysis Facility (GRAF) have been developing computerized simulations of various illumination conditions using the ray-tracing program, Radiance, developed by Greg Ward at Lawrence Berkeley Laboratory. Because these computer simulations are only as accurate as the data used, accurate information about the reflectance properties of materials and light distributions is needed. JSC's Lighting Environment Test Facility (LETF) personnel gathered material reflectance properties for a large number of paints, metals, and cloths used in the Space Shuttle and Space Station programs, and processed these data into reflectance parameters needed for the computer simulations. They also gathered lamp distribution data for most of the light sources used, and validated the ability to accurately simulate lighting levels by comparing predictions with measurements for several ground-based tests. The result of this study is a database of material reflectance properties for a wide variety of materials, and lighting information for most of the standard light sources used in the Shuttle/Station programs. The combination of the Radiance program and GRAF's graphics capability form a validated computerized lighting simulation capability for NASA.
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen
2016-01-01
Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
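A compact sketch of the list-based scheme the abstract describes: a temperature list is seeded from the costs of random 2-opt moves, its maximum drives Metropolis acceptance, and accepted uphill moves feed new temperatures back into the list. Details of the published LBSA update rule (e.g., per-sweep averaging of candidate temperatures) are simplified here:

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def lbsa_tsp(dist, list_len=30, iters=2000, seed=1):
    """List-based SA sketch for the TSP; returns the best tour length found."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur = tour_length(tour, dist)
    best = cur
    # Seed the temperature list with the absolute costs of random 2-opt moves.
    temps = []
    for _ in range(list_len):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        temps.append(abs(tour_length(cand, dist) - cur) + 1e-9)
    for _ in range(iters):
        t_max = max(temps)                      # max temperature drives acceptance
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(cand, dist) - cur
        if delta <= 0 or rng.random() < math.exp(-delta / t_max):
            if delta > 0:
                # Accepted an uphill move: replace the max temperature with
                # one implied by this move, adapting the list to the landscape.
                temps.remove(t_max)
                temps.append(-delta / math.log(max(rng.random(), 1e-12)))
            tour, cur = cand, cur + delta
            best = min(best, cur)
    return best

# Small random Euclidean instance as a usage example
rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(12)]
dist = [[math.hypot(px - qx, py - qy) for qx, qy in pts] for px, py in pts]
best = lbsa_tsp(dist)
```

Because the only tuning knobs are the list length and iteration count, there is no hand-crafted cooling rate, which is the simplification the paper is after.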
An IBM PC-based math model for space station solar array simulation
NASA Technical Reports Server (NTRS)
Emanuel, E. M.
1986-01-01
This report discusses and documents the design, development, and verification of a microcomputer-based solar cell math model for simulating the Space Station's solar array Initial Operational Capability (IOC) reference configuration. The array model is developed utilizing a linear solar cell dc math model requiring only five input parameters: short circuit current, open circuit voltage, maximum power voltage, maximum power current, and orbit inclination. The accuracy of this model is investigated using actual solar array on orbit electrical data derived from the Solar Array Flight Experiment/Dynamic Augmentation Experiment (SAFE/DAE), conducted during the STS-41D mission. This simulator provides real-time simulated performance data during the steady state portion of the Space Station orbit (i.e., array fully exposed to sunlight). Eclipse to sunlight transients and shadowing effects are not included in the analysis, but are discussed briefly. Integrating the Solar Array Simulator (SAS) into the Power Management and Distribution (PMAD) subsystem is also discussed.
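The five-parameter linear cell model lends itself to a simple sketch. The piecewise-linear I-V curve below is built from four of the five inputs (orbit inclination, which scales illumination, is omitted), and the sample cell values are illustrative, not taken from the report:

```python
def cell_current(v, isc, voc, vmp, imp):
    """Piecewise-linear approximation of a solar cell I-V curve using only
    short-circuit current, open-circuit voltage, and the max-power point."""
    if v <= vmp:
        # gentle slope from (0, Isc) down to the max-power point (Vmp, Imp)
        return isc + (imp - isc) * v / vmp
    if v >= voc:
        return 0.0
    # steep linear drop from the max-power point to open circuit
    return imp * (voc - v) / (voc - vmp)

# Sample values loosely typical of a silicon cell (illustrative only)
isc, voc, vmp, imp = 3.0, 0.60, 0.50, 2.8

# Sweep the voltage range and locate the operating point of maximum power
powers = [(v, v * cell_current(v, isc, voc, vmp, imp))
          for v in [0.1 * voc * k for k in range(11)]]
v_best, p_best = max(powers, key=lambda t: t[1])
```

An array-level model, as in the report, would scale this cell curve by the series/parallel cell counts and by the illumination implied by the orbit.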
Comparing a discrete and continuum model of the intestinal crypt
Murray, Philip J.; Walter, Alex; Fletcher, Alex G.; Edwards, Carina M.; Tindall, Marcus J.; Maini, Philip K.
2011-01-01
The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalisations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts. PMID:21411869
NASA Technical Reports Server (NTRS)
Jennings, Esther H.; Nguyen, Sam P.; Wang, Shin-Ywan; Woo, Simon S.
2008-01-01
NASA's planned Lunar missions will involve multiple NASA centers, with each participating center having a specific role and specialization. In this vision, the Constellation program (CxP)'s Distributed System Integration Laboratories (DSIL) architecture consists of multiple System Integration Labs (SILs), with simulators, emulators, test labs and control centers interacting with each other over a broadband network to perform test and verification for mission scenarios. To support the end-to-end simulation and emulation effort of NASA's exploration initiatives, different NASA centers are interconnected to participate in distributed simulations. Currently, DSIL has interconnections among the following NASA centers: Johnson Space Center (JSC), Kennedy Space Center (KSC), Marshall Space Flight Center (MSFC) and the Jet Propulsion Laboratory (JPL). Through interconnections and interactions among different NASA centers, critical resources and data can be shared, while independent simulations can be performed simultaneously at different NASA locations, to effectively utilize the simulation and emulation capabilities at each center. Furthermore, the development of DSIL can maximally leverage existing project simulation and testing plans. In this work, we describe the specific role and development activities at JPL for the Space Communications and Navigation Network (SCaN) simulator, using the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) tool to simulate communications effects among mission assets. Using MACHETE, different space network configurations among spacecraft and ground systems with various parameter sets can be simulated. Data necessary for the tracking, navigation, and guidance of spacecraft such as the Crew Exploration Vehicle (CEV), Crew Launch Vehicle (CLV), and Lunar Relay Satellite (LRS), along with orbit calculation data, are disseminated to different NASA centers and updated periodically using the High Level Architecture (HLA).
In addition, the performance of DSIL under different traffic loads with different mixes of data and priorities is evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.
Regional cloud-permitting model simulations of cloud populations observed during the 2011 ARM Madden-Julian Oscillation Investigation Experiment / Dynamics of the Madden-Julian Oscillation (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain rate statistics to the parameters and parameterization of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates the rain rate from large and deep convective cores. Sensitivity runs varying parameters that affect the rain drop or ice particle size distribution (e.g., a more aggressive break-up process) generally reduce the bias in rain-rate and boundary layer temperature statistics, as the smaller particles become more vulnerable to evaporation. Furthermore, significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while the statistics worsen at 4 km grid spacing as increased turbulence enhances evaporation. The results suggest that modulating evaporation processes, through the parameterization of turbulent mixing and the break-up of hydrometeors, may provide a potential avenue for correcting cloud statistics and the associated boundary layer temperature biases in regional and global cloud-permitting model simulations.
NASA Astrophysics Data System (ADS)
Tomita, Motohiro; Ogasawara, Masataka; Terada, Takuya; Watanabe, Takanobu
2018-04-01
We provide the parameters of Stillinger-Weber potentials for GeSiSn ternary mixed systems. These parameters can be used in molecular dynamics (MD) simulations to reproduce phonon properties and thermal conductivities. The phonon dispersion relation is derived from the dynamical structure factor, which is calculated by the space-time Fourier transform of atomic trajectories in an MD simulation. The phonon properties and thermal conductivities of GeSiSn ternary crystals calculated using these parameters largely reproduce both the findings of previous experiments and earlier MD-based calculations. In particular, the atomic composition dependence of these properties in GeSiSn ternary crystals, as obtained by previous experimental and theoretical studies, was almost exactly reproduced by our proposed parameters. Moreover, the results of the MD simulation agree with previous calculations made using a time-independent phonon Boltzmann transport equation with complicated scattering mechanisms. These scattering mechanisms are very important in complicated nanostructures, as they allow the heat-transfer properties to be calculated more accurately by MD simulations. This work enables us to predict the phonon- and heat-related properties of bulk group IV alloys, especially ternary alloys.
Improving parallel I/O autotuning with performance modeling
Behzad, Babak; Byna, Surendra; Wild, Stefan M.; ...
2014-01-01
Various layers of the parallel I/O subsystem offer tunable parameters for improving I/O performance on large-scale computers. However, searching through a large parameter space is challenging. We are working towards an autotuning framework for determining the parallel I/O parameters that can achieve good I/O performance for different data write patterns. In this paper, we characterize parallel I/O and discuss the development of predictive models for use in effectively reducing the parameter space. Furthermore, applying our technique to tuning an I/O kernel derived from a large-scale simulation code shows that the search time can be reduced from 12 hours to 2 hours, while achieving a 54X I/O performance speedup.
NASA Technical Reports Server (NTRS)
Allen, R. W.; Jex, H. R.
1973-01-01
In order to test various components of a regenerative life support system and to obtain data on the physiological and psychological effects of long duration exposure to confinement in a space station atmosphere, four carefully screened young men were sealed in a space station simulator for 90 days and administered a tracking test battery. The battery included a clinical test (Critical Instability Task) designed to measure a subject's dynamic time delay, and a more conventional steady tracking task, during which dynamic response (describing functions) and performance measures were obtained. Good correlation was noted between the clinical critical instability scores and more detailed tracking parameters such as dynamic time delay and gain-crossover frequency. The levels of each parameter span the range observed with professional pilots and astronaut candidates tested previously. The chamber environment caused no significant decrement on the average crewman's dynamic response behavior, and the subjects continued to improve slightly in their tracking skills during the 90-day confinement period.
Extending the modeling of the anisotropic galaxy power spectrum to k = 0.4 h Mpc⁻¹
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hand, Nick; Seljak, Uroš; Beutler, Florian
We present a model for the redshift-space power spectrum of galaxies and demonstrate its accuracy in describing the monopole, quadrupole, and hexadecapole of the galaxy density field down to scales of k = 0.4 h Mpc⁻¹. The model describes the clustering of galaxies in the context of a halo model and the clustering of the underlying halos in redshift space using a combination of Eulerian perturbation theory and N-body simulations. The modeling of redshift-space distortions is done using the so-called distribution function approach. The final model has 13 free parameters, each of which is physically motivated rather than a nuisance parameter, which allows the use of well-motivated priors. We account for the Finger-of-God effect from centrals and from both isolated and non-isolated satellites, rather than using a single velocity dispersion to describe the combined effect. We test and validate the accuracy of the model on several sets of high-fidelity N-body simulations, as well as realistic mock catalogs designed to simulate the BOSS DR12 CMASS data set. The suite of simulations covers a range of cosmologies and galaxy bias models, providing a rigorous test of the level of theoretical systematics present in the model. The level of bias in the recovered values of fσ₈ is found to be small. When including scales to k = 0.4 h Mpc⁻¹, we find 15-30% gains in the statistical precision of fσ₈ relative to k = 0.2 h Mpc⁻¹ and a roughly 10-15% improvement for the perpendicular Alcock-Paczynski parameter α⊥. Using the BOSS DR12 CMASS mocks as a benchmark for comparison, we estimate an uncertainty on fσ₈ that is ∼10-20% larger than other similar Fourier-space RSD models in the literature that use k ≤ 0.2 h Mpc⁻¹, suggesting that those models likely have a too-limited parametrization.
NASA Astrophysics Data System (ADS)
Santabarbara, Ignacio; Haas, Edwin; Kraus, David; Herrera, Saul; Klatt, Steffen; Kiese, Ralf
2014-05-01
When using biogeochemical models to estimate greenhouse gas emissions at site to regional/national levels, the assessment and quantification of the uncertainties of simulation results are of significant importance. The uncertainties in simulation results of process-based ecosystem models may result from uncertainties in the process parameters that describe the model's processes, from model structure inadequacy, and from uncertainties in the observations. Data for the development and testing of the uncertainty analysis were crop yield observations and measurements of soil fluxes of nitrous oxide (N2O) and carbon dioxide (CO2) from 8 arable sites across Europe. Using the process-based biogeochemical model LandscapeDNDC to simulate crop yields and N2O and CO2 emissions, our aim is to assess the simulation uncertainty by setting up a Bayesian framework based on the Metropolis-Hastings algorithm. Using the Gelman convergence criterion and parallel computing techniques, we let multiple Markov chains run independently in parallel, creating random walks that estimate the joint model parameter distribution. Through this distribution we limit the parameter space, obtain probabilities of parameter values, and find the complex dependencies among them. With this parameter distribution, which determines soil-atmosphere C and N exchange, we are able to obtain the parameter-induced uncertainty of the simulation results and compare them with the measurement data.
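The random-walk Metropolis-Hastings machinery described above can be illustrated on a one-parameter toy model in place of a full LandscapeDNDC run; the data, flat prior, and Gaussian likelihood below are all illustrative assumptions:

```python
import math
import random

def log_posterior(theta, xs, ys, sigma=0.5):
    """Gaussian log-likelihood of a one-parameter toy model y = theta * x
    with a flat prior on [0, 5] (a stand-in for a full process model)."""
    if not 0.0 <= theta <= 5.0:
        return -math.inf
    return -sum((y - theta * x) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)

def metropolis(xs, ys, n_steps=20000, step=0.2, seed=3):
    """Single random-walk Metropolis-Hastings chain over the parameter;
    the multi-chain Gelman diagnostic would run several of these in parallel."""
    rng = random.Random(seed)
    theta = rng.uniform(0.0, 5.0)
    lp = log_posterior(theta, xs, ys)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)            # random-walk proposal
        lp_prop = log_posterior(prop, xs, ys)
        # Metropolis rule: always accept uphill, sometimes accept downhill.
        if lp_prop - lp > math.log(max(rng.random(), 1e-300)):
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain[n_steps // 2:]                        # discard burn-in

# Synthetic "observations" with known true parameter 2.0
data_rng = random.Random(0)
xs = [data_rng.uniform(0.0, 10.0) for _ in range(30)]
ys = [2.0 * x + data_rng.gauss(0.0, 0.5) for x in xs]
chain = metropolis(xs, ys)
posterior_mean = sum(chain) / len(chain)
```

The retained chain approximates the parameter distribution from which parameter-induced simulation uncertainty would be propagated.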
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space needed to fully parametrize the physical properties of the simulated object (a.k.a. Earth). Systems that exhibit a multiscale structure in space are candidates for adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, in which such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy for making finer resolution manageable is to design methods that automate the inference of model parameters.
Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task, fundamentally limited by the turnaround times required by human intervention and analysis. Specifying an objective functional that quantifies the misfit between the simulation outcome and known constraints, and then minimizing it through numerical optimization, can serve as an automated technique for parameter identification. As suggested by the similarity in formulation, the numerical algorithm is closely related to the one used for goal-oriented error estimation. One common point is that the so-called adjoint equation needs to be solved numerically. We will outline the derivation and implementation of these methods and discuss some of their pros and cons, supported by numerical results.
Identification of time-varying structural dynamic systems - An artificial intelligence approach
NASA Technical Reports Server (NTRS)
Glass, B. J.; Hanagud, S.
1992-01-01
An application of the artificial-intelligence-derived methodologies of heuristic search and object-oriented programming to the problem of identifying the form of the model and the associated parameters of a time-varying structural dynamic system is presented in this paper. Possible model variations due to changes in boundary conditions or configurations of a structure are organized into a taxonomy of models, and a variant of best-first search is used to identify the model whose simulated response best matches that of the current physical structure. Simulated model responses are verified experimentally. An output-error approach is used in the discontinuous model space, and an equation-error approach is used in the parameter space. The advantages of the AI methods used, compared with conventional programming techniques for implementing knowledge structuring and inheritance, are discussed. Convergence conditions and example problems are also discussed. In the example problem, both the time-varying model and its new parameters are identified when changes occur.
1984-12-30
... as three dimensional, when the assumption is made that all SUTRA parameters and coefficients have a constant value in the third space direction. ... The type of element employed by SUTRA for two-dimensional simulation is a quadrilateral which has a finite thickness in the third space dimension. This type of quadrilateral element and a typical two-dimensional mesh is shown in Figure 3.1. All twelve edges of the two ...
Estimating free-body modal parameters from tests of a constrained structure
NASA Technical Reports Server (NTRS)
Cooley, Victor M.
1993-01-01
Hardware advances in suspension technology for ground tests of large space structures provide near on-orbit boundary conditions for modal testing. Further advances in determining free-body modal properties of constrained large space structures have been made, on the analysis side, by using time domain parameter estimation and perturbing the stiffness of the constraints over multiple sub-tests. In this manner, passive suspension constraint forces, which are fully correlated and therefore not usable for spectral averaging techniques, are made effectively uncorrelated. The technique is demonstrated with simulated test data.
Future missions for observing Earth's changing gravity field: a closed-loop simulation tool
NASA Astrophysics Data System (ADS)
Visser, P. N.
2008-12-01
The GRACE mission has successfully demonstrated the observation from space of the changing Earth's gravity field at length and time scales of typically 1000 km and 10-30 days, respectively. Many scientific communities strongly advertise the need for continuity of observing Earth's gravity field from space. Moreover, a strong interest is being expressed to have gravity missions that allow a more detailed sampling of the Earth's gravity field both in time and in space. Designing a gravity field mission for the future is a complicated process that involves making many trade-offs, such as trade-offs between spatial resolution, temporal resolution, and financial budget. Moreover, it involves the optimization of many parameters, such as orbital parameters (height, inclination), distinction between which gravity sources to observe or correct for (for example, are gravity changes due to ocean currents a nuisance or a signal to be retrieved?), observation techniques (low-low satellite-to-satellite tracking, satellite gravity gradiometry, accelerometers), and satellite control systems (drag-free?). A comprehensive tool has been developed and implemented that allows the closed-loop simulation of gravity field retrievals for different satellite mission scenarios. This paper provides a description of this tool. Moreover, its capabilities are demonstrated by a few case studies. Acknowledgments. The research that is being done with the closed-loop simulation tool is partially funded by the European Space Agency (ESA). An important component of the tool is the GEODYN software, kindly provided by NASA Goddard Space Flight Center in Greenbelt, Maryland.
NASA Astrophysics Data System (ADS)
Shukla, Hemant; Bonissent, Alain
2017-04-01
We present the parameterized simulation of an integral-field unit (IFU) slicer spectrograph and its applications in spectroscopic studies, namely, for probing dark energy with Type Ia supernovae. The simulation suite is called the fast-slicer IFU simulator (FISim). The data flow of FISim realistically models the optics of the IFU along with the propagation effects, including cosmological, zodiacal, instrumentation and detector effects. FISim simulates the spectrum extraction by computing the error matrix on the extracted spectrum. The applications for Type Ia supernova spectroscopy are used to establish the efficacy of the simulator in exploring the wider parametric space, in order to optimize the science and mission requirements. The input spectral models utilize observables such as the optical depth and velocity of the Si II absorption feature in the supernova spectrum as the measured parameters for various studies. Using FISim, we introduce a mechanism for preserving the complete state of a system, called the ∂p/∂f matrix, which allows for compression, reconstruction and spectrum extraction; we introduce a novel and efficient method for spectrum extraction, called super-optimal spectrum extraction; and we conduct various studies such as the optimal point spread function, optimal resolution, parameter estimation, etc. We demonstrate that for space-based telescopes, the optimal resolution lies in the region near R ≈ 117 for read noise of 1 e- and 7 e- using a 400 km s-1 error threshold on the Si II velocity.
Building Better Planet Populations for EXOSIMS
NASA Astrophysics Data System (ADS)
Garrett, Daniel; Savransky, Dmitry
2018-01-01
The Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) software package simulates ensembles of space-based direct imaging surveys to provide a variety of science and engineering yield distributions for proposed mission designs. These mission simulations rely heavily on assumed distributions of planetary population parameters including semi-major axis, planetary radius, eccentricity, albedo, and orbital orientation to provide heuristics for target selection and to simulate planetary systems for detection and characterization. The distributions are encoded in PlanetPopulation modules within EXOSIMS which are selected by the user in the input JSON script when a simulation is run. The earliest written PlanetPopulation modules available in EXOSIMS are based on planet population models where the planetary parameters are considered to be independent from one another. While independent parameters allow for quick computation of heuristics and sampling for simulated planetary systems, results from planet-finding surveys have shown that many parameters (e.g., semi-major axis/orbital period and planetary radius) are not independent. We present new PlanetPopulation modules for EXOSIMS which are built on models based on planet-finding survey results where semi-major axis and planetary radius are not independent and provide methods for sampling their joint distribution. These new modules enhance the ability of EXOSIMS to simulate realistic planetary systems and give more realistic science yield distributions.
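The joint sampling described above can be sketched in a few lines. This is a minimal stand-in, not EXOSIMS code: it assumes a hypothetical binned joint PDF over semi-major axis and planetary radius, draws a bin from the flattened categorical distribution, and jitters uniformly within the bin.

```python
import bisect
import random

def sample_joint(pdf_grid, a_edges, r_edges, n, seed=0):
    """Draw n (a, R) samples from a binned joint PDF.

    pdf_grid[i][j] is proportional to the probability mass in the cell
    [a_edges[i], a_edges[i+1]) x [r_edges[j], r_edges[j+1]).
    """
    rng = random.Random(seed)
    # Flatten the grid into one categorical distribution over cells.
    cells, weights = [], []
    for i, row in enumerate(pdf_grid):
        for j, w in enumerate(row):
            cells.append((i, j))
            weights.append(w)
    total = sum(weights)
    cum, acc = [], 0.0
    for w in weights:
        acc += w / total
        cum.append(acc)
    samples = []
    for _ in range(n):
        i, j = cells[bisect.bisect_left(cum, rng.random())]
        # Jitter uniformly within the chosen cell.
        a = rng.uniform(a_edges[i], a_edges[i + 1])
        r = rng.uniform(r_edges[j], r_edges[j + 1])
        samples.append((a, r))
    return samples
```

Because the grid is a joint distribution rather than a product of marginals, correlated structure (e.g. radius depending on semi-major axis) is preserved automatically.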
Liu, Jian; Liu, Kexin; Liu, Shutang
2017-01-01
In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing the adaptive complex scalar controller to ensure that this type of CVCSs is asymptotically stable, and for selecting complex update laws to estimate unknown complex parameters. In particular, combining Lyapunov functions dependent on complex-valued vectors and the back-stepping technique, sufficient criteria on the stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results. PMID:28467431
He, Yi; Xiao, Yi; Liwo, Adam; Scheraga, Harold A
2009-10-01
We explored the energy-parameter space of our coarse-grained UNRES force field for large-scale ab initio simulations of protein folding, to obtain good initial approximations for hierarchical optimization of the force field with new virtual-bond-angle bending and side-chain-rotamer potentials which we recently introduced to replace the statistical potentials. 100 sets of energy-term weights were generated randomly, and good sets were selected by carrying out replica-exchange molecular dynamics simulations of two peptides with a minimal alpha-helical and a minimal beta-hairpin fold, respectively: the tryptophan cage (PDB code: 1L2Y) and tryptophan zipper (PDB code: 1LE1). Eight sets of parameters produced native-like structures of these two peptides. These eight sets were tested on two larger proteins: the engrailed homeodomain (PDB code: 1ENH) and FBP WW domain (PDB code: 1E0L); two sets were found to produce native-like conformations of these proteins. These two sets were tested further on a larger set of nine proteins with alpha or alpha + beta structure and found to locate native-like structures of most of them. These results demonstrate that, in addition to finding reasonable initial starting points for optimization, an extensive search of parameter space is a powerful method to produce a transferable force field. Copyright 2009 Wiley Periodicals, Inc.
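The random exploration of energy-term weight space can be illustrated schematically. The sketch below is hypothetical (the real objective is an expensive replica-exchange folding simulation, not a cheap function): it draws random weight vectors, scores them with a caller-supplied objective, and keeps the top fraction, mirroring the generate-many-sets-then-select workflow.

```python
import random

def random_parameter_search(score_fn, n_sets, n_weights, lo, hi,
                            keep_frac=0.08, seed=1):
    """Generate random weight vectors and keep the best-scoring fraction.

    score_fn stands in for the expensive folding simulation: lower is better.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_sets):
        w = [rng.uniform(lo, hi) for _ in range(n_weights)]
        trials.append((score_fn(w), w))
    trials.sort(key=lambda t: t[0])           # best scores first
    n_keep = max(1, int(keep_frac * n_sets))  # e.g. 8 survivors out of 100
    return [w for _, w in trials[:n_keep]]
```

The surviving sets would then be re-tested on progressively harder targets, as the abstract describes for the two peptides and the larger proteins.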
From diffusion pumps to cryopumps: The conversion of GSFC's space environment simulator
NASA Technical Reports Server (NTRS)
Cary, Ron
1992-01-01
The SES (Space Environment Simulator), the largest of the thermal vacuum facilities at the Goddard Space Flight Center, was recently converted from an oil-diffusion-pumped chamber to a cryopumped chamber. This modification was driven by requirements of flight projects. The basic requirement was to retain or enhance the operational parameters of the chamber, such as pumping speed, ultimate vacuum, pump-down time, and thermal system performance. To accomplish this task, seventeen diffusion pumps were removed and replaced with eight 1.2 meter (48 inch) diameter cryopumps and one 0.5 meter (20 inch) turbomolecular pump. The conversion was accomplished with a combination of subcontracting and in-house efforts to maximize the efficiency of implementation.
NASA Technical Reports Server (NTRS)
Liu, F. C.
1986-01-01
The objective of this investigation is to analytically determine the acceleration produced by crew motion in an orbiting space station and to define design parameters for the suspension system of microgravity experiments. A simple structural model for simulation of the IOC space station is proposed. Mathematical formulation of this model provides engineers with a simple and direct tool for designing an effective suspension system.
Scaling behavior of immersed granular flows
NASA Astrophysics Data System (ADS)
Amarsid, L.; Delenne, J.-Y.; Mutabaruka, P.; Monerie, Y.; Perales, F.; Radjai, F.
2017-06-01
The shear behavior of granular materials immersed in a viscous fluid depends on fluid properties (viscosity, density), particle properties (size, density) and boundary conditions (shear rate, confining pressure). Using computational fluid dynamics simulations coupled with molecular dynamics for granular flow, and exploring a broad range of parameter values, we show that the parameter space can be reduced to a single parameter that controls the packing fraction and effective friction coefficient. This control parameter is a modified inertial number that incorporates viscous effects.
NASA Astrophysics Data System (ADS)
Essen, Jonathan; Ruiz-Garcia, Miguel; Jenkins, Ian; Carretero, Manuel; Bonilla, Luis L.; Birnir, Björn
2018-04-01
We explore the design parameter space of short (5-25 period), n-doped, Ga/(Al,Ga)As semiconductor superlattices (SSLs) in the sequential resonant tunneling regime. We consider SSLs at cool (77 K) and warm (295 K) temperatures, simulating the electronic response to variations in (a) the number of SSL periods, (b) the contact conductivity, and (c) the strength of disorder (aperiodicities). Our analysis shows that the chaotic dynamical phases exist on a number of sub-manifolds of codimension zero within the design parameter space. This result provides an encouraging guide towards the experimental observation of high-frequency intrinsic dynamical chaos in shorter SSLs.
Parallel stochastic simulation of macroscopic calcium currents.
González-Vélez, Virginia; González-Vélez, Horacio
2007-06-01
This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca(2+) currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca(2+) channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca(2+) channel to different voltage inputs to the cell. In order to provide an accurate systematic view for the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
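A 3-state Markov channel of the kind described can be simulated in a few lines. The transition matrix below is illustrative only, not MACACO's voltage-dependent kinetics; in practice one matrix would be built per voltage input to the cell, and many independent channels would be aggregated into a macroscopic current.

```python
import random

# Illustrative 3-state channel: 0 = closed, 1 = closed (intermediate), 2 = open.
# P[s] is the one-step transition probability row for state s.

def simulate_channel(P, steps, seed=0):
    """Return the state trajectory of one channel under transition matrix P."""
    rng = random.Random(seed)
    state, traj = 0, []
    for _ in range(steps):
        u, acc = rng.random(), 0.0
        for nxt, p in enumerate(P[state]):
            acc += p
            if u < acc:
                state = nxt
                break
        traj.append(state)
    return traj

def open_fraction(traj):
    """Fraction of time the channel spends in the open state."""
    return sum(1 for s in traj if s == 2) / len(traj)
```

Summing the open/closed indicator over an ensemble of such trajectories, scaled by the unitary current, gives the macroscopic current that MACACO's task farm computes in parallel.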
Detonation initiation in a model of explosive: Comparative atomistic and hydrodynamics simulations
NASA Astrophysics Data System (ADS)
Murzov, S. A.; Sergeev, O. V.; Dyachkov, S. A.; Egorova, M. S.; Parshikov, A. N.; Zhakhovsky, V. V.
2016-11-01
Here we extend consistent simulations to reactive materials using the example of the AB model explosive. The kinetic model of chemical reactions observed in a molecular dynamics (MD) simulation of a self-sustained detonation wave can be used in hydrodynamic simulation of detonation initiation. Kinetic coefficients are obtained by minimizing the difference between the species profiles calculated from the kinetic model and those observed in MD simulations of isochoric thermal decomposition, with the help of the downhill simplex method combined with a random walk in the multidimensional space of the fitted kinetic model parameters.
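The fitting step can be sketched with a toy first-order decomposition profile. Everything here is a stand-in: a single rate constant instead of the full reaction network, synthetic "observed" data instead of MD output, and a plain greedy random walk instead of the simplex-plus-random-walk search used by the authors.

```python
import math
import random

def concentration(k, t, a0=1.0):
    """First-order decomposition profile, a stand-in for the kinetic model."""
    return a0 * math.exp(-k * t)

def fit_rate(times, observed, k0=1.0, steps=2000, seed=0):
    """Minimize the squared misfit between model and observed profiles by a
    greedy random walk in the (one-dimensional) kinetic-parameter space."""
    rng = random.Random(seed)

    def cost(k):
        return sum((concentration(k, t) - y) ** 2
                   for t, y in zip(times, observed))

    best_k, best_c = k0, cost(k0)
    for _ in range(steps):
        k = abs(best_k + rng.gauss(0, 0.05))  # propose a nearby positive rate
        c = cost(k)
        if c < best_c:                        # keep only improving moves
            best_k, best_c = k, c
    return best_k
```

With several species and rate constants, the same cost-minimization structure applies; only the dimensionality of the walk grows.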
Space Communications and Navigation (SCaN) Network Simulation Tool Development and Its Use Cases
NASA Technical Reports Server (NTRS)
Jennings, Esther; Borgen, Richard; Nguyen, Sam; Segui, John; Stoenescu, Tudor; Wang, Shin-Ywan; Woo, Simon; Barritt, Brian; Chevalier, Christine; Eddy, Wesley
2009-01-01
In this work, we focus on the development of a simulation tool to assist in analysis of current and future (proposed) network architectures for NASA. Specifically, the Space Communications and Navigation (SCaN) Network is being architected as an integrated set of new assets and a federation of upgraded legacy systems. The SCaN architecture for the initial missions for returning humans to the moon and beyond will include the Space Network (SN) and the Near-Earth Network (NEN). In addition to SCaN, the initial mission scenario involves a Crew Exploration Vehicle (CEV), the International Space Station (ISS) and NASA Integrated Services Network (NISN). We call the tool being developed the SCaN Network Integration and Engineering (SCaN NI&E) Simulator. The intended uses of such a simulator are: (1) to characterize performance of particular protocols and configurations in mission planning phases; (2) to optimize system configurations by testing a larger parameter space than may be feasible in either production networks or an emulated environment; (3) to test solutions in order to find issues/risks before committing more significant resources needed to produce real hardware or flight software systems. We describe two use cases of the tool: (1) standalone simulation of CEV to ISS baseline scenario to determine network performance, (2) participation in Distributed Simulation Integration Laboratory (DSIL) tests to perform function testing and verify interface and interoperability of geographically dispersed simulations/emulations.
Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu
2018-05-01
In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 lead to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized by several kinds of commercial software without coding programs, but it does not indicate the reliability of the process evaluation indexes when operating in the design space. The probability-based method is more complex in calculation, but it provides the reliability to ensure that the process indexes reach the standard within the acceptable probability threshold. In addition, the probability-based method produces no abrupt change of probability at the edge of the design space. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
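The probability-based calculation can be sketched as follows. The process model, error level, specification, and threshold below are invented for illustration; the real study used a model fitted to the Codonopsis Radix extraction data and a grid over the actual process parameters.

```python
import random

def prob_of_passing(predicted, spec, sigma, n_sim=10000, seed=0):
    """Monte Carlo estimate of the probability that a measured index,
    modelled as the predicted value plus Gaussian experimental error,
    meets the standard (>= spec)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_sim)
               if predicted + rng.gauss(0, sigma) >= spec)
    return hits / n_sim

def design_space(model, grid, spec, sigma, threshold=0.9):
    """Keep the operating points whose pass probability exceeds the threshold."""
    return [x for x in grid
            if prob_of_passing(model(x), spec, sigma) >= threshold]
```

Unlike the overlapping method, each retained point carries an explicit probability of meeting the standard, so the edge of the design space degrades smoothly rather than abruptly.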
Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio
2011-12-01
The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated using either stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.
Simulation of plasma loading of high-pressure RF cavities
NASA Astrophysics Data System (ADS)
Yu, K.; Samulyak, R.; Yonehara, K.; Freemire, B.
2018-01-01
Muon beam-induced plasma loading of radio-frequency (RF) cavities filled with high-pressure hydrogen gas with a 1% dry air dopant has been studied via numerical simulations. The electromagnetic code SPACE, which resolves relevant atomic physics processes, including ionization by the muon beam, electron attachment to dopant molecules, and electron-ion and ion-ion recombination, has been used. Simulation studies have been performed in the range of parameters typical for practical muon cooling channels.
Population Synthesis of Radio and γ-ray Normal, Isolated Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2013-04-01
We present preliminary results of a population statistics study of normal pulsars (NPs) from the Galactic disk using Markov Chain Monte Carlo techniques optimized according to two different methods. The first method compares the detected and simulated cumulative distributions of a series of pulsar characteristics, varying the model parameters to maximize the overall agreement. The advantage of this method is that the distributions do not have to be binned. The other method varies the model parameters to maximize the log of the maximum likelihood obtained from comparisons of four two-dimensional distributions of radio and γ-ray pulsar characteristics. The advantage of this method is that it provides a confidence region of the model parameter space. The computer code simulates neutron stars at birth using Monte Carlo procedures and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and γ-ray emission characteristics, implementing an empirical γ-ray luminosity model. A comparison group of radio NPs detected in ten radio surveys is used to normalize the simulation, adjusting the model radio luminosity to match a birth rate. We include the Fermi pulsars in the forthcoming second pulsar catalog. We present preliminary results comparing the simulated and detected distributions of radio and γ-ray NPs along with a confidence region in the parameter space of the assumed models. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), the Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
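The MCMC machinery underlying such a study can be illustrated with a minimal Metropolis sampler. The one-parameter Gaussian log-likelihood here is a stand-in; the actual study explores a multi-parameter pulsar population model whose likelihood comes from comparing simulated and detected distributions.

```python
import math
import random

def metropolis(log_like, theta0, step, n, seed=0):
    """Metropolis sampler: propose Gaussian moves in parameter space and
    accept each with probability min(1, exp(delta log-likelihood))."""
    rng = random.Random(seed)
    theta, ll = theta0, log_like(theta0)
    chain = []
    for _ in range(n):
        prop = theta + rng.gauss(0, step)
        ll_prop = log_like(prop)
        if math.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop   # accept the move
        chain.append(theta)             # (repeats count: rejected moves stay put)
    return chain
```

The density of chain samples in a region of parameter space then directly yields the confidence region the abstract refers to.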
Two particle model for studying the effects of space-charge force on strong head-tail instabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Yong Ho; Chao, Alexander Wu; Blaskiewicz, Michael M.
In this paper, we present a new two particle model for studying the strong head-tail instabilities in the presence of the space-charge force. It is a simple expansion of the well-known two particle model for strong head-tail instability and is still analytically solvable. No chromaticity effect is included. It leads to a formula for the growth rate as a function of the two dimensionless parameters: the space-charge tune shift parameter (normalized by the synchrotron tune) and the wakefield strength, Upsilon. The three-dimensional contour plot of the growth rate as a function of those two dimensionless parameters reveals stopband structures. Many simulation results generally indicate that a strong head-tail instability can be damped by a weak space-charge force, but the beam becomes unstable again when the space-charge force is further increased. The new two particle model indicates a similar behavior. In weak space-charge regions, additional tune shifts by the space-charge force dissolve the mode coupling. As the space-charge force is increased, they conversely restore the mode coupling, but then a further increase of the space-charge force decouples the modes again. Lastly, this mode coupling/decoupling behavior creates the stopband structures.
An adaptive detector and channel estimator for deep space optical communications
NASA Technical Reports Server (NTRS)
Mukai, R.; Arabshahi, P.; Yan, T. Y.
2001-01-01
This paper will discuss the design and testing of both the channel parameter identification system and the adaptive threshold system, and illustrate their advantages and performance under simulated channel degradation conditions.
Development of metamodels for predicting aerosol dispersion in ventilated spaces
NASA Astrophysics Data System (ADS)
Hoque, Shamia; Farouk, Bakhtier; Haas, Charles N.
2011-04-01
Artificial neural network (ANN) based metamodels were developed to describe the relationship between the design variables and their effects on the dispersion of aerosols in a ventilated space. A Hammersley sequence sampling (HSS) technique was employed to efficiently explore the multi-parameter design space and to build numerical simulation scenarios. A detailed computational fluid dynamics (CFD) model was applied to simulate these scenarios. The results derived from the CFD simulations were used to train and test the metamodels. Feed-forward ANNs were developed to map the relationship between the inputs and the outputs. The predictive ability of the neural network based metamodels was compared to linear and quadratic metamodels also derived from the same CFD simulation results. The ANN based metamodel performed well in predicting the independent data sets, including data generated at the boundaries. Sensitivity analysis showed that the ratio of particle tracking time to residence time, and the locations of the inlet and outlet relative to the height of the room, had more impact on particle behavior than the other dimensionless groups.
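A Hammersley point set of the kind used for design-space sampling can be generated with radical inverses. This is a generic sketch, not the paper's implementation; scaling the unit-cube points to the physical design ranges is left to the caller.

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse: mirror the base-b digits of i
    about the radix point, e.g. i=1, base=2 -> 0.5."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def hammersley(n, dims):
    """n-point Hammersley set in [0,1)^dims: the first coordinate is i/n,
    the remaining coordinates are radical inverses in successive prime bases."""
    primes = [2, 3, 5, 7, 11, 13]
    pts = []
    for i in range(n):
        p = [i / n] + [radical_inverse(i, primes[d]) for d in range(dims - 1)]
        pts.append(p)
    return pts
```

Such low-discrepancy points cover a multi-parameter design space far more evenly than independent uniform draws, which is why HSS needs fewer CFD runs to train a metamodel.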
Effects of behavioral patterns and network topology structures on Parrondo’s paradox
Ye, Ye; Cheong, Kang Hao; Cen, Yu-wan; Xie, Neng-gang
2016-01-01
A multi-agent Parrondo’s model based on complex networks is used in the current study. For Parrondo’s game A, the individual interaction can be categorized into five types of behavioral patterns: the Matthew effect, harmony, cooperation, poor-competition-rich-cooperation and a random mode. The parameter space of Parrondo’s paradox pertaining to each behavioral pattern, and the gradual change of the parameter space from a two-dimensional lattice to a random network and from a random network to a scale-free network was analyzed. The simulation results suggest that the size of the region of the parameter space that elicits Parrondo’s paradox is positively correlated with the heterogeneity of the degree distribution of the network. For two distinct sets of probability parameters, the microcosmic reasons underlying the occurrence of the paradox under the scale-free network are elaborated. Common interaction mechanisms of the asymmetric structure of game B, behavioral patterns and network topology are also revealed. PMID:27845430
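The paradox itself is easy to reproduce in the classic capital-dependent form (game A: a slightly losing fair coin; game B: a bad coin when capital is a multiple of 3, a good coin otherwise), a simpler cousin of the networked multi-agent model studied here. The bias EPS and the payoffs follow the standard textbook setup, not the paper's parameterization.

```python
import random

EPS = 0.005  # small losing bias applied to every coin

def play(strategy, steps, seed=0):
    """Simulate capital under the classic Parrondo games.
    strategy(t, capital) returns 'A' or 'B' for the next round."""
    rng = random.Random(seed)
    capital = 0
    for t in range(steps):
        game = strategy(t, capital)
        if game == 'A':
            p = 0.5 - EPS
        else:  # game B: bad coin on multiples of 3, good coin otherwise
            p = (0.1 - EPS) if capital % 3 == 0 else (0.75 - EPS)
        capital += 1 if rng.random() < p else -1
    return capital
```

Playing A alone or B alone loses on average, while randomly alternating between them tends to produce a net gain: the paradox whose parameter-space region the abstract maps across network topologies.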
Simulation Exploration through Immersive Parallel Planes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
A Transportation Model for a Space Colonization and Manufacturing System: A Q-GERT Simulation.
1982-12-01
An integrated control scheme for space robot after capturing non-cooperative target
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich
2018-06-01
Identifying the mass properties and eliminating the unknown angular momentum of a space robotic system after capturing a non-cooperative target is a great challenge. This paper focuses on designing an integrated control framework that includes a detumbling strategy, coordination control and parameter identification. Firstly, inverted and forward chain approaches are synthesized for the space robot to obtain the dynamic equation in operational space. Secondly, a detumbling strategy is introduced using elementary functions with normalized time, while the imposed end-effector constraints are considered. Next, a coordination control scheme for stabilizing both the base and the end-effector, based on impedance control, is implemented under the target's parameter uncertainty. With the measurements of the forces and torques exerted on the target, its mass properties are estimated during the detumbling process accordingly. Simulation results are presented using a 7 degree-of-freedom kinematically redundant space manipulator, which verifies the performance and effectiveness of the proposed method.
Methods of Helium Injection and Removal for Heat Transfer Augmentation
NASA Technical Reports Server (NTRS)
Haight, Harlan; Kegley, Jeff; Bourdreaux, Meghan
2008-01-01
While augmentation of heat transfer from a test article by helium gas at low pressures is well known, the method is rarely employed during space simulation testing because the test objectives usually involve simulation of an orbital thermal environment. Test objectives of cryogenic optical testing at Marshall Space Flight Center's X-ray Cryogenic Facility (XRCF) have typically not been constrained by orbital environment parameters. As a result, several methods of helium injection have been utilized at the XRCF since 1999 to decrease thermal transition times. A brief synopsis of these injection (and removal) methods will be presented.
Methods of Helium Injection and Removal for Heat Transfer Augmentation
NASA Technical Reports Server (NTRS)
Kegley, Jeffrey
2008-01-01
While augmentation of heat transfer from a test article by helium gas at low pressures is well known, the method is rarely employed during space simulation testing because the test objectives are to simulate an orbital thermal environment. Test objectives of cryogenic optical testing at Marshall Space Flight Center's X-ray Calibration Facility (XRCF) have typically not been constrained by orbital environment parameters. As a result, several methods of helium injection have been utilized at the XRCF since 1999 to decrease thermal transition times. A brief synopsis of these injection (and removal) methods will be presented.
Regression techniques for oceanographic parameter retrieval using space-borne microwave radiometry
NASA Technical Reports Server (NTRS)
Hofer, R.; Njoku, E. G.
1981-01-01
Variations of conventional multiple regression techniques are applied to the problem of remote sensing of oceanographic parameters from space. The techniques are specifically adapted to the scanning multichannel microwave radiometer (SMMR) launched on the Seasat and Nimbus 7 satellites to determine ocean surface temperature, wind speed, and atmospheric water content. The retrievals are studied primarily from a theoretical viewpoint, to illustrate the retrieval error structure, the relative importance of different radiometer channels, and the tradeoffs between spatial resolution and retrieval accuracy. Comparisons between regressions using simulated and actual SMMR data are discussed; they show similar behavior.
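The regression retrieval described above reduces, in its simplest form, to fitting linear coefficients that map multichannel brightness temperatures to a geophysical parameter. A minimal ordinary-least-squares sketch in Python (the function name and the data layout are illustrative assumptions, not the SMMR processing code):

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations and Gaussian
    elimination (fine for a handful of channels; illustrative only).
    X is a list of rows (one per observation), y a list of targets."""
    n, p = len(X), len(X[0])
    xtx = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
           for i in range(p)]
    xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    # Forward elimination (X^T X is positive definite, so no pivoting needed)
    for i in range(p):
        piv = xtx[i][i]
        for j in range(i + 1, p):
            f = xtx[j][i] / piv
            for k in range(p):
                xtx[j][k] -= f * xtx[i][k]
            xty[j] -= f * xty[i]
    # Back substitution on the resulting upper-triangular system
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, p))) / xtx[i][i]
    return beta
```

In practice the coefficients would be trained on simulated brightness temperatures and then applied to observed ones, which is the comparison the abstract describes.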
Decay constants $$f_B$$ and $$f_{B_s}$$ and quark masses $$m_b$$ and $$m_c$$ from HISQ simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Komijani, J.; et al.
2016-11-22
We present a progress report on our calculation of the decay constants $f_B$ and $f_{B_s}$ from lattice-QCD simulations with highly-improved staggered quarks. Simulations are carried out with several heavy valence-quark masses on $(2+1+1)$-flavor ensembles that include charm sea quarks. We include data at six lattice spacings and several light sea-quark masses, including an approximately physical-mass ensemble at all but the smallest lattice spacing, 0.03 fm. This range of parameters provides excellent control of the continuum extrapolation to zero lattice spacing and of heavy-quark discretization errors. Finally, using the heavy-quark effective theory expansion we present a method of extracting from the same correlation functions the charm- and bottom-quark masses as well as some low-energy constants appearing in the heavy-quark expansion.
Changes of catecholamine excretion during long-duration confinement.
Kraft, N; Inoue, N; Ohshima, H; Sekiguchi, C
2002-06-01
Simulation studies have become the main source of data about small group interactions during prolonged isolation, from which it should be possible to anticipate crew problems during actual space missions. International Space Station (ISS) astronauts and cosmonauts will form one international crew, although living in different national modules. They will have joint flight protocols, and at the same time, fulfill a number of different tasks in accord with their national flight programs. Consistent with these concepts, we studied two simultaneously functioning groups in a simulation of ISS flight. The objective of this study was to investigate physiological parameters (such as catecholamine excretions) related to long-duration confinement in the hermetic chamber, simulating International Space Station flight conditions. We also planned to evaluate the relationship between epinephrine/norepinephrine with group dynamics and social events to predict unfavorable changes in health and work capability of the subjects related to psychological interaction in the isolation chamber.
NASA Technical Reports Server (NTRS)
Mckavitt, Thomas P., Jr.
1990-01-01
The results of an aircraft parameter identification study conducted on the National Aeronautics and Space Administration/Ames Research Center Advanced Concepts Flight Simulator (ACFS) in conjunction with the Navy-NASA Joint Institute of Aeronautics are given. The ACFS is a commercial airline simulator with a design based on future technology. The simulator is used as a laboratory for human factors research and engineering as applied to the commercial airline industry. Parametric areas examined were engine pressure ratio (EPR), optimum long range cruise Mach number, flap reference speed, and critical take-off speeds. Results were compared with corresponding parameters of the Boeing 757 and 767 aircraft. This comparison identified two areas where improvements can be made: (1) low maximum lift coefficients (on the order of 20-25 percent less than those of a 757); and (2) low optimum cruise Mach numbers. Recommendations were made to bring these parameters in line with those anticipated with the application of future technologies.
NASA Technical Reports Server (NTRS)
Dermanis, A.
1977-01-01
The possibility of recovering earth rotation and network geometry (baseline) parameters is emphasized. The simulated numerical experiments are set up in an environment where station coordinates vary with respect to inertial space according to a simulated earth rotation model similar to the actual but unknown rotation of the earth. The basic technique of VLBI and its mathematical model are presented. The chosen parametrization of earth rotation is described and the resulting model is linearized. A simple analysis of the geometry of the observations leads to some useful hints on achieving maximum sensitivity of the observations with respect to the parameters considered. The basic philosophy for the simulation of data and their analysis through standard least-squares adjustment techniques is presented. A number of characteristic network designs based on present and candidate station locations are chosen. The results of the simulations for each design are presented together with a summary of the conclusions.
A method to investigate the diffusion properties of nuclear calcium.
Queisser, Gillian; Wittum, Gabriel
2011-10-01
Modeling biophysical processes in general requires knowledge of the underlying biological parameters. The quality of simulation results is strongly influenced by the accuracy of these parameters, hence identifying the parameter values a model requires is a major part of simulating biophysical processes. In many cases, secondary data can be gathered by experimental setups, which are exploitable by mathematical inverse-modeling techniques. Here we describe a method for identifying the diffusion properties of calcium in the nuclei of rat hippocampal neurons. The method is based on a Gauss-Newton method for solving a least-squares minimization problem and was formulated in such a way that it is ideally implementable in the simulation platform uG. Making use of independently published space- and time-dependent calcium imaging data, generated from laser-assisted calcium uncaging experiments, we could identify the diffusion properties of nuclear calcium and were able to validate a previously published model that describes nuclear calcium dynamics as a diffusion process.
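As a toy illustration of the Gauss-Newton least-squares idea (a sketch under stated assumptions, not the uG implementation): the peak concentration of a 1-D point source decays as 1/sqrt(4*pi*D*t), so synthetic measurements of that decay suffice to identify the diffusion coefficient D. All names below are illustrative.

```python
import math

def model(t, diff):
    # Peak concentration of a 1-D point source with diffusion coefficient `diff`
    return 1.0 / math.sqrt(4.0 * math.pi * diff * t)

def jacobian(t, diff):
    # Analytical derivative of `model` with respect to `diff`
    return -0.5 * model(t, diff) / diff

def gauss_newton_fit(times, data, d0, iters=25):
    """Single-parameter Gauss-Newton: each step solves the linearized
    normal equation (J^T J) delta = -J^T r, a scalar division here."""
    d = d0
    for _ in range(iters):
        resid = [model(t, d) - y for t, y in zip(times, data)]
        jac = [jacobian(t, d) for t in times]
        jtj = sum(j * j for j in jac)
        jtr = sum(j * r for j, r in zip(jac, resid))
        d -= jtr / jtj
    return d
```

With noise-free synthetic data the zero-residual problem converges quadratically near the solution, which is the behavior the method exploits when fitting imaging data.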
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun
2016-01-01
Simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameters' setting is a key factor for its performance, but it is also a tedious work. To simplify parameters setting, we present a list-based simulated annealing (LBSA) algorithm to solve traveling salesman problem (TSP). LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in list is used by Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust on a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
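A compact sketch of the list-based cooling schedule for TSP, assuming a 2-opt neighborhood and the adaptation rule the abstract describes: the maximum listed temperature drives the Metropolis test, and each acceptance of a worse tour replaces that maximum with the (smaller) temperature at which the acceptance would have been marginal. The list initialization and all names are simplifications, not the authors' reference code.

```python
import heapq
import math
import random

def lbsa_tsp(dist, list_len=20, iters=500, seed=1):
    """List-based SA sketch for TSP. `dist` is an n x n distance matrix."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)

    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    def two_opt(t):
        i, j = sorted(rng.sample(range(n), 2))
        return t[:i] + t[i:j][::-1] + t[j:]

    cur = length(tour)
    # Initialize the temperature list from random trial moves.
    heap = [-(abs(length(two_opt(tour)) - cur) + 1e-9) for _ in range(list_len)]
    heapq.heapify(heap)  # max-heap via negated values

    for _ in range(iters):
        tmax = -heap[0]  # maximum temperature currently in the list
        cand = two_opt(tour)
        new = length(cand)
        if new <= cur:
            tour, cur = cand, new
        else:
            delta = new - cur
            r = rng.random() or 1e-12
            # Metropolis criterion using the maximum listed temperature
            if r < math.exp(-delta / tmax):
                # List adaptation: the accepted move's marginal temperature
                # delta / (-ln r) is always below tmax, so the list cools.
                heapq.heapreplace(heap, -(delta / -math.log(r)))
                tour, cur = cand, new
    return tour, cur
```

Because the list only ever replaces its maximum with a smaller value, the temperature decreases at a rate set by the problem's own cost landscape rather than by a hand-tuned schedule, which is the parameter-setting simplification the paper claims.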
Effective Hubbard model for Helium atoms adsorbed on graphite
NASA Astrophysics Data System (ADS)
Motoyama, Yuichi; Masaki-Kato, Akiko; Kawashima, Naoki
Helium atoms adsorbed on graphite form a two-dimensional strongly correlated quantum system that has been an attractive subject of research for a long time. A helium atom feels a Lennard-Jones-like potential (the Aziz potential) from the other helium atoms and a corrugated potential from the graphite. Therefore, this system may be described by a hardcore Bose-Hubbard model with nearest-neighbor repulsion on the triangular lattice, which is the dual lattice of the honeycomb lattice formed by the carbon atoms. A Hubbard model is easier to simulate than the original problem in continuous space, but the parameters of the effective model, the hopping constant t and the interaction V, must be known. In this presentation, we present an estimation of the model parameters from ab initio quantum Monte Carlo calculations in continuous space, in addition to results of quantum Monte Carlo simulations of the obtained discrete model.
An Astrobiological Experiment to Explore the Habitability of Tidally Locked M-Dwarf Planets
NASA Astrophysics Data System (ADS)
Angerhausen, Daniel; Sapers, Haley; Simoncini, Eugenio; Lutz, Stefanie; Alexandre, Marcelo da Rosa; Galante, Douglas
2014-04-01
We present a summary of a three-year academic research proposal drafted during the Sao Paulo Advanced School of Astrobiology (SPASA) to prepare for upcoming observations of tidally locked planets orbiting M-dwarf stars. The primary experimental goal of the suggested research is to expose extremophiles from analogue environments to a modified space simulation chamber reproducing the environmental parameters of a tidally locked planet in the habitable zone of a late-type star. Here we focus on a description of the astronomical analysis used to define the parameters for this climate simulation.
NASA Technical Reports Server (NTRS)
Sullins, W. R., Jr.; Rogers, J. G.
1974-01-01
The kinds of activities that are attractive to man in long-duration isolation are delineated, considering meaningful work as the major activity along with a choice of leisure/living provisions. The dependent variables are the relative distributions among various work, leisure, and living activities when external constraints on the subject's freedom of choice are minimized. Results indicate that an average of at least five hours per day of significant meaningful work is required for satisfactory enjoyment of the situation; most other parameters of the situation have less effect on overall performance and satisfaction.
Simulation of plasma loading of high-pressure RF cavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, K.; Samulyak, R.; Yonehara, K.
2018-01-11
Muon beam-induced plasma loading of radio-frequency (RF) cavities filled with high-pressure hydrogen gas with a 1% dry air dopant has been studied via numerical simulations. The electromagnetic code SPACE, which resolves relevant atomic physics processes, including ionization by the muon beam, electron attachment to dopant molecules, and electron-ion and ion-ion recombination, has been used. Simulation studies have also been performed in the range of parameters typical for practical muon cooling channels.
Wu, Jibo
2016-01-01
In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. Its mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is examined by a simulation study and a numerical example.
Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Average Navier-Stokes Formulations
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Girimaji, Sharath S.
2008-01-01
Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f(sub k) is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f(sub k) varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulation of flow and turbulence quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.
Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks
NASA Astrophysics Data System (ADS)
Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.
2015-12-01
A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real-world topography can be compared to recent real-world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia, to 80%. We can also evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
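The two posterior metrics, where B is "the model forecasts inundation of a cell" and A is "the cell was actually inundated", can be computed directly from paired inundation maps. A minimal sketch (flat boolean lists stand in for map grids; this is not the MOLASSES validation code):

```python
def predictive_values(simulated, observed):
    """Positive predictive value P(A|B) and negative predictive value
    P(not A | not B) of a flow forecast, from equal-length boolean lists
    marking which cells were inundated."""
    tp = sum(s and o for s, o in zip(simulated, observed))          # hits
    tn = sum((not s) and (not o) for s, o in zip(simulated, observed))
    pred_pos = sum(simulated)                  # cells forecast inundated
    pred_neg = len(simulated) - pred_pos       # cells forecast dry
    ppv = tp / pred_pos if pred_pos else float("nan")
    npv = tn / pred_neg if pred_neg else float("nan")
    return ppv, npv
```

Unlike the Jaccard coefficient, these two numbers separately answer "if the model says a cell floods, does it?" and "if the model says a cell stays dry, does it?", which is the forecasting question the benchmark targets.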
Space technology test facilities at the NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Gross, Anthony R.; Rodrigues, Annette T.
1990-01-01
The major space research and technology test facilities at the NASA Ames Research Center are divided into five categories: General Purpose, Life Support, Computer-Based Simulation, High Energy, and Space Exploration Test Facilities. The paper describes selected facilities within each of the five categories and discusses some of the major programs in which these facilities have been involved. Special attention is given to the 20-G Man-Rated Centrifuge, the Human Research Facility, the Plant Crop Growth Facility, the Numerical Aerodynamic Simulation Facility, the Arc-Jet Complex and Hypersonic Test Facility, the Infrared Detector and Cryogenic Test Facility, and the Mars Wind Tunnel. Each facility is described along with its objectives, test parameter ranges, and major current programs and applications.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Fasanella, Edwin L.; Melis, Matthew; Carney, Kelly; Gabrys, Jonathan
2004-01-01
The Space Shuttle Columbia Accident Investigation Board (CAIB) made several recommendations for improving the NASA Space Shuttle Program. An extensive experimental and analytical program has been developed to address two recommendations related to structural impact analysis. The objective of the present work is to demonstrate the application of probabilistic analysis to assess the effect of uncertainties on debris impacts on Space Shuttle Reinforced Carbon-Carbon (RCC) panels. The probabilistic analysis is used to identify the material modeling parameters controlling the uncertainty. A comparison of the finite element results with limited experimental data provided confidence that the simulations were adequately representing the global response of the material. Five input parameters were identified as significantly controlling the response.
NASA Technical Reports Server (NTRS)
Gasiewski, A. J.; Skofronick, G. M.
1992-01-01
Progress by investigators at Georgia Tech in defining the requirements for large space antennas for passive microwave Earth imaging systems is reviewed. In order to determine antenna constraints (e.g., the aperture size, illumination taper, and gain uncertainty limits) necessary for the retrieval of geophysical parameters (e.g., rain rate) with adequate spatial resolution and accuracy, a numerical simulation of the passive microwave observation and retrieval process is being developed. Due to the small spatial scale of precipitation and the nonlinear relationships between precipitation parameters (e.g., rain rate, water density profile) and observed brightness temperatures, the retrieval of precipitation parameters is of primary interest in the simulation studies. Major components of the simulation are described as well as progress and plans for completion. The overall goal of providing quantitative assessments of the accuracy of candidate geosynchronous and low-Earth orbiting imaging systems will continue under a separate grant.
Efficient Schmidt number scaling in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Krafnick, Ryan C.; García, Angel E.
2015-12-01
Dissipative particle dynamics is a widely used mesoscale technique for the simulation of hydrodynamics (as well as immersed particles) utilizing coarse-grained molecular dynamics. While the method is capable of describing any fluid, the typical choice of the friction coefficient γ and dissipative force cutoff rc yields an unacceptably low Schmidt number Sc for the simulation of liquid water at standard temperature and pressure. There are a variety of ways to raise Sc, such as increasing γ and rc, but the relative cost of modifying each parameter (and the concomitant impact on numerical accuracy) has heretofore remained undetermined. We perform a detailed search over the parameter space, identifying the optimal strategy for the efficient and accuracy-preserving scaling of Sc, using both numerical simulations and theoretical predictions. The composite results recommend a parameter choice that leads to a speed improvement of a factor of three versus previously utilized strategies.
NASA Astrophysics Data System (ADS)
Housaindokht, Mohammad Reza; Moosavi, Fatemeh
2018-06-01
The effect of magnetization on the properties of a system containing a peptide model is studied by molecular dynamics simulation at a range of 298-318 K. Two mole fractions of 0.001 and 0.002 of peptide were simulated and the variation of hydrogen bond number, orientational ordering parameter, gyration radius, mean square displacement, as well as radial distribution function, were under consideration. The results show that applying magnetic field will increase the number of hydrogen bonds between water molecules by clustering them and decreases the interaction of water and peptide. This reduction may cause more available free space and enhance the movement of the peptide. As a result, the diffusion coefficient of the peptide becomes greater and its conformation changes. Orientational ordering parameter besides radius of gyration demonstrates that peptide is expanded by static magnetic field and its orientational ordering parameter is affected.
Construction of multi-functional open modulized Matlab simulation toolbox for imaging ladar system
NASA Astrophysics Data System (ADS)
Wu, Long; Zhao, Yuan; Tang, Meng; He, Jiang; Zhang, Yong
2011-06-01
Ladar system simulation uses computer simulation of ladar models to predict the performance of a ladar system. This paper reviews developments in laser imaging radar simulation in domestic and overseas studies, covering computer simulation of ladar systems with different application requirements. The LadarSim and FOI-LadarSIM simulation facilities of Utah State University and the Swedish Defence Research Agency are introduced in detail. Domestic research on imaging ladar simulation has so far been limited in scale and non-unified in design, mostly achieving simple functional simulation based on ranging equations. A laser imaging radar simulation with an open and modularized structure is therefore proposed, with unified modules for the ladar system, laser emitter, atmosphere models, target models, signal receiver, parameter settings, and system controller. A unified Matlab toolbox and standard control modules have been built with regulated function inputs and outputs and with communication protocols between hardware modules. A simulation of an ICCD gain-modulated imaging ladar system viewing a space shuttle is presented based on the toolbox. The simulation results show that the models and parameter settings of the Matlab toolbox are able to simulate the actual detection process precisely. The unified control module and pre-defined parameter settings simplify the simulation of imaging ladar detection, the open structure enables the toolbox to be modified for specialized requirements, and the modularization gives simulations flexibility.
NASA Technical Reports Server (NTRS)
Howell, L. W.; Kennel, H. F.
1984-01-01
The Space Telescope (ST) is subjected to charged particle strikes in its space environment. ST's onboard fine guidance sensors utilize multiplier phototubes (PMTs) for attitude determination. These tubes, when subjected to charged particle strikes, generate spurious photons in the form of Cerenkov radiation and fluorescence, which give rise to unwanted disturbances in the pointing of the telescope. A stochastic model is presented for the number of these spurious photons that strike the photocathode of the multiplier phototube and in turn produce the unwanted photon noise. The model is applicable both to galactic cosmic rays and to charged particles trapped in the Earth's radiation belts. The model, which was programmed, allows for easy adaptation to a wide range of particles and different parameters for the multiplier phototube. The probability density functions for photon noise caused by protons, alpha particles, and carbon nuclei were generated using thousands of simulated strikes. These distributions are used as part of an overall ST dynamics simulation. The sensitivity of the density function to changes in the window parameters was also investigated.
NASA Technical Reports Server (NTRS)
Howell, L. W.; Kennel, H. F.
1986-01-01
The Space Telescope (ST) is subjected to charged particle strikes in its space environment. ST's onboard fine guidance sensors utilize multiplier phototubes (PMTs) for attitude determination. These tubes, when subjected to charged particle strikes, generate spurious photons in the form of Cerenkov radiation and fluorescence, which give rise to unwanted disturbances in the pointing of the telescope. A stochastic model is presented for the number of these spurious photons that strike the photocathodes of the multiplier phototube and in turn produce the unwanted photon noise. The model is applicable both to galactic cosmic rays and to charged particles trapped in the Earth's radiation belts. The model, which was programmed, allows for easy adaptation to a wide range of particles and different parameters for the multiplier phototube. The probability density functions for photon noise caused by protons, alpha particles, and carbon nuclei were generated using thousands of simulated strikes. These distributions are used as part of an overall ST dynamics simulation. The sensitivity of the density function to changes in the window parameters was also investigated.
Numerical modeling of space-time wave extremes using WAVEWATCH III
NASA Astrophysics Data System (ADS)
Barbariol, Francesco; Alves, Jose-Henrique G. M.; Benetazzo, Alvise; Bergamasco, Filippo; Bertotti, Luciana; Carniel, Sandro; Cavaleri, Luigi; Chao, Yung Y.; Chawla, Arun; Ricchi, Antonio; Sclavo, Mauro; Tolman, Hendrik
2017-04-01
A novel implementation of parameters estimating the space-time wave extremes within the spectral wave model WAVEWATCH III (WW3) is presented. The new output parameters, available in WW3 version 5.16, rely on the theoretical model of Fedele (J Phys Oceanogr 42(9):1601-1615, 2012) extended by Benetazzo et al. (J Phys Oceanogr 45(9):2261-2275, 2015) to estimate the maximum second-order nonlinear crest height over a given space-time region. In order to assess the wave height associated with the maximum crest height and the maximum wave height (generally different in a broad-band stormy sea state), the linear quasi-determinism theory of Boccotti (2000) is considered. The new WW3 implementation is tested by simulating sea states and space-time extremes over the Mediterranean Sea (forced by the wind fields produced by the COSMO-ME atmospheric model). Model simulations are compared to space-time wave maxima observed on March 10th, 2014, in the northern Adriatic Sea (Italy), by a stereo camera system installed on-board the "Acqua Alta" oceanographic tower. Results show that modeled space-time extremes are in general agreement with observations. Differences are mostly ascribed to the accuracy of the wind forcing and, to a lesser extent, to the approximations introduced in the space-time extremes parameterizations. Model estimates are expected to be even more accurate over areas larger than the mean wavelength (for instance, the model grid size).
Effects of space environment on composites: An analytical study of critical experimental parameters
NASA Technical Reports Server (NTRS)
Gupta, A.; Carroll, W. F.; Moacanin, J.
1979-01-01
A generalized methodology currently employed at JPL was used to develop an analytical model for the effects of high-energy electrons and for interactions between electron and ultraviolet effects. Chemical kinetic concepts were applied in defining quantifiable parameters; the need for determining short-lived transient species and their concentrations was demonstrated. The results demonstrate a systematic and cost-effective means of addressing the issues and show applicable qualitative and quantitative relationships between space radiation and simulation parameters. An equally important result is the identification of critical initial experiments necessary to further clarify these relationships. Topics discussed include facility and test design; rastered vs. diffuse continuous e-beam; valid acceleration levels; simultaneous vs. sequential exposure to different types of radiation; and interruption of test continuity.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2016-10-01
Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.
Exploring information transmission in gene networks using stochastic simulation and machine learning
NASA Astrophysics Data System (ADS)
Park, Kyemyung; Prüstel, Thorsten; Lu, Yong; Narayanan, Manikandan; Martins, Andrew; Tsang, John
How gene regulatory networks operate robustly despite environmental fluctuations and biochemical noise is a fundamental question in biology. Mathematically, the stochastic dynamics of a gene regulatory network can be modeled using the chemical master equation (CME), but nonlinearity and other challenges render analytical solutions of CMEs difficult to attain. While approximation and stochastic-simulation approaches have been devised for simple models, obtaining a more global picture of a system's behaviors in high-dimensional parameter space, without simplifying the system substantially, remains a major challenge. Here we present a new framework for understanding and predicting the behaviors of gene regulatory networks in the context of information transmission among genes. Our approach uses stochastic simulation of the network followed by machine learning of the mapping between model parameters and network phenotypes such as information transmission behavior. We also devised ways to visualize high-dimensional phase spaces in intuitive and informative manners. We applied our approach to several gene regulatory circuit motifs, including both feedback and feedforward loops, to reveal underexplored aspects of their operational behaviors. This work is supported by the Intramural Program of NIAID/NIH.
Turbulent flow in a 180 deg bend: Modeling and computations
NASA Technical Reports Server (NTRS)
Kaul, Upender K.
1989-01-01
A low Reynolds number k-epsilon turbulence model is presented which yields accurate predictions of the kinetic energy near the wall. The model is validated with the experimental channel flow data of Kreplin and Eckelmann. The predictions are also compared with earlier results from direct simulation of turbulent channel flow. The model is especially useful for internal flows where the inflow boundary condition of epsilon is not easily prescribed. The model partly derives from observations based on earlier direct simulation results of near-wall turbulence. The low Reynolds number turbulence model, together with an existing curvature correction appropriate to spinning cylinder flows, was used to simulate the flow in a U-bend with the same radius of curvature as the Space Shuttle Main Engine (SSME) Turn-Around Duct (TAD). The present computations indicate a space-varying curvature correction parameter, as opposed to the constant parameter used in the spinning cylinder flows. Comparison with the limited available experimental data is made. The comparison is favorable, but detailed experimental data are needed to further improve the curvature model.
NASA Astrophysics Data System (ADS)
Daya Sagar, B. S.
2005-01-01
Spatio-temporal patterns of small water bodies (SWBs) under the influence of temporally varied stream flow discharge are simulated in discrete space by employing geomorphologically realistic expansion and contraction transformations. Cascades of expansion-contraction are systematically performed by synchronizing them with stream flow discharge simulated via the logistic map. Templates with definite characteristic information are defined from the stream flow discharge pattern as the basis for modeling the spatio-temporal organization of randomly situated surface water bodies of various sizes and shapes. These spatio-temporal patterns under varied values of the parameter λ controlling the stream flow discharge pattern are characterized by estimating their fractal dimensions. At various values of the nonlinear control parameter λ, we show that the union of the boundaries of the water bodies, which traverses both water-body and non-water-body space, acts as a geomorphic attractor. The computed fractal dimensions of these attractors are 1.58, 1.53, 1.78, 1.76, 1.84, and 1.90 at λ values of 1, 2, 3, 3.46, 3.57, and 3.99, respectively. These values are in line with general visual observations.
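The logistic map used here as a surrogate for stream flow discharge is easy to reproduce. A minimal sketch (the initial condition and iteration count are arbitrary choices, not values from the paper):

```python
def logistic_series(lam, x0=0.3, n=100):
    """Iterate the logistic map x -> lam * x * (1 - x).
    For 1 < lam <= 3 the orbit settles to the fixed point (lam - 1) / lam;
    for lam near 4 (e.g. 3.99 above) the orbit is chaotic."""
    xs = [x0]
    for _ in range(n):
        xs.append(lam * xs[-1] * (1.0 - xs[-1]))
    return xs

series = logistic_series(2.5)
# converges toward the fixed point (2.5 - 1) / 2.5 = 0.6
```

Each value of λ thus yields a qualitatively different discharge sequence driving the expansion-contraction cascades.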
Stochastic dynamics and logistic population growth
NASA Astrophysics Data System (ADS)
Méndez, Vicenç; Assaf, Michael; Campos, Daniel; Horsthemke, Werner
2015-06-01
The Verhulst model is probably the best known macroscopic rate equation in population ecology. It depends on two parameters, the intrinsic growth rate and the carrying capacity. These parameters can be estimated for different populations and are related to the reproductive fitness and the competition for limited resources, respectively. We investigate analytically and numerically the simplest possible microscopic scenarios that give rise to the logistic equation in the deterministic mean-field limit. We provide a definition of the two parameters of the Verhulst equation in terms of microscopic parameters. In addition, we derive the conditions for extinction or persistence of the population by employing either the momentum-space spectral theory or the real-space Wentzel-Kramers-Brillouin approximation to determine the probability distribution function and the mean time to extinction of the population. Our analytical results agree well with numerical simulations.
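In the deterministic mean-field limit discussed above, the Verhulst equation dn/dt = r n (1 - n/K) has the closed-form solution n(t) = K / (1 + (K/n0 - 1) e^(-rt)). A simple forward-Euler integration (step count chosen arbitrarily) reproduces it:

```python
import math

def logistic_exact(n0, r, K, t):
    """Closed-form solution of the Verhulst equation."""
    return K / (1.0 + (K / n0 - 1.0) * math.exp(-r * t))

def logistic_euler(n0, r, K, t, steps=100000):
    """Forward-Euler integration of dn/dt = r * n * (1 - n / K)."""
    n, dt = float(n0), t / steps
    for _ in range(steps):
        n += dt * r * n * (1.0 - n / K)
    return n
```

Here r is the intrinsic growth rate and K the carrying capacity; both approaches drive the population toward K, whereas only the stochastic microscopic models analyzed in the paper capture extinction.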
Evaluation of powertrain solutions for future tactical truck vehicle systems
NASA Astrophysics Data System (ADS)
Pisu, Pierluigi; Cantemir, Codrin-Gruie; Dembski, Nicholas; Rizzoni, Giorgio; Serrao, Lorenzo; Josephson, John R.; Russell, James
2006-05-01
The article presents the results of a large-scale design space exploration for the hybridization of two off-road vehicles in the Future Tactical Truck System (FTTS) family: the Maneuver Sustainment Vehicle (MSV) and the Utility Vehicle (UV). Series hybrid architectures are examined. The objective of the paper is to illustrate a novel design methodology that allows for the choice of optimal values for several vehicle parameters. The methodology consists of an extensive design space exploration, which involves running a large number of computer simulations with systematically varied vehicle design parameters; each variant is paced through several different mission profiles, and multiple attributes of performance are measured. The resulting designs are filtered to choose the design tradeoffs that best satisfy the performance and fuel economy requirements. In the end, a few promising vehicle configurations are selected that warrant additional detailed investigation, including neglected metrics such as ride quality and drivability. Several powertrain architectures have been simulated. The design parameters include the number of axles in the vehicle (2 or 3), the number of electric motors per axle (1 or 2), the type of internal combustion engine, and the type and quantity of energy storage devices (batteries, electrochemical capacitors, or both together). An energy management control strategy has also been developed to provide efficiency and performance; its control parameters are tunable and have been included in the design space exploration. The results show that the internal combustion engine and the energy storage devices are extremely important for vehicle performance.
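The enumerate-simulate-filter workflow described above can be sketched with a toy design space. The cost functions below are invented placeholders (any real study would run mission-profile simulations instead), and the design variables mirror those listed in the abstract:

```python
from itertools import product

# Hypothetical design space; objective values are illustrative only.
AXLES = [2, 3]
MOTORS_PER_AXLE = [1, 2]
STORAGE = ["battery", "capacitor", "hybrid"]

def evaluate(design):
    """Return (fuel use, 0-60 time); lower is better for both objectives."""
    n_axles, n_motors, store = design
    fuel = (10.0 + 1.5 * n_axles - 0.8 * n_motors
            + {"battery": 0.0, "capacitor": 0.5, "hybrid": -0.3}[store])
    accel = 12.0 - 1.2 * n_motors - (0.8 if store == "capacitor" else 0.0)
    return fuel, accel

def pareto_front(designs):
    """Keep designs that no other design strictly dominates in both objectives."""
    scored = [(d, evaluate(d)) for d in designs]
    return [d for d, (f, a) in scored
            if not any(f2 <= f and a2 <= a and (f2 < f or a2 < a)
                       for _, (f2, a2) in scored)]

designs = list(product(AXLES, MOTORS_PER_AXLE, STORAGE))
front = pareto_front(designs)
```

The surviving designs represent the tradeoffs that would go forward to detailed investigation.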
The structure and dynamics of tornado-like vortices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nolan, D.S.; Farrell, B.F.
The structure and dynamics of axisymmetric tornado-like vortices are explored with a numerical model of axisymmetric incompressible flow based on recently developed numerical methods. The model is first shown to compare favorably with previous results and is then used to study the effects of varying the major parameters controlling the vortex: the strength of the convective forcing, the strength of the rotational forcing, and the magnitude of the model eddy viscosity. Dimensional analysis of the model problem indicates that the results must depend on only two dimensionless parameters. The natural choices for these two parameters are a convective Reynolds number (based on the velocity scale associated with the convective forcing) and a parameter analogous to the swirl ratio in laboratory models. However, by examining sets of simulations with different model parameters, it is found that a dimensionless parameter known as the vortex Reynolds number, which is the ratio of the far-field circulation to the eddy viscosity, is more effective than the conventional swirl ratio for predicting the structure of the vortex. The parameter space defined by the choices for model parameters is further explored with large sets of numerical simulations. For much of this parameter space it is confirmed that the vortex structure and time-dependent behavior depend strongly on the vortex Reynolds number and only weakly on the convective Reynolds number. The authors also find that for higher convective Reynolds numbers, the maximum possible wind speed increases, and the rotational forcing necessary to achieve that wind speed decreases. Physical reasoning is used to explain this behavior, and implications for tornado dynamics are discussed.
Low Velocity Earth-Penetration Test and Analysis
NASA Technical Reports Server (NTRS)
Fasanella, Edwin L.; Jones, Yvonne; Knight, Norman F., Jr.; Kellas, Sotiris
2001-01-01
Modeling and simulation of structural impacts into soil continue to challenge analysts to develop accurate material models and detailed analytical simulations to predict the soil penetration event. This paper discusses finite element modeling of a series of penetrometer drop tests into soft clay. Parametric studies are performed with penetrometers of varying diameters, masses, and impact speeds to a maximum of 45 m/s. Parameters influencing the simulation such as the contact penalty factor and the material model representing the soil are also studied. An empirical relationship between key parameters is developed and is shown to correlate experimental and analytical results quite well. The results provide preliminary design guidelines for Earth impact that may be useful for future space exploration sample return missions.
Constraining neutron guide optimizations with phase-space considerations
NASA Astrophysics Data System (ADS)
Bertelsen, Mads; Lefmann, Kim
2016-09-01
We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to that of guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing ability of the guide to be optimized, ranging from perfectly focusing to no correlation between position and velocity. The second parameter controls neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints, which demonstrate a higher signal-to-noise ratio than conventional optimizations. Furthermore, exploring the parameter that controls neutron intake shows that the simulated optimal intake is close to the analytically predicted value, when assuming that the guide is dominated by multiple scattering events.
Schmidt, Julia C; Astasov-Frauenhoffer, Monika; Waltimo, Tuomas; Weiger, Roland; Walter, Clemens
2017-06-01
The objective of this study was to evaluate the efficacy of four different side-to-side toothbrushes and the impact of various brushing parameters on noncontact biofilm removal in an adjustable interdental space model. A three-species biofilm, consisting of Porphyromonas gingivalis, Fusobacterium nucleatum, and Streptococcus sanguinis, was formed in vitro on protein-coated titanium disks using a flow chamber combined with a static biofilm growth model. Subsequently, the biofilm-coated disks were exposed to four different powered toothbrushes (A, B, C, D). The parameters distance (0 and 1 mm), brushing time (2, 4, and 6 s), interdental space width (1, 2, and 3 mm), and toothbrush angulation (45° and 90°) were tested. The biofilm volumes were determined using volumetric analyses with confocal laser scanning microscope (Zeiss LSM700) images and Imaris version 7.7.2 software. The median percentages of simulated interdental biofilm reduction by the tested toothbrushes ranged from 7 to 64 %. The abilities of the analyzed toothbrushes to reduce the in vitro biofilm differed significantly (p < 0.05). Three of the tested toothbrushes (A, B, C) were able to significantly reduce a simulated interdental biofilm by noncontact brushing (p ≤ 0.005). The brushing parameters and their combinations tested in the experiments revealed only minor effects on in vitro interdental biofilm reduction (p > 0.05). A three-species in vitro biofilm could be altered by noncontact brushing with toothbrushes A, B, and C in an artificial interdental space model. Certain side-to-side toothbrushes demonstrate in vitro a high efficacy in interdental biofilm removal without bristle-to-biofilm contact.
Parameter Trade Studies for Coherent Lidar Measurements of Wind from Space
NASA Technical Reports Server (NTRS)
Kavaya, Michael J.; Frehlich, Rod G.
2007-01-01
The design of an orbiting wind-profiling lidar requires selection of dozens of lidar, measurement-scenario, and mission-geometry parameters, in addition to prediction of atmospheric parameters. Typical mission designs do not include a thorough trade optimization of all of these parameters. We report here the integration of a recently published parameterization of coherent lidar wind velocity measurement performance with an orbiting coherent wind lidar computer simulation, and the use of these combined tools to perform some preliminary parameter trades. We use the 2006 NASA Global Wind Observing Sounder mission design as the starting point for the trades.
A Validation Study of Merging and Spacing Techniques in a NAS-Wide Simulation
NASA Technical Reports Server (NTRS)
Glaab, Patricia C.
2011-01-01
In November 2010, Intelligent Automation, Inc. (IAI) delivered an M&S software tool that allows system-level studies of the complex terminal airspace with the ACES simulation. The software was evaluated against current-day arrivals in the Atlanta TRACON using arrival schedules for Atlanta's Hartsfield-Jackson International Airport (KATL). Results of this validation effort are presented, describing the data sets, traffic flow assumptions and techniques, and arrival-rate comparisons between reported landings at Atlanta and simulated arrivals using the same traffic sets in ACES equipped with M&S. Initial results showed the simulated system capacity to be significantly below the arrival capacity seen at KATL. Data were gathered for Atlanta using commercial airport and flight tracking websites (like FlightAware.com) and analyzed to ensure compatible techniques were used for result reporting and comparison. TFM operators for Atlanta were consulted for tuning final simulation parameters and for guidance in flow management techniques during high-volume operations. Using these modified parameters and incorporating TFM guidance for efficiencies in flowing aircraft, the simulation matched the arrival capacity for KATL. Following this validation effort, a sensitivity study was conducted to measure the impact of variations in system parameters on the Atlanta airport arrival capacity.
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
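The emulation idea, tabulating an expensive simulator at a finite set of design points, fitting a cheap surrogate, and tightening accuracy by adding runs, can be sketched in one dimension. The "simulator" below is a hypothetical smooth stand-in function, not a quantity from the paper:

```python
import bisect
import math

def run_simulation(theta):
    """Stand-in for one expensive simulation run producing one observable."""
    return math.sin(theta) + 0.5 * theta

def build_emulator(design_points):
    """Tabulate the simulator at design points and return a piecewise-linear
    interpolant, the simplest possible emulator."""
    ts = sorted(design_points)
    vals = [run_simulation(t) for t in ts]
    def emulator(t):
        i = min(max(bisect.bisect_left(ts, t), 1), len(ts) - 1)
        w = (t - ts[i - 1]) / (ts[i] - ts[i - 1])
        return (1.0 - w) * vals[i - 1] + w * vals[i]
    return emulator

coarse = build_emulator([0.4 * k for k in range(6)])   # 6 runs over [0, 2]
fine = build_emulator([0.1 * k for k in range(21)])    # 21 runs over [0, 2]
```

Adding design points shrinks the emulator error, mirroring the paper's prescribed refinement of the simulation campaign (their surrogates are multidimensional, of course).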
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
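The contrast between coarse percent-error validation and comparing fine-grained statistical characteristics can be illustrated with a two-sample Kolmogorov-Smirnov statistic over per-event quantities from a trace. This is a generic sketch of the idea, not the toolset's actual metric:

```python
import bisect

def percent_error(simulated, measured):
    """Coarse validation: relative error of a single scalar (e.g. runtime)."""
    return abs(simulated - measured) / measured * 100.0

def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between the
    two empirical CDFs, sensitive to distribution shape, not just the total."""
    xs, ys = sorted(xs), sorted(ys)
    d = 0.0
    for p in sorted(set(xs + ys)):
        fx = bisect.bisect_right(xs, p) / len(xs)
        fy = bisect.bisect_right(ys, p) / len(ys)
        d = max(d, abs(fx - fy))
    return d
```

Two traces can match in total runtime (zero percent error) while their per-event distributions differ badly, which is exactly what a distribution-level statistic exposes.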
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-01-01
Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABMs) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain appropriate estimates of its key parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method within the framework of history matching, is developed, and a novel parameter estimation method that incorporates the experimental data into the ABM simulator is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model on the input and output data of the ABM, playing the role of an emulator during history matching. Next, we reduce the input parameter space by introducing an implausibility measure to discard implausible input values. Finally, the model parameters are estimated with the particle swarm optimization algorithm (PSO) by fitting the experimental data over the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of the proposed method, and the results show that it not only has good fitting and predictive accuracy but also offers favorable computational efficiency. PMID:29194393
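The implausibility measure used to discard input values during history matching is commonly defined as the distance between the emulator prediction and the observation in units of their combined standard deviation. A minimal sketch, with a toy linear emulator standing in for the trained GAM:

```python
import math

def implausibility(pred_mean, pred_var, obs, obs_var):
    """History-matching implausibility: |prediction - observation| divided
    by the combined standard deviation of emulator and observation."""
    return abs(pred_mean - obs) / math.sqrt(pred_var + obs_var)

def non_implausible(candidates, emulator, obs, obs_var, cutoff=3.0):
    """Keep candidate parameter values whose implausibility is <= cutoff."""
    return [theta for theta in candidates
            if implausibility(*emulator(theta), obs, obs_var) <= cutoff]

# Toy emulator: predicts the observable to equal theta, with fixed variance.
def toy_emulator(theta):
    return float(theta), 0.01

kept = non_implausible(range(11), toy_emulator, obs=5.0, obs_var=0.01)
```

The optimizer (PSO in the paper) then searches only over the surviving, non-implausible region.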
Naranjo, Ramon C.; Niswonger, Richard G.; Stone, Mark; Davis, Clinton; McKay, Alan
2012-01-01
We describe an approach for calibrating a two-dimensional (2-D) flow model of hyporheic exchange using observations of temperature and pressure to estimate hydraulic and thermal properties. A longitudinal 2-D heat and flow model was constructed for a riffle-pool sequence to simulate flow paths and flux rates for variable discharge conditions. A uniform random sampling approach was used to examine the solution space and identify optimal values at local and regional scales. We used a regional sensitivity analysis to examine the effects of parameter correlation and nonuniqueness commonly encountered in multidimensional modeling. The results from this study demonstrate the ability to estimate hydraulic and thermal parameters using measurements of temperature and pressure to simulate exchange and flow paths. Examination of the local parameter space provides the potential for refinement of zones that are used to represent sediment heterogeneity within the model. The results indicate vertical hydraulic conductivity was not identifiable solely using pressure observations; however, a distinct minimum was identified using temperature observations. The measured temperature and pressure and estimated vertical hydraulic conductivity values indicate the presence of a discontinuous low-permeability deposit that limits the vertical penetration of seepage beneath the riffle, whereas there is a much greater exchange where the low-permeability deposit is absent. Using both temperature and pressure to constrain the parameter estimation process provides the lowest overall root-mean-square error as compared to using solely temperature or pressure observations. This study demonstrates the benefits of combining continuous temperature and pressure for simulating hyporheic exchange and flow in a riffle-pool sequence. Copyright 2012 by the American Geophysical Union.
Aeolus End-To-End Simulator and Wind Retrieval Algorithms up to Level 1B
NASA Astrophysics Data System (ADS)
Reitebuch, Oliver; Marksteiner, Uwe; Rompel, Marc; Meringer, Markus; Schmidt, Karsten; Huber, Dorit; Nikolaus, Ines; Dabas, Alain; Marshall, Jonathan; de Bruin, Frank; Kanitz, Thomas; Straume, Anne-Grete
2018-04-01
The first wind lidar in space, ALADIN, will be deployed on ESA's Aeolus mission. In order to assess the performance of ALADIN and to optimize the wind retrieval and calibration algorithms, an end-to-end simulator was developed. This allows realistic simulations of the data downlinked by Aeolus. Together with the operational processors, this setup is used to assess random and systematic error sources and to perform sensitivity studies on the influence of atmospheric and instrument parameters.
Real-time 3-D space numerical shake prediction for earthquake early warning
NASA Astrophysics Data System (ADS)
Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang
2017-12-01
In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, these methods assume that the wave propagates on the 2-D surface of the earth. In fact, since seismic waves propagate through the 3-D sphere of the earth, modeling wave propagation in 2-D space results in inaccurate wave estimates. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model estimates real-time ground motion precisely and alleviates the overprediction seen with the 2-D model.
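One intuition for why a 2-D surface model misestimates amplitudes: energy from a point source spreads over a circle in 2-D but over a sphere in 3-D, so the geometric decay rates differ. A minimal sketch (ignoring the scattering and attenuation that the radiative-transfer model includes):

```python
import math

def geometric_spreading(e0, r, dim=3):
    """Energy density at distance r from a point source radiating energy e0,
    spread uniformly over a circle (2-D) or a sphere (3-D)."""
    if dim == 2:
        return e0 / (2.0 * math.pi * r)        # decays as 1/r
    return e0 / (4.0 * math.pi * r * r)        # decays as 1/r^2

# The 2-D density exceeds the 3-D one by a factor of 2r, so a surface model
# tends to overpredict shaking at large distances, consistent with the
# overprediction reported above for the 2-D model.
```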
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high, and inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
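Finding the dominant, data-informed direction of a parameter space can be sketched with power iteration on a small symmetric matrix, standing in here for the kind of prior-preconditioned operator such methods analyze. This is only an illustration of the subspace idea, not the paper's actual construction:

```python
def power_iteration(A, iters=200):
    """Leading eigenvector of a small symmetric matrix by power iteration.
    The returned direction is the one-dimensional subspace that dominates
    the action of A; its orthogonal complement is weakly informed."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# A toy 2-D "Hessian": the first coordinate is twice as informed as the second.
direction = power_iteration([[2.0, 0.0], [0.0, 1.0]])
```

In the full method, the likelihood is approximated on the retained subspace while the prior is kept on its complement.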
HIGH-RESOLUTION DATASET OF URBAN CANOPY PARAMETERS FOR HOUSTON, TEXAS
Urban dispersion and air quality simulation models applied at various horizontal scales require different levels of fidelity for specifying the characteristics of the underlying surfaces. As the modeling scales approach the neighborhood level (~1 km horizontal grid spacing), the...
Redshift-space distortions with the halo occupation distribution - II. Analytic model
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.
2007-01-01
We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm, σ8, and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.
The Simpsons program: 6-D phase space tracking with acceleration
NASA Astrophysics Data System (ADS)
Machida, S.
1993-12-01
A particle tracking code, Simpsons, which tracks in 6-D phase space including energy ramping, has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality in much the same way as real machines, unlike existing synchrotron tracking codes, which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as functions of time are read from an input file defining the machine cycle. The code is used to study beam dynamics with time-dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
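The time-as-independent-variable idea can be illustrated with a one-particle longitudinal map: each turn applies an rf energy kick followed by a phase slip. The machine parameters below are arbitrary illustrative numbers, not SSC values, and this 2-D slice is far simpler than Simpsons' 6-D tracking:

```python
import math

def track_longitudinal(phi0, de0, turns, v_rf=1.0e5, e0=1.0e9, eta=0.1, h=1):
    """Turn-by-turn longitudinal tracking in a stationary rf bucket.
    phi is the rf phase [rad]; de is the relative energy error dE/E0."""
    phi, de = phi0, de0
    phis = []
    for _ in range(turns):
        de += (v_rf / e0) * math.sin(phi)     # energy kick from the rf cavity
        phi -= 2.0 * math.pi * h * eta * de   # phase slip over one turn
        phis.append(phi)
    return phis

phis = track_longitudinal(0.1, 0.0, 5000)
# small-amplitude synchrotron oscillation: phi stays bounded near +/- 0.1 rad
```

In a time-driven code, v_rf and e0 would be interpolated from the ramp curves at each turn rather than held fixed.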
Belosi, Maria F; Rodriguez, Miguel; Fogliata, Antonella; Cozzi, Luca; Sempau, Josep; Clivio, Alessandro; Nicolini, Giorgia; Vanetti, Eugenio; Krauss, Harald; Khamphan, Catherine; Fenoglietto, Pascal; Puxeu, Josep; Fedele, David; Mancosu, Pietro; Brualla, Lorenzo
2014-05-01
Phase-space files for Monte Carlo simulation of the Varian TrueBeam beams have been made available by Varian. The aim of this study is to evaluate the accuracy of the distributed phase-space files for flattening filter free (FFF) beams against experimental measurements from ten TrueBeam linacs. The phase-space files were used as input to PRIMO, a recently released Monte Carlo program based on the PENELOPE code. Simulations of 6 and 10 MV FFF beams were computed in a virtual water phantom for field sizes of 3 × 3, 6 × 6, and 10 × 10 cm(2) using 1 × 1 × 1 mm(3) voxels, and for 20 × 20 and 40 × 40 cm(2) with 2 × 2 × 2 mm(3) voxels. The particles contained in the initial phase-space files were transported downstream to a plane just above the phantom surface, where a second phase-space file was tallied; particles were then transported from this second phase-space file into the water phantom. Experimental data consisted of depth doses and profiles at five different depths acquired at SSD = 100 cm (seven datasets) and SSD = 90 cm (three datasets). Simulations and experimental data were compared in terms of dose difference. Gamma analysis was also performed using 1%, 1 mm and 2%, 2 mm criteria of dose difference and distance to agreement, respectively. Additionally, the parameters characterizing the dose profiles of unflattened beams were evaluated for both measurements and simulations. Analysis of depth dose curves showed that dose differences increased with increasing field size and depth; this effect might be partly explained by an underestimation of the primary beam energy used to compute the phase-space files. Average dose differences reached 1% for the largest field size. Lateral profiles presented dose differences well within 1% for fields up to 20 × 20 cm(2), while the discrepancy increased toward 2% in the 40 × 40 cm(2) cases.
Gamma analysis resulted in an agreement of 100% when the 2%, 2 mm criterion was used, with the only exception of the 40 × 40 cm(2) field (∼95% agreement). With the more stringent 1%, 1 mm criterion, the agreement decreased to roughly 95% for field sizes up to 10 × 10 cm(2) and was worse for larger fields. The FFF-specific unflatness and slope parameters are in line with a possible underestimation of the energy in the simulations relative to the experimental data. The agreement between Monte Carlo simulations and experimental data proved that the evaluated Varian phase-space files for FFF beams from TrueBeam can be used as radiation sources for accurate Monte Carlo dose estimation, especially for field sizes up to 10 × 10 cm(2), which is the range of field sizes most commonly used with FFF, high-dose-rate beams.
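The gamma analysis used above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1D sketch of the gamma index (illustrative only; the profile, normalization, and grid are assumptions, not the PRIMO implementation) might look like:

```python
import numpy as np

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=2.0):
    """1D gamma index: for each reference point, the minimum over
    evaluated points of the combined dose-difference / DTA metric.
    dd is fractional (2% -> 0.02, normalized to the reference maximum),
    dta is in mm. A point passes when gamma <= 1."""
    norm = ref_dose.max()
    gammas = np.empty(len(ref_pos))
    for i, (r, d) in enumerate(zip(ref_pos, ref_dose)):
        dist2 = ((eval_pos - r) / dta) ** 2
        dose2 = ((eval_dose - d) / (dd * norm)) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

x = np.linspace(0, 100, 201)              # positions in mm, 0.5 mm grid
ref = np.exp(-((x - 50) / 20) ** 2)       # toy dose profile

# Identical profiles pass everywhere under 2%, 2 mm ...
g = gamma_index(x, ref, x, ref)
pass_rate = np.mean(g <= 1.0)

# ... while a profile shifted by 3 mm fails in the steep-gradient region.
g_shift = gamma_index(x, ref, x, np.exp(-((x - 53) / 20) ** 2))
pass_rate_shift = np.mean(g_shift <= 1.0)
```

Real 3D implementations interpolate the evaluated distribution and restrict the search to a neighborhood for speed; the brute-force minimum above shows only the metric itself.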
Towards physics responsible for large-scale Lyman-α forest bias parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agnieszka M. Cieplak; Slosar, Anze
2016-03-08
Using a series of carefully constructed numerical experiments based on hydrodynamic cosmological SPH simulations, we attempt to build an intuition for the relevant physics behind the large-scale density (b_δ) and velocity gradient (b_η) biases of the Lyman-α forest. Starting with the fluctuating Gunn-Peterson approximation applied to the smoothed total density field in real space, and progressing through redshift space with no thermal broadening, redshift space with thermal broadening, and hydrodynamically simulated baryon fields, we investigate how approximations found in the literature fare. We find that Seljak's 2012 analytical formulae for these bias parameters work surprisingly well in the limit of no thermal broadening and linear redshift-space distortions. We also show that his b_η formula is exact in the limit of no thermal broadening. Since the introduction of thermal broadening significantly affects its value, we speculate that a combination of large-scale measurements of b_η and the small-scale flux PDF might be a sensitive probe of the thermal state of the IGM. Lastly, we find that large-scale biases derived from the smoothed total matter field are within 10–20% of those based on hydrodynamical quantities, in line with other measurements in the literature.
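The fluctuating Gunn-Peterson approximation at the heart of this analysis maps density to transmitted flux via τ ∝ ρ^β, F = exp(−τ). A toy numerical estimate of the flux density bias b_δ, measured as the response of the mean flux to a large-scale overdensity, can be sketched as follows (the lognormal small-scale field and all parameter values are illustrative assumptions, not the paper's simulations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Small-scale lognormal density factor, normalized to unit mean.
delta_s = np.exp(0.5 * rng.standard_normal(200000))
delta_s /= delta_s.mean()

def mean_flux(delta_L, tau0=0.3, beta=1.6):
    """Mean transmitted flux under the FGPA, tau = tau0 * rho^beta,
    where the density rho is the fixed small-scale field modulated by a
    large-scale overdensity delta_L."""
    rho = (1.0 + delta_L) * delta_s
    return np.exp(-tau0 * rho ** beta).mean()

# Finite-difference estimate of b_delta = d ln<F> / d delta_L at delta_L = 0.
eps = 0.01
b_delta = (np.log(mean_flux(eps)) - np.log(mean_flux(-eps))) / (2 * eps)
```

The bias comes out negative, as expected: a large-scale overdensity increases absorption and suppresses the mean transmitted flux.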
The INAF/IAPS Plasma Chamber for ionospheric simulation experiment
NASA Astrophysics Data System (ADS)
Diego, Piero
2016-04-01
The plasma chamber is particularly suitable to perform studies for the following applications: - plasma compatibility and functional tests on payloads envisioned to operate in the ionosphere (e.g. sensors onboard satellites, exposed to the external plasma environment); - calibration/testing of plasma diagnostic sensors; - characterization and compatibility tests on components for space applications (e.g. optical elements, harness, satellite paints, photo-voltaic cells, etc.); - experiments on satellite charging in a space plasma environment; - tests on active experiments which use ion, electron or plasma sources (ion thrusters, hollow cathodes, field effect emitters, plasma contactors, etc.); - possible studies relevant to fundamental space plasma physics. The facility consists of a large-volume vacuum tank (a cylinder of length 4.5 m and diameter 1.7 m) equipped with a Kaufman-type plasma source, operating with Argon gas, capable of generating a plasma beam with parameters (i.e. density and electron temperature) close to the values encountered in the ionosphere at F layer altitudes. The plasma beam (Ar+ ions and electrons) is accelerated into the chamber at a velocity that reproduces the relative motion between an orbiting satellite and the ionosphere (≈ 8 km/s). This feature, in particular, allows laboratory simulations of the actual compression and depletion phenomena which take place in the ram and wake regions around satellites moving through the ionosphere. The reproduced plasma environment is monitored using Langmuir Probes (LP) and Retarding Potential Analyzers (RPA). These sensors can be automatically moved within the experimental space using a sled mechanism. Such a feature allows the acquisition of the plasma parameters all around the space payload installed into the chamber for testing. 
The facility is currently in use to test the payloads of the CSES (China Seismo-Electromagnetic Satellite) mission, devoted to plasma parameter and electric field measurements in a polar orbit at 500 km altitude.
ACES: Space shuttle flight software analysis expert system
NASA Technical Reports Server (NTRS)
Satterwhite, R. Scott
1990-01-01
The Analysis Criteria Evaluation System (ACES) is a knowledge based expert system that automates the final certification of the Space Shuttle onboard flight software. Guidance, navigation and control of the Space Shuttle through all its flight phases are accomplished by a complex onboard flight software system. This software is reconfigured for each flight to allow thousands of mission-specific parameters to be introduced and must therefore be thoroughly certified prior to each flight. This certification is performed in ground simulations by executing the software in the flight computers. Flight trajectories from liftoff to landing, including abort scenarios, are simulated and the results are stored for analysis. The current methodology of performing this analysis is repetitive and requires many man-hours. The ultimate goals of ACES are to capture the knowledge of the current experts and improve the quality and reduce the manpower required to certify the Space Shuttle onboard flight software.
NASA Astrophysics Data System (ADS)
Toutin, Thierry; Wang, Huili; Charbonneau, Francois; Schmitt, Carla
2013-08-01
This paper presents two methods for the orthorectification of full/compact polarimetric SAR data: the polarimetric processing is performed in the image space (scientist's idealism) or in the ground space (user's realism), that is, before or after the geometric processing, respectively. Radarsat-2 (R2) fine-quad and simulated very high-resolution RCM data acquired with different look angles over a hilly-relief study site were processed using an accurate lidar digital surface model. Quantitative evaluations between the two methods as a function of different geometric and radiometric parameters were performed to evaluate the impact during the orthorectification. The results demonstrated that the ground-space method can be safely applied to polarimetric R2 SAR data, except for steep look angles combined with steep terrain slopes. On the other hand, the ground-space method cannot be applied to simulated compact RCM data due to the 17 dB noise floor and oversampling.
Dynamics in multiple-well Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Nigro, M.; Capuzzi, P.; Cataldo, H. M.; Jezek, D. M.
2018-01-01
We study the dynamics of three-dimensional weakly linked Bose-Einstein condensates using a multimode model with an effective interaction parameter. The system is confined by a ring-shaped four-well trapping potential. By constructing a two-mode Hamiltonian in a reduced highly symmetric phase space, we examine the periodic orbits and calculate their time periods both in the self-trapping and Josephson regimes. The dynamics in the vicinity of the reduced phase space is investigated by means of a Floquet multiplier analysis, finding regions of different linear stability and analyzing their implications on the exact dynamics. The numerical exploration in an extended region of the phase space demonstrates that two-mode tools can also be useful for performing a partition of the space in different regimes. Comparisons with Gross-Pitaevskii simulations confirm these findings and emphasize the importance of properly determining the effective on-site interaction parameter governing the multimode dynamics.
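The reduced two-mode dynamics invoked above is commonly written in terms of the population imbalance z and relative phase φ. A generic two-mode sketch (the standard Smerzi-style equations for a double well, with an illustrative interaction parameter; not the paper's four-well multimode model) shows the Josephson and self-trapping regimes:

```python
import numpy as np

def rhs(y, lam):
    """Two-mode (Josephson) equations in dimensionless time:
       dz/dt   = -sqrt(1 - z^2) * sin(phi)
       dphi/dt = lam*z + z / sqrt(1 - z^2) * cos(phi)
    lam plays the role of the effective interaction parameter."""
    z, phi = y
    s = np.sqrt(1.0 - z * z)
    return np.array([-s * np.sin(phi), lam * z + (z / s) * np.cos(phi)])

def integrate(z0, phi0, lam, dt=1e-3, steps=20000):
    """Classical fixed-step RK4 integration, returning the z history."""
    y = np.array([z0, phi0])
    zs = np.empty(steps)
    for i in range(steps):
        k1 = rhs(y, lam)
        k2 = rhs(y + 0.5 * dt * k1, lam)
        k3 = rhs(y + 0.5 * dt * k2, lam)
        k4 = rhs(y + dt * k3, lam)
        y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        zs[i] = y[0]
    return zs

# Same initial imbalance, two interaction strengths:
z_joseph = integrate(0.6, 0.0, 1.0)    # weak coupling: Josephson oscillation
z_trap = integrate(0.6, 0.0, 20.0)     # strong coupling: self-trapping
```

In the Josephson regime z oscillates through zero, while in the self-trapped regime it stays on one side of zero; the critical interaction separating the two follows from energy conservation in this reduced phase space.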
NASA Astrophysics Data System (ADS)
Salinas, J. L.; Nester, T.; Komma, J.; Bloeschl, G.
2017-12-01
Generation of realistic synthetic spatial rainfall is of pivotal importance for assessing regional hydroclimatic hazard, as it provides the input for long-term rainfall-runoff simulations. The correct reproduction of observed rainfall characteristics, such as regional intensity-duration-frequency curves and spatial and temporal correlations, is necessary to adequately model the magnitude and frequency of flood peaks, by reproducing antecedent soil moisture conditions before extreme rainfall events and the joint probability of flood waves at confluences. In this work, we present a modification of the model of Bardossy and Plate (1992), in which precipitation is first modeled on a station basis as a multivariate autoregressive (mAr) process in Normal space. The spatial and temporal correlation structures are imposed in the Normal space, allowing for a different temporal autocorrelation parameter for each station while simultaneously ensuring the positive-definiteness of the correlation matrix of the mAr errors. The Normal rainfall is then transformed to a Gamma-distributed space, with parameters varying monthly according to a sinusoidal function, in order to adapt to the observed rainfall seasonality. One of the main differences from the original model is the simulation time step, reduced from 24 h to 6 h. Due to the larger availability of daily rainfall data, as opposed to sub-daily (e.g., hourly) data, the parameters of the Gamma distributions are calibrated to reproduce simultaneously a series of daily rainfall characteristics (mean daily rainfall, standard deviation of daily rainfall, and 24 h intensity-duration-frequency [IDF] curves), as well as other aggregated rainfall measures (mean annual rainfall and monthly rainfall). The calibration of the spatial and temporal correlation parameters is performed such that the catchment-averaged IDF curves aggregated at different temporal scales fit the measured ones. 
The rainfall model is used to generate 10,000 years of synthetic precipitation, which is fed into a rainfall-runoff model to derive the flood frequency in the Tirolean Alps in Austria. Given the number of generated events, the simulation framework is able to generate a large variety of rainfall patterns, as well as reproduce the variograms of relevant extreme rainfall events in the region of interest.
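The Normal-to-Gamma construction described above can be sketched for a single station: an AR(1) process carries the temporal correlation in Normal space, and a probability-integral transform maps it to Gamma-distributed rainfall amounts with a dry threshold. All parameter values and the dry-day treatment below are illustrative assumptions, not the calibrated model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_rainfall(n_steps, rho=0.7, shape=0.5, scale=4.0, p_dry=0.6):
    """Single-station Normal-to-Gamma rainfall sketch at a 6 h time step.
    An AR(1) process with lag-1 correlation rho carries the temporal
    structure in standard-Normal space; its values are passed through the
    Normal CDF and the Gamma quantile function to get rainfall amounts.
    Values below the dry-probability threshold become zero (no rain)."""
    x = np.empty(n_steps)
    x[0] = rng.standard_normal()
    innov = np.sqrt(1.0 - rho ** 2)      # keeps the marginal variance at 1
    for t in range(1, n_steps):
        x[t] = rho * x[t - 1] + innov * rng.standard_normal()
    u = stats.norm.cdf(x)                # uniform marginals
    rain = np.where(u > p_dry,
                    stats.gamma.ppf((u - p_dry) / (1.0 - p_dry),
                                    a=shape, scale=scale),
                    0.0)
    return rain

r = simulate_rainfall(4 * 365 * 40)      # 40 years of 6-hourly values
wet_fraction = np.mean(r > 0)            # should sit near 1 - p_dry
```

The full model adds the multivariate (spatial) correlation across stations and monthly sinusoidal variation of the Gamma parameters on top of this marginal construction.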
NASA Technical Reports Server (NTRS)
Kuhn, A. E.
1975-01-01
A dispersion analysis considering 3 sigma uncertainties (or perturbations) in platform, vehicle, and environmental parameters was performed for the baseline reference mission (BRM) 1 of the space shuttle orbiter. The dispersion analysis is based on the nominal trajectory for the BRM 1. State vector and performance dispersions (or variations) which result from the indicated 3 sigma uncertainties were studied. The dispersions were determined at major mission events and fixed times from lift-off (time slices) and the results will be used to evaluate the capability of the vehicle to perform the mission within a 3 sigma level of confidence and to determine flight performance reserves. A computer program is given that was used for dynamic flight simulations of the space shuttle orbiter.
On the Occurrence of Thermal Nonequilibrium in Coronal Loops
NASA Astrophysics Data System (ADS)
Froment, C.; Auchère, F.; Mikić, Z.; Aulanier, G.; Bocchialini, K.; Buchlin, E.; Solomon, J.; Soubrié, E.
2018-03-01
Long-period EUV pulsations, recently discovered to be common in active regions, are understood to be the coronal manifestation of thermal nonequilibrium (TNE). The active regions previously studied with EIT/Solar and Heliospheric Observatory and AIA/SDO indicated that long-period intensity pulsations are localized in only one or two loop bundles. The basic idea of this study is to understand why. For this purpose, we tested the response of different loop systems, using different magnetic configurations, to different stratifications and strengths of the heating. We present an extensive parameter-space study using 1D hydrodynamic simulations (1020 in total) and conclude that the occurrence of TNE requires specific combinations of parameters. Our study shows that the TNE cycles are confined to specific ranges in parameter space. This naturally explains why only some loops undergo constant periodic pulsations over several days: since the loop geometry and the heating properties generally vary from one loop to another in an active region, only the ones in which these parameters are compatible exhibit TNE cycles. Furthermore, these parameters (heating and geometry) are likely to vary significantly over the duration of a cycle, which potentially limits the possibilities of periodic behavior. This study also confirms that long-period intensity pulsations and coronal rain are two aspects of the same phenomenon: both phenomena can occur for similar heating conditions and can appear simultaneously in the simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tocchini-Valentini, Domenico; Barnard, Michael; Bennett, Charles L.
2012-10-01
We present a method to extract the redshift-space distortion parameter β in configuration space with a minimal set of cosmological assumptions. We show that a novel combination of the observed monopole and quadrupole correlation functions can efficiently remove the impact of mild nonlinearities and redshift errors. The method offers a series of convenient properties: it does not depend on the theoretical linear correlation function, the mean galaxy density is irrelevant, only convolutions are used, and there is no explicit dependence on linear bias. Analyses based on dark matter N-body simulations and Fisher matrix forecasts demonstrate that errors of a few percent on β are possible with a full-sky, 1 (h^-1 Gpc)^3 survey centered at a redshift of unity and with negligible shot noise. We also find a baryonic feature in the normalized quadrupole in configuration space that should complicate the extraction of the growth parameter from the linear theory asymptote, but that does not have a major impact on our method.
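In linear theory the quadrupole-to-monopole ratio of the redshift-space power spectrum depends only on β, so a measured ratio can be inverted for β. A sketch of that textbook Kaiser-limit inversion (not the paper's convolution-based configuration-space estimator) is:

```python
import numpy as np

def quad_mono_ratio(beta):
    """Linear-theory (Kaiser) ratio of the quadrupole to the monopole
    moment of the redshift-space power spectrum, a function of beta only:
        P2/P0 = (4*beta/3 + 4*beta^2/7) / (1 + 2*beta/3 + beta^2/5)."""
    return (4 * beta / 3 + 4 * beta ** 2 / 7) / (1 + 2 * beta / 3 + beta ** 2 / 5)

def solve_beta(ratio, lo=0.0, hi=3.0, tol=1e-10):
    """Invert the monotonically increasing ratio->beta relation by
    bisection on a plausible beta range."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if quad_mono_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta_true = 0.55
beta_rec = solve_beta(quad_mono_ratio(beta_true))   # round-trip recovery
```

The paper's point is precisely that this linear asymptote is fragile against nonlinearities and redshift errors, which motivates its alternative monopole-quadrupole combination.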
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
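The gradient-based Langevin MCMC component can be illustrated with a Metropolis-adjusted Langevin (MALA) sampler on a toy two-dimensional non-Gaussian density, standing in for the KPCA-reduced feature space (the banana-shaped target and step size are illustrative assumptions, not the paper's posterior):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(z):
    """Toy non-Gaussian log-posterior over 2D 'feature' coordinates."""
    return -0.5 * z[0] ** 2 - 0.5 * (z[1] - z[0] ** 2) ** 2

def grad_log_post(z):
    """Analytic gradient of log_post, used to bias the proposals."""
    g0 = -z[0] + 2.0 * z[0] * (z[1] - z[0] ** 2)
    g1 = -(z[1] - z[0] ** 2)
    return np.array([g0, g1])

def mala(n_samples, eps=0.2):
    """Metropolis-adjusted Langevin: propose a gradient-informed step,
    then accept/reject to correct the discretization error."""
    z = np.zeros(2)
    samples = np.empty((n_samples, 2))
    accepted = 0
    for i in range(n_samples):
        mean_fwd = z + 0.5 * eps ** 2 * grad_log_post(z)
        prop = mean_fwd + eps * rng.standard_normal(2)
        mean_bwd = prop + 0.5 * eps ** 2 * grad_log_post(prop)
        # Log proposal densities q(prop|z) and q(z|prop); constants cancel.
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2 * eps ** 2)
        log_q_bwd = -np.sum((z - mean_bwd) ** 2) / (2 * eps ** 2)
        log_alpha = log_post(prop) - log_post(z) + log_q_bwd - log_q_fwd
        if np.log(rng.random()) < log_alpha:
            z = prop
            accepted += 1
        samples[i] = z
    return samples, accepted / n_samples

samples, acc_rate = mala(20000)
```

In the paper's setting the gradients would come from an adjoint solve of the forward model, and the chain would run in the KPCA-projected coordinates rather than the raw parameter field.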
Theory and simulation of backbombardment in single-cell thermionic-cathode electron guns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, J. P.; Biedron, S. G.; Harris, J. R.
2015-04-01
This paper presents a comparison between simulation results and a first principles analytical model of electron back-bombardment developed at Colorado State University for single-cell, thermionic-cathode rf guns. While most previous work on back-bombardment has been specific to particular accelerator systems, this work is generalized to a wide variety of guns within the applicable parameter space. The merits and limits of the analytic model will be discussed. This paper identifies the three fundamental parameters that drive the back-bombardment process, and demonstrates relative accuracy in calculating the predicted back-bombardment power of a single-cell thermionic gun.
Equilibration of experimentally determined protein structures for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Walton, Emily B.; Van Vliet, Krystyn J.
2006-12-01
Preceding molecular dynamics simulations of biomolecular interactions, the molecule of interest is often equilibrated with respect to an initial configuration. This so-called equilibration stage is required because the input structure is typically not within the equilibrium phase space of the simulation conditions, particularly in systems as complex as proteins, which can lead to artifactual trajectories of protein dynamics. The time at which nonequilibrium effects from the initial configuration are minimized—what we will call the equilibration time—marks the beginning of equilibrium phase-space exploration. Note that the identification of this time does not imply exploration of the entire equilibrium phase space. We have found that current equilibration methodologies contain ambiguities that lead to uncertainty in determining the end of the equilibration stage of the trajectory. This results in equilibration times that are either too long, resulting in wasted computational resources, or too short, resulting in the simulation of molecular trajectories that do not accurately represent the physical system. We outline and demonstrate a protocol for identifying the equilibration time that is based on the physical model of Normal Mode Analysis. We attain the computational efficiency required of large-protein simulations via a stretched exponential approximation that enables an analytically tractable and physically meaningful form of the root-mean-square deviation of atoms comprising the protein. We find that the fitting parameters (which correspond to physical properties of the protein) fluctuate initially but then stabilize for increased simulation time, independently of the simulation duration or sampling frequency. We define the end of the equilibration stage—and thus the equilibration time—as the point in the simulation when these parameters attain constant values. 
Compared to existing methods, our approach provides an objective identification of the time at which the simulated biomolecule has entered an energetic basin. For the representative protein considered, bovine pancreatic trypsin inhibitor, existing methods indicate a range of 0.2-10 ns of simulation until a local minimum is attained. Our approach identifies a substantially narrower range of 4.5-5.5 ns, which will lead to a much more objective choice of equilibration time.
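The stretched-exponential approximation to the RMSD can be fit to simulation output with standard least squares; in the protocol above, such fits would be repeated over increasing simulation times and equilibration declared once the fit parameters stop drifting. A sketch with synthetic data (one plausible functional form, A(1 − exp(−(t/τ)^β)), with illustrative values; not necessarily the paper's exact expression):

```python
import numpy as np
from scipy.optimize import curve_fit

def rmsd_model(t, A, tau, beta):
    """Stretched-exponential approach of the RMSD to its plateau A,
    with relaxation time tau and stretching exponent beta."""
    return A * (1.0 - np.exp(-(t / tau) ** beta))

# Synthetic "trajectory" RMSD: known parameters plus small noise.
rng = np.random.default_rng(7)
t = np.linspace(0.01, 10.0, 500)                       # time in ns
rmsd = rmsd_model(t, 1.8, 1.2, 0.7) + 0.02 * rng.standard_normal(t.size)

# Nonlinear least-squares fit, constrained to positive parameters.
popt, _ = curve_fit(rmsd_model, t, rmsd, p0=(1.0, 1.0, 1.0),
                    bounds=(1e-6, np.inf))
A_fit, tau_fit, beta_fit = popt
```

Refitting on windows [0, T] for growing T and watching (A, τ, β) stabilize is the analog of the paper's criterion for the equilibration time.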
Development of Models for High Precision Simulation of the Space Mission Microscope
NASA Astrophysics Data System (ADS)
Bremer, Stefanie; List, Meike; Selig, Hanns; Lämmerzahl, Claus
MICROSCOPE is a French space mission for testing the Weak Equivalence Principle (WEP). The mission goal is the determination of the Eötvös parameter with an accuracy of 10-15. This will be achieved by means of two high-precision capacitive differential accelerometers, that are built by the French institute ONERA. At the German institute ZARM drop tower tests are carried out to verify the payload performance. Additionally, the mission data evaluation is prepared in close cooperation with the French partners CNES, ONERA and OCA. Therefore a comprehensive simulation of the real system including the science signal and all error sources is built for the development and testing of data reduction and data analysis algorithms to extract the WEP violation signal. Currently, the High Performance Satellite Dynamics Simulator (HPS), a cooperation project of ZARM and the DLR Institute of Space Systems, is adapted to the MICROSCOPE mission for the simulation of test mass and satellite dynamics. Models of environmental disturbances like solar radiation pressure are considered, too. Furthermore detailed modeling of the on-board capacitive sensors is done.
Damage Progression in Bolted Composites
NASA Technical Reports Server (NTRS)
Minnetyan, Levon; Chamis, Christos C.; Gotsis, Pascal K.
1998-01-01
Structural durability, damage tolerance, and progressive fracture characteristics of bolted graphite/epoxy composite laminates are evaluated via computational simulation. Constituent material properties and stress and strain limits are scaled up to the structure level to evaluate the overall damage and fracture propagation for bolted composites. Single and double bolted composite specimens with various widths and bolt spacings are evaluated. The effect of bolt spacing is investigated with regard to the structural durability of a bolted joint. Damage initiation, growth, accumulation, and propagation to fracture are included in the simulations. Results show the damage progression sequence and structural fracture resistance during different degradation stages. A procedure is outlined for the use of computational simulation data in the assessment of damage tolerance, determination of sensitive parameters affecting fracture, and interpretation of experimental results with insight for design decisions.
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.; Brucker, G. J.; Calvel, P.; Baiget, A.; Peyrotte, C.; Gaillard, R.
1992-01-01
The transport, energy loss, and charge production of heavy ions in the sensitive regions of IRF 150 power MOSFETs are described. The dependence and variation of transport parameters with ion type and energy relative to the requirements for single event burnout in this part type are discussed. Test data taken with this power MOSFET are used together with analyses by means of a computer code of the ion energy loss and charge production in the device to establish criteria for burnout and parameters for space predictions. These parameters are then used in an application to predict burnout rates in a geostationary orbit for power converters operating in a dynamic mode. Comparisons of rates for different geometries in simulating SEU (single event upset) sensitive volumes are presented.
Redshift Space Distortion on the Small Scale Clustering of Structure
NASA Astrophysics Data System (ADS)
Park, Hyunbae; Sabiu, Cristiano; Li, Xiao-dong; Park, Changbom; Kim, Juhan
2018-01-01
The positions of galaxies in comoving Cartesian space vary under different cosmological parameter choices, inducing a redshift-dependent scaling in the galaxy distribution. The shape of the two-point correlation function of galaxies exhibits a significant redshift evolution when the galaxy sample is analyzed under a cosmology differing from the true, simulated one. In our previous works, we made use of this geometrical distortion to constrain the values of cosmological parameters governing the expansion history of the universe. The current work continues that strategy of constraining cosmological parameters using redshift-invariant physical quantities. We now aim to understand the redshift evolution of the full shape of the small-scale, anisotropic galaxy clustering and to give a firmer theoretical footing to our previous works.
Parameter Space of the Columbia River Estuarine Turbidity Maxima
NASA Astrophysics Data System (ADS)
McNeil, C. L.; Shcherbina, A.; Lopez, J.; Karna, T.; Baptista, A. M.; Crump, B. C.; Sanford, T. B.
2016-12-01
We present observations of estuarine turbidity maxima (ETM) in the North Channel of the Columbia River estuary (OR and WA, USA) covering different river discharge and flood tide conditions. Measurements were made using optical backscattering sensors on two REMUS-100 autonomous underwater vehicles (AUVs) during spring 2012, summer 2013, and fall 2012. Although significant short-term variability in AUV-measured optical backscatter was observed, some clustering of the data occurs around the estuarine regimes defined by a mixing parameter and a freshwater Froude number (Geyer & MacCready [2014]). Similar clustering is observed in long-term time series of turbidity from the SATURN observatory. We will use available measurements and numerical model simulations of suspended sediment to further explore the variability of suspended sediment dynamics within a framework of estuarine parameter space.
Performance analysis of wideband data and television channels. [space shuttle communications
NASA Technical Reports Server (NTRS)
Geist, J. M.
1975-01-01
Several aspects of space shuttle communications are discussed, including the return link (shuttle-to-ground) relayed through a satellite repeater (TDRS). The repeater exhibits nonlinear amplification and an amplitude-dependent phase shift. Models were developed for various link configurations, and computer simulation programs based on these models are described. Certain analytical results on system performance were also obtained. For the system parameters assumed, the results indicate approximately 1 dB degradation relative to a link employing a linear repeater. While this degradation is dependent upon the repeater, filter bandwidths, and modulation parameters used, the programs can accommodate changes to any of these quantities. Thus the programs can be applied to determine the performance with any given set of parameters, or used as an aid in link design.
Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-01-01
Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. 
In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation. PMID:26150807
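The first estimation step above, fitting a priori unknown transition rates of a state-based model by simulated annealing, can be sketched on a minimal two-route clearance model (the model, the rates, and the cooling schedule are illustrative assumptions, not the paper's whole-blood SBM):

```python
import numpy as np

rng = np.random.default_rng(3)

def model(k, t):
    """Minimal state-based sketch: pathogens are cleared through two
    competing first-order routes with rates k[0] and k[1]. Returns the
    surviving fraction and the fraction cleared via route 1 at times t."""
    ktot = k[0] + k[1]
    alive = np.exp(-ktot * t)
    via1 = k[0] / ktot * (1.0 - alive)
    return np.column_stack([alive, via1])

# Synthetic "experimental" kinetics generated with known rates.
t_obs = np.linspace(0.0, 4.0, 20)
k_true = np.array([0.8, 0.3])
data = model(k_true, t_obs)

def cost(k):
    """Sum of squared deviations between model and data."""
    return np.sum((model(k, t_obs) - data) ** 2)

def simulated_annealing(n_iter=20000, T0=1.0):
    """Global rate estimation: random-walk proposals (reflected at zero to
    keep rates positive) accepted by the Metropolis criterion under a
    linearly cooled temperature; the best-seen point is returned."""
    k = np.array([0.5, 0.5])
    c = cost(k)
    best_k, best_c = k.copy(), c
    for i in range(n_iter):
        T = T0 * (1.0 - i / n_iter) + 1e-6
        prop = np.abs(k + 0.05 * rng.standard_normal(2))
        cp = cost(prop)
        if cp < c or rng.random() < np.exp(-(cp - c) / T):
            k, c = prop, cp
            if c < best_c:
                best_k, best_c = k.copy(), c
    return best_k

k_est = simulated_annealing()
```

The estimated rates would then be fixed in the agent-based model, leaving only the spatial migration parameters for the subsequent grid-search step.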
Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-01-01
Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. 
In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation.
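The annealing-based estimation step described above can be sketched minimally. Everything below is invented for illustration: the two-compartment model, the "true" rates, the data points, and the cooling schedule are assumptions, not the paper's SBM.

```python
import math
import random

# Toy two-compartment model standing in for a state-based model (SBM):
# y1' = -k*y1 (pathogen killing), y2' = k*y1 - g*y2. Both rates are
# hypothetical; the closed-form solution serves as the 'model'.
def model(k, g, t):
    y1 = math.exp(-k * t)
    if abs(g - k) < 1e-9:          # guard the degenerate case g == k
        y2 = k * t * math.exp(-k * t)
    else:
        y2 = k / (g - k) * (math.exp(-k * t) - math.exp(-g * t))
    return y1, y2

# Synthetic 'experimental' data generated with known rates.
TRUE_K, TRUE_G = 0.8, 0.3
data = [(t, model(TRUE_K, TRUE_G, t)) for t in (0.0, 0.5, 1.0, 2.0, 4.0)]

def cost(params):
    k, g = params
    return sum((model(k, g, t)[i] - y[i]) ** 2
               for t, y in data for i in (0, 1))

def simulated_annealing(cost, start, steps=20000, t0=1.0):
    random.seed(0)
    current, best = list(start), list(start)
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-9      # linear cooling schedule
        cand = [max(1e-6, p + random.gauss(0.0, 0.05)) for p in current]
        delta = cost(cand) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = cand
            if cost(current) < cost(best):
                best = list(current)
    return best

k_est, g_est = simulated_annealing(cost, [0.1, 0.1])
print(k_est, g_est)   # estimates of (k, g); the true values are (0.8, 0.3)
```

A real application would replace the closed-form model with the SBM's state equations and the synthetic data with the whole-blood assay measurements.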
Interactive model evaluation tool based on IPython notebook
NASA Astrophysics Data System (ADS)
Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet
2015-04-01
In hydrological modelling, some kind of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure, so in the course of the modelling process an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects the two parameters to be visualised, along with an objective function and a time period of interest. Based on this information, a two-dimensional parameter response surface is created: a scatter plot of the parameter combinations, with a color scale corresponding to the goodness of fit of each combination. Finally, a slider is available to change the color mapping of the points.
The slider provides a threshold that excludes non-behavioural parameter sets, so that the color scale is applied only to the remaining sets. By interactively changing the settings and interpreting the graph, the user gains insight into the structural behaviour of the model. Moreover, a more deliberate choice of objective function can be made, and periods of high information content can be identified. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community, illustrating the power of the IPython notebook as a development environment for scientific computing (Shen, 2014).
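The threshold-and-color-scale mechanic can be sketched without the notebook machinery. The exponential-recession toy model, the parameter ranges, and the threshold value below are all invented stand-ins for a hydrological model and its batch of simulations.

```python
import math
import random

random.seed(1)

# Invented toy 'model': discharge q(t) = a * exp(-b * t); the 'observed'
# series comes from a = 2.0, b = 0.5.
times = [0.1 * i for i in range(50)]

def simulate(a, b):
    return [a * math.exp(-b * t) for t in times]

observed = simulate(2.0, 0.5)

# A batch of pre-computed simulations covering the parameter space.
param_sets = [(random.uniform(0.5, 4.0), random.uniform(0.1, 1.0))
              for _ in range(200)]
runs = [simulate(a, b) for a, b in param_sets]

def rmse(sim, obs, t0=0, t1=None):
    """Objective function evaluated over a selected time window."""
    s, o = sim[t0:t1], obs[t0:t1]
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(s, o)) / len(s))

# The 'slider': a threshold splitting behavioural from non-behavioural
# sets; only the behavioural points would receive the color scale.
threshold = 0.3
scores = [rmse(sim, observed) for sim in runs]
behavioural = [p for p, s in zip(param_sets, scores) if s <= threshold]
print(len(behavioural), "of", len(param_sets), "parameter sets are behavioural")
```

In the notebook tool described above, the threshold would be bound to an interactive slider widget and the (a, b) points scatter-plotted with the score as color; here the split is simply printed.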
NASA Technical Reports Server (NTRS)
Witzberger, Kevin (Inventor); Hojnicki, Jeffery (Inventor); Manzella, David (Inventor)
2016-01-01
Modeling and control software that integrates the complexities of solar array models, a space environment, and an electric propulsion system into a rigid-body vehicle simulation and control model is provided. A rigid-body vehicle simulation of a solar electric propulsion (SEP) vehicle may be created using at least one solar array model, at least one model of a space environment, and at least one model of a SEP propulsion system. Power availability and thrust profiles may be determined based on the rigid-body vehicle simulation as the SEP vehicle transitions from a low Earth orbit (LEO) to a higher orbit or trajectory. The power availability and thrust profiles may be displayed so that a user can apply them to determine design parameters for an SEP vehicle mission.
Brightness analysis of an electron beam with a complex profile
NASA Astrophysics Data System (ADS)
Maesaka, Hirokazu; Hara, Toru; Togawa, Kazuaki; Inagaki, Takahiro; Tanaka, Hitoshi
2018-05-01
We propose a novel analysis method to obtain the core bright part of an electron beam with a complex phase-space profile. This method is beneficial for evaluating simulation data of a linear accelerator (linac), such as an x-ray free-electron laser (XFEL) machine, since the phase-space distribution of a linac electron beam is not simple compared to a Gaussian beam in a synchrotron. In this analysis, the brightness of undulator radiation is calculated and the core of the electron beam is determined by maximizing the brightness. We successfully extracted core electrons from a complex beam profile of XFEL simulation data that could not be expressed by a set of slice parameters. FEL simulations showed that the FEL intensity was largely preserved even after extracting the core part. Consequently, the FEL performance can be estimated by this analysis without time-consuming FEL simulations.
Multiscale stochastic simulations for tensile testing of nanotube-based macroscopic cables.
Pugno, Nicola M; Bosia, Federico; Carpinteri, Alberto
2008-08-01
Thousands of multiscale stochastic simulations are carried out in order to perform the first in-silico tensile tests of carbon nanotube (CNT)-based macroscopic cables with varying length. The longest treated cable is the space-elevator megacable but more realistic shorter cables are also considered in this bottom-up investigation. Different sizes, shapes, and concentrations of defects are simulated, resulting in cable macrostrengths not larger than approximately 10 GPa, which is much smaller than the theoretical nanotube strength (approximately 100 GPa). No best-fit parameters are present in the multiscale simulations: the input at level 1 is directly estimated from nanotensile tests of CNTs, whereas its output is considered as the input for the level 2, and so on up to level 5, corresponding to the megacable. Thus, five hierarchical levels are used to span lengths from that of a single nanotube (approximately 100 nm) to that of the space-elevator megacable (approximately 100 Mm).
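The bottom-up chaining described above, where the output distribution of each level is the input of the next, can be sketched as follows. The bundle rule and all numbers are invented for illustration and are not the paper's model; only the level-to-level chaining structure is the point.

```python
import random

random.seed(6)

# Illustrative hierarchical chaining: the strength distribution simulated
# at level k becomes the input for level k+1. Level-1 strengths are drawn
# around a nominal 100 GPa nanotube strength with random defect penalties.
def level_up(strengths, n_parallel=10, n_samples=500):
    """Invented bundle rule: a bundle's strength is its mean capacity,
    penalised by its weakest element (a crude defect-failure stand-in)."""
    out = []
    for _ in range(n_samples):
        bundle = random.sample(strengths, n_parallel)
        out.append(0.5 * (sum(bundle) / n_parallel + min(bundle)))
    return out

level = [100.0 * random.uniform(0.3, 1.0) for _ in range(500)]  # level 1
for _ in range(4):                                              # levels 2-5
    level = level_up(level)

mean_strength = sum(level) / len(level)
print(f"illustrative level-5 strength: {mean_strength:.1f} GPa")
```

The paper's point survives even in this toy: each aggregation level degrades the strength below the single-element theoretical value.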
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowder, Jeff; Cornish, Neil J.; Reddinger, J. Lucas
This work presents the first application of the method of genetic algorithms (GAs) to data analysis for the Laser Interferometer Space Antenna (LISA). In the low frequency regime of the LISA band there are expected to be tens of thousands of galactic binary systems that will be emitting gravitational waves detectable by LISA. The challenge of parameter extraction of such a large number of sources in the LISA data stream requires a search method that can efficiently explore the large parameter spaces involved. As signals of many of these sources will overlap, a global search method is desired. GAs represent such a global search method for parameter extraction of multiple overlapping sources in the LISA data stream. We find that GAs are able to correctly extract source parameters for overlapping sources. Several optimizations of a basic GA are presented with results derived from applications of the GA searches to simulated LISA data.
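A basic GA of the kind optimized in this work can be sketched on a much simpler stand-in problem: recovering the frequency and amplitude of a single sinusoid from a noiseless "data stream". The signal model, population sizes, and operator rates below are all assumptions for illustration.

```python
import math
import random

random.seed(2)

# Invented stand-in for source parameter extraction: recover the
# frequency and amplitude of a sinusoid from a 'data stream'.
times = [i / 100 for i in range(200)]

def signal(freq, amp):
    return [amp * math.sin(2 * math.pi * freq * t) for t in times]

data = signal(3.0, 1.5)

def fitness(ind):
    f, a = ind
    return -sum((x - y) ** 2 for x, y in zip(signal(f, a), data))

def genetic_search(pop_size=60, generations=80):
    pop = [[random.uniform(0.5, 5.0), random.uniform(0.5, 3.0)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]             # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(p1, p2)]  # crossover
            if random.random() < 0.3:                              # mutation
                i = random.randrange(len(child))
                child[i] += random.gauss(0.0, 0.1)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search()
print(best)
```

The LISA problem replaces the two-parameter sinusoid with many overlapping multi-parameter source waveforms, but the select/crossover/mutate loop has the same shape.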
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, Edward K.; Cornish, Neil J.
Massive black hole binaries are key targets for the space based gravitational wave Laser Interferometer Space Antenna (LISA). Several studies have investigated how LISA observations could be used to constrain the parameters of these systems. Until recently, most of these studies have ignored the higher harmonic corrections to the waveforms. Here we analyze the effects of the higher harmonics in more detail by performing extensive Monte Carlo simulations. We pay particular attention to how the higher harmonics impact parameter correlations, and show that the additional harmonics help mitigate the impact of having two laser links fail, by allowing for an instantaneous measurement of the gravitational wave polarization with a single interferometer channel. By looking at parameter correlations we are able to explain why certain mass ratios provide dramatic improvements in certain parameter estimations, and illustrate how the improved polarization measurement improves the prospects for single interferometer operation.
NASA Astrophysics Data System (ADS)
Pasquato, Mario; Chung, Chul
2016-05-01
Context. Machine-learning (ML) solves problems by learning patterns from data with limited or no human guidance. In astronomy, ML is mainly applied to large observational datasets, e.g. for morphological galaxy classification. Aims: We apply ML to gravitational N-body simulations of star clusters that are either formed by merging two progenitors or evolved in isolation, planning to later identify globular clusters (GCs) that may have a history of merging from observational data. Methods: We create mock-observations from simulated GCs, from which we measure a set of parameters (also called features in the machine-learning field). After carrying out dimensionality reduction on the feature space, the resulting datapoints are fed into various classification algorithms. Using repeated random subsampling validation, we check whether the groups identified by the algorithms correspond to the underlying physical distinction between mergers and monolithically evolved simulations. Results: The three algorithms we considered (C5.0 trees, k-nearest neighbour, and support-vector machines) all achieve a test misclassification rate of about 10% without parameter tuning, with support-vector machines slightly outperforming the others. The first principal component of feature space correlates with cluster concentration. If we exclude it from the regression, the performance of the algorithms is only slightly reduced.
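One of the three classifiers named in the results, k-nearest neighbour, can be sketched from scratch on invented features. The two synthetic clusters below are stand-ins for the merger/monolithic classes; cluster centres, noise level, and split sizes are all assumptions.

```python
import math
import random

random.seed(3)

# Invented stand-in for the mock-observation features: class 0
# ('monolithic') and class 1 ('merger') clusters in a 3-feature space.
def sample(label):
    centre = (0.0, 0.0, 0.0) if label == 0 else (2.0, 2.0, 2.0)
    return [c + random.gauss(0.0, 0.6) for c in centre], label

data = [sample(i % 2) for i in range(200)]
train, test = data[:150], data[150:]

def knn_predict(x, train, k=5):
    """k-nearest-neighbour majority vote."""
    nearest = sorted(train, key=lambda fy: math.dist(x, fy[0]))[:k]
    votes = [y for _, y in nearest]
    return max(set(votes), key=votes.count)

# The paper uses repeated random subsampling validation (many re-splits);
# a single holdout split is shown here for brevity.
errors = sum(knn_predict(x, train) != y for x, y in test)
print("misclassification rate:", errors / len(test))
```

The well-separated synthetic clusters make this toy much easier than the real feature space, where a roughly 10% misclassification rate is reported.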
Zheng, Lianqing; Chen, Mengen; Yang, Wei
2009-06-21
To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures. As usually observed in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction, and these residual free energy barriers can greatly degrade sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimension subset of the target system; then the "Hamiltonian lagging" problem, in which necessary structural relaxation falls behind the move of the collective variable, is likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW-based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.
NASA Technical Reports Server (NTRS)
Kavaya, Michael J.
2008-01-01
Over 20 years of investigation by NASA and NOAA scientists and Doppler lidar technologists into a global wind profiling mission from Earth orbit have led to the current favored concept of an instrument with both coherent- and direct-detection pulsed Doppler lidars (i.e., a hybrid Doppler lidar) and a step-stare beam scanning approach covering several azimuth angles with a fixed nadir angle. The nominal lidar wavelengths are 2 microns for coherent detection, and 0.355 microns for direct detection. The two agencies have also generated two sets of sophisticated wind measurement requirements for a space mission: science demonstration requirements and operational requirements. The requirements contain the necessary details to permit mission design and optimization by lidar technologists. Simulations have been developed that connect the science requirements to the wind measurement requirements, and that connect the wind measurement requirements to the Doppler lidar parameters. The simulations also permit trade studies within the multi-parameter space. These tools, combined with knowledge of the state of the Doppler lidar technology, have been used to conduct space instrument and mission design activities to validate the feasibility of the chosen mission and lidar parameters. Recently, the NRC Earth Science Decadal Survey recommended the wind mission to NASA as one of 15 recommended missions. A full description of the wind measurement product from these notional missions and the possible trades available are presented in this paper.
26th Space Simulation Conference Proceedings. Environmental Testing: The Path Forward
NASA Technical Reports Server (NTRS)
Packard, Edward A.
2010-01-01
Topics covered include: A Multifunctional Space Environment Simulation Facility for Accelerated Spacecraft Materials Testing; Exposure of Spacecraft Surface Coatings in a Simulated GEO Radiation Environment; Gravity-Offloading System for Large-Displacement Ground Testing of Spacecraft Mechanisms; Microscopic Shutters Controlled by cRIO in Sounding Rocket; Application of a Physics-Based Stabilization Criterion to Flight System Thermal Testing; Upgrade of a Thermal Vacuum Chamber for 20 Kelvin Operations; A New Approach to Improve the Uniformity of Solar Simulator; A Perfect Space Simulation Storm; A Planetary Environmental Simulator/Test Facility; Collimation Mirror Segment Refurbishment inside ESA's Large Space; Space Simulation of the CBERS 3 and 4 Satellite Thermal Model in the New Brazilian 6x8m Thermal Vacuum Chamber; The Certification of Environmental Chambers for Testing Flight Hardware; Space Systems Environmental Test Facility Database (SSETFD), Website Development Status; Wallops Flight Facility: Current and Future Test Capabilities for Suborbital and Orbital Projects; Force Limited Vibration Testing of JWST NIRSpec Instrument Using Strain Gages; Investigation of Acoustic Field Uniformity in Direct Field Acoustic Testing; Recent Developments in Direct Field Acoustic Testing; Assembly, Integration and Test Centre in Malaysia: Integration between Building Construction Works and Equipment Installation; Complex Ground Support Equipment for Satellite Thermal Vacuum Test; Effect of Charging Electron Exposure on 1064nm Transmission through Bare Sapphire Optics and SiO2 over HfO2 AR-Coated Sapphire Optics; Environmental Testing Activities and Capabilities for Turkish Space Industry; Integrated Circuit Reliability Simulation in Space Environments; Micrometeoroid Impacts and Optical Scatter in Space Environment; Overcoming Unintended Consequences of Ambient Pressure Thermal Cycling Environmental Tests; Performance and Functionality Improvements to Next Generation
Thermal Vacuum Control System; Robotic Lunar Lander Development Project: Three-Dimensional Dynamic Stability Testing and Analysis; Thermal Physical Properties of Thermal Coatings for Spacecraft in Wide Range of Environmental Conditions: Experimental and Theoretical Study; Molecular Contamination Generated in Thermal Vacuum Chambers; Preventing Cross Contamination of Hardware in Thermal Vacuum Chambers; Towards Validation of Particulate Transport Code; Updated Trends in Materials' Outgassing Technology; Electrical Power and Data Acquisition Setup for the CBERS 3 and 4 Satellite TBT; Method of Obtaining High Resolution Intrinsic Wire Boom Damping Parameters for Multi-Body Dynamics Simulations; and Thermal Vacuum Testing with Scalable Software Developed In-House.
Conceptual Design of Tail-Research EXperiment (T-REX) on Space Plasma Environment Research Facility
NASA Astrophysics Data System (ADS)
Xiao, Qingmei; Wang, Xiaogang; E, Peng; Shen, Chao; Wang, Zhibin; Mao, Aohua; Xiao, Chijie; Ding, Weixing; Ji, Hantao; Ren, Yang
2016-10-01
Space Environment Simulation Research Infrastructure (SESRI), a scientific project for a major national facility for fundamental research, has recently been launched at Harbin Institute of Technology (HIT). The Space Plasma Environment Research Facility (SPERF), for simulation of the space plasma environment, is one of the components of SESRI. It is designed to investigate fundamental issues in the space plasma environment, such as energetic-particle transport and wave-particle interactions in the magnetosphere, magnetic reconnection at the magnetopause and magnetotail, etc. The Tail-Research EXperiment (T-REX) is the part of the SPERF devoted to laboratory studies of space physics relevant to tail reconnection and the dipolarization process. T-REX is designed to carry out two kinds of experiments: tail plasmoid formation during magnetic reconnection, and magnetohydrodynamic waves excited by a high-speed plasma jet. In this presentation, the scientific goals and experimental plans for T-REX, together with the means applied to generate plasma with the desired parameters, are reviewed. Two typical scenarios of T-REX, with operations of plasma sources and various magnetic configurations to study specific physical processes in space plasmas, will also be presented.
NASA Astrophysics Data System (ADS)
Karmalkar, A.; Sexton, D.; Murphy, J.
2017-12-01
We present exploratory work towards developing an efficient strategy to select variants of a state-of-the-art but expensive climate model suitable for climate projection studies. The strategy combines information from a set of idealized perturbed parameter ensemble (PPE) and CMIP5 multi-model ensemble (MME) experiments, and uses two criteria as the basis for selecting model variants for a PPE suitable for future projections: (a) acceptable model performance at two different timescales, and (b) diversity in model response to climate change. We demonstrate that there is a strong relationship between model errors at weather and climate timescales for a variety of key variables. This relationship is used to filter out parts of parameter space that do not give credible simulations of historical climate, while minimizing the impact on ranges in forcings and feedbacks that drive model responses to climate change. We use statistical emulation to explore the parameter space thoroughly, and demonstrate that about 90% of it can be filtered out without affecting diversity in global-scale climate change responses. This leads to the identification of plausible parts of parameter space from which model variants can be selected for projection studies.
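The filtering idea, where a cheap emulator trained on a small design of expensive runs screens the rest of parameter space, can be sketched in one dimension. The "expensive" error function, the design, the nearest-neighbour emulator, and the threshold below are all invented; the paper uses a proper statistical emulator over many parameters.

```python
import random

random.seed(7)

# Sketch of emulator-based filtering with a single parameter p in [0, 1].
def expensive_error(p):
    """Hypothetical historical-climate error metric of a model variant."""
    return (p - 0.3) ** 2 * 10.0

design = [i / 20 for i in range(21)]                 # 21 'expensive' runs
runs = {p: expensive_error(p) for p in design}

def emulate(p):
    """Cheapest possible emulator: the nearest design point's error."""
    q = min(runs, key=lambda d: abs(d - p))
    return runs[q]

# Scan the parameter space with the cheap emulator and filter out parts
# that cannot give credible simulations of the historical climate.
candidates = [random.random() for _ in range(10000)]
plausible = [p for p in candidates if emulate(p) < 0.5]
frac_removed = 1 - len(plausible) / len(candidates)
print(f"{frac_removed:.0%} of parameter space filtered out")
```

The essential economics are visible even here: 21 expensive evaluations screen 10,000 candidate model variants.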
Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD
NASA Astrophysics Data System (ADS)
Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.
2017-12-01
We present a three-dimensional (3D) leapfrog Alternating Direction Implicit Finite-Difference Time-Domain (leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method differs from both the traditional explicit FDTD and the ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so a longer time step can be used. Compared with ADI-FDTD, the number of update equations is reduced from 12 to 6, making the leapfrog ADI-FDTD method easier to apply in general simulations. First, we determine initial conditions, adopted from the existing method presented by Wang and Tripp (1993). Second, we derive the Maxwell equations in a new finite-difference form for the leapfrog ADI-FDTD method; the purpose is to eliminate the sub-time step while retaining unconditional stability. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition into the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters, since different parameter choices affect the absorbing ability; suitable parameters were found after many numerical experiments. Fourth, we compare the response with a 1-D numerical result for a homogeneous half-space to verify the correctness of our algorithm. For a model containing 107*107*53 grid points with a conductivity of 0.05 S/m, the results show that leapfrog ADI-FDTD needs less simulation time and computer storage space than ADI-FDTD: computation time decreases by nearly a factor of four, and memory occupation decreases by about 32.53%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and is more precise than explicit FDTD at late times.
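The CFL bound referred to above can be made concrete. The cell size below is an assumed illustrative value, not taken from the paper; the formula is the standard 3D stability limit for explicit FDTD.

```python
import math

# Explicit FDTD is stable only below the Courant-Friedrichs-Lewy limit;
# ADI-type schemes are unconditionally stable and may exceed it.
c = 2.998e8               # speed of light in vacuum, m/s
dx = dy = dz = 25.0       # assumed cell size in metres (illustrative)

dt_cfl = 1.0 / (c * math.sqrt(1 / dx**2 + 1 / dy**2 + 1 / dz**2))
print(f"max explicit-FDTD time step: {dt_cfl:.3e} s")

# A leapfrog ADI-FDTD step may be several times longer; e.g. a step of
# four CFL intervals would mirror the roughly 4x runtime gain reported.
dt_adi = 4.0 * dt_cfl
```

This is why removing the CFL restriction matters for late-time HTEM responses: the simulated time window is long relative to the explicit step size.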
CAPS Simulation Environment Development
NASA Technical Reports Server (NTRS)
Murphy, Douglas G.; Hoffman, James A.
2005-01-01
The final design for an effective Comet/Asteroid Protection System (CAPS) will likely come after a number of competing designs have been simulated and evaluated. Because of the large number of design parameters involved in a system capable of detecting an object, accurately determining its orbit, and diverting the impact threat, a comprehensive simulation environment will be an extremely valuable tool for the CAPS designers. A successful simulation/design tool will aid the user in identifying the critical parameters in the system and eventually allow for automatic optimization of the design once the relationships of the key parameters are understood. A CAPS configuration will consist of space-based detectors whose purpose is to scan the celestial sphere in search of objects likely to make a close approach to Earth and to determine with the greatest possible accuracy the orbits of those objects. Other components of a CAPS configuration may include systems for modifying the orbits of approaching objects, either for the purpose of preventing a collision or for positioning the object into an orbit where it can be studied or used as a mineral resource. The Synergistic Engineering Environment (SEE) is a space-systems design, evaluation, and visualization software tool being leveraged to simulate these aspects of the CAPS study. The long-term goal of the SEE is to provide capabilities to allow the user to build and compare various CAPS designs by running end-to-end simulations that encompass the scanning phase, the orbit determination phase, and the orbit modification phase of a given scenario. Herein, a brief description of the expected simulation phases is provided, the current status and available features of the SEE software system are reported, and examples are shown of how the system is used to build and evaluate a CAPS detection design. Conclusions and the roadmap for future development of the SEE are also presented.
Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Alder, J.; van Griensven, A.; Meixner, T.
2003-12-01
Individuals applying hydrologic models need quick, easy-to-use visualization tools to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web, and the increasing power of modern computers to provide an online toolbox for quick and easy visualization of model results. This visualization interface allows for the interpretation and analysis of Monte-Carlo and batch model simulation results. A given project will often generate several thousand or even hundreds of thousands of simulations, and this large number creates a challenge for post-simulation analysis. IHM addresses this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g. sum of squared errors, sum of absolute differences, etc.), a top-ten-simulations table and graphs, graphs of an individual simulation using time-step data, objective-based dotty plots, threshold-based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger), and 2D error-surface graphs of the parameter space. IHM is suitable for anything from the simplest bucket model to the largest set of Monte-Carlo simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility: they can be anywhere in the world using any operating system. IHM can save the time and money otherwise spent producing graphs, conducting analyses that may not be informative, or purchasing expensive proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, suitable for novice to expert hydrologic modelers.
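The threshold-based CDF graphs mentioned above (regional sensitivity analysis in the style of Spear and Hornberger) can be sketched on invented Monte-Carlo output. The one-parameter model, the noise level, and the 20% behavioural cut are all assumptions for illustration.

```python
import random

random.seed(4)

# Invented batch Monte-Carlo output: one parameter value and one
# objective value per simulation; the objective is minimised near
# p = 0.7, plus observation noise.
params = [random.uniform(0.0, 1.0) for _ in range(1000)]
objective = [(p - 0.7) ** 2 + random.gauss(0.0, 0.01) for p in params]

# Regional sensitivity analysis: split the parameter sets into
# behavioural / non-behavioural, then compare the two CDFs of p.
threshold = sorted(objective)[200]        # best 20% count as behavioural
behav = sorted(p for p, o in zip(params, objective) if o <= threshold)
non = sorted(p for p, o in zip(params, objective) if o > threshold)

def cdf(xs, x):
    return sum(v <= x for v in xs) / len(xs)

# Maximum separation between the two CDFs: large values flag a parameter
# to which the objective is sensitive.
grid = [i / 100 for i in range(101)]
d_max = max(abs(cdf(behav, x) - cdf(non, x)) for x in grid)
print(f"max CDF separation: {d_max:.2f}")
```

In the IHM the two CDF curves would be plotted for each parameter; here only their maximum separation is computed.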
NASA Astrophysics Data System (ADS)
Jiménez-Forteza, Xisco; Keitel, David; Husa, Sascha; Hannam, Mark; Khan, Sebastian; Pürrer, Michael
2017-03-01
Numerical relativity is an essential tool in studying the coalescence of binary black holes (BBHs). It is still computationally prohibitive to cover the BBH parameter space exhaustively, making phenomenological fitting formulas for BBH waveforms and final-state properties important for practical applications. We describe a general hierarchical bottom-up fitting methodology to design and calibrate fits to numerical relativity simulations for the three-dimensional parameter space of quasicircular nonprecessing merging BBHs, spanned by mass ratio and by the individual spin components orthogonal to the orbital plane. Particular attention is paid to incorporating the extreme-mass-ratio limit and to the subdominant unequal-spin effects. As an illustration of the method, we provide two applications, to the final spin and final mass (or equivalently: radiated energy) of the remnant black hole. Fitting to 427 numerical relativity simulations, we obtain results broadly consistent with previously published fits, but improving in overall accuracy and particularly in the approach to extremal limits and for unequal-spin configurations. We also discuss the importance of data quality studies when combining simulations from diverse sources, how detailed error budgets will be necessary for further improvements of these already highly accurate fits, and how this first detailed study of unequal-spin effects helps in choosing the most informative parameters for future numerical relativity runs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Shijun, E-mail: sj-xie@163.com; State Key Laboratory of Control and Simulation of Power System and Generation Equipment, Department of Electrical Engineering, Tsinghua University, Beijing 100084; Zeng, Rong
2015-08-15
Natural lightning flashes are stochastic and uncontrollable, and thus, it is difficult to observe the formation process of a downward negative stepped leader (NSL) directly and in detail. This situation has led to some dispute over the actual NSL formation mechanism, and thus has hindered improvements in the lightning shielding analysis model. In this paper, on the basis of controllable long air gap discharge experiments, the formation conditions required for NSLs in negative flashes have been studied. First, a series of simulation experiments on varying scales were designed and carried out. The NSL formation processes were observed, and several of the characteristic process parameters, including the scale, the propagation velocity, and the dark period, were obtained. By comparing the acquired formation processes and the characteristic parameters with those in natural lightning flashes, the similarity between the NSLs in the simulation experiments and those in natural flashes was proved. Then, based on the local thermodynamic equation and the space charge estimation method, the required NSL formation conditions were deduced, and the space background electric field (E{sub b}) was proposed as the primary parameter for NSL formation. Finally, the critical value of E{sub b} required for the formation of NSLs in natural flashes was determined to be approximately 75 kV/m by extrapolation of the results of the simulation experiments.
Wind-tunnel based definition of the AFE aerothermodynamic environment. [Aeroassist Flight Experiment
NASA Technical Reports Server (NTRS)
Miller, Charles G.; Wells, W. L.
1992-01-01
The Aeroassist Flight Experiment (AFE), scheduled to be performed in 1994, will serve as a precursor for aeroassisted space transfer vehicles (ASTV's) and is representative of entry concepts being considered for missions to Mars. Rationale for the AFE is reviewed briefly, as are the various experiments carried aboard the vehicle. The approach used to determine hypersonic aerodynamic and aerothermodynamic characteristics over a wide range of simulation parameters in ground-based facilities is presented. Facilities, instrumentation and test procedures employed in the establishment of the data base are discussed. Measurements illustrating the effects of hypersonic simulation parameters, particularly normal-shock density ratio (an important parameter for hypersonic blunt bodies), and attitude on aerodynamic and aerothermodynamic characteristics are presented, and predictions from computational fluid dynamic (CFD) computer codes are compared with measurements.
Reliability of analog quantum simulation
NASA Astrophysics Data System (ADS)
Sarovar, Mohan; Zhang, Jun; Zeng, Lishan
Analog quantum simulators (AQS) will likely be the first nontrivial application of quantum technology for predictive simulation. However, there remain questions regarding the degree of confidence that can be placed in the results of AQS since they do not naturally incorporate error correction. We formalize the notion of AQS reliability to calibration errors by determining sensitivity of AQS outputs to underlying parameters, and formulate conditions for robust simulation. Our approach connects to the notion of parameter space compression in statistical physics and naturally reveals the importance of model symmetries in dictating the robust properties. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.
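The sensitivity analysis at the heart of this reliability notion can be sketched with finite differences. The two-parameter "observable" below is invented, with one stiff direction and one sloppy (compressed) direction; it is not an actual AQS output.

```python
import math

# Finite-difference sensitivity of a simulator output to its underlying
# parameters: the invented observable depends strongly on parameter a
# (stiff direction) and weakly on parameter b (sloppy direction).
def output(params):
    a, b = params
    return math.tanh(a) + 0.001 * b

p0 = [0.5, 0.5]
eps = 1e-6
sensitivities = []
for d in range(len(p0)):
    p = list(p0)
    p[d] += eps
    sensitivities.append((output(p) - output(p0)) / eps)

# A robust output is one dominated by stiff directions: calibration
# errors along sloppy (compressed) directions barely change the result.
print(sensitivities)
```

In the parameter-space-compression picture, the eigenvalues of the full sensitivity (Fisher information) matrix play the role of these two derivatives.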
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.
1987-01-01
The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
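The cost-function minimization described above can be illustrated with a toy output-error example. The first-order model, the doublet input, and the parameter grid are invented, and a crude grid search stands in for the maximum likelihood iteration used with real flight data.

```python
# Toy output-error example: first-order dynamics x' = a*x + b*u excited
# by a doublet input; the cost is the sum of squared output errors,
# minimised over a parameter grid.
dt, n = 0.05, 100
u = [1.0 if i < n // 2 else -1.0 for i in range(n)]   # doublet input

def simulate(a, b):
    x, out = 0.0, []
    for i in range(n):
        x += dt * (a * x + b * u[i])                  # Euler integration
        out.append(x)
    return out

measured = simulate(-1.5, 2.0)    # 'flight data' from true a=-1.5, b=2.0

def cost(a, b):
    return sum((x - m) ** 2 for x, m in zip(simulate(a, b), measured))

# Grid search standing in for the maximum likelihood minimisation; a
# real estimator iterates (e.g. modified Newton-Raphson) instead.
grid = [(a / 10, b / 10) for a in range(-30, 0) for b in range(1, 40)]
a_est, b_est = min(grid, key=lambda ab: cost(*ab))
print(a_est, b_est)   # recovers the true parameters (-1.5, 2.0)
```

With noise-free data the cost has a unique zero at the true parameters, which is the graphic bowl-shaped cost surface the paper uses to explain the minimization process.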
NASA Astrophysics Data System (ADS)
Xu, Wenfu; Hu, Zhonghua; Zhang, Yu; Liang, Bin
2017-03-01
After a space robotic system is launched to perform its tasks, its inertia parameters may change due to fuel consumption, hardware reconfiguration, target capture, and so on. For precision control and simulation, it is required to identify these parameters on orbit. This paper proposes an effective method for identifying the complete inertia parameters (including the mass, inertia tensor and center of mass position) of a space robotic system. The key to the method is to identify two types of simple dynamics systems: equivalent single-body and two-body systems. For the former, all of the joints are locked into a designed configuration and the thrusters are used for orbital maneuvering. The objective function for optimization is defined in terms of the acceleration and velocity of the equivalent single body. For the latter, only one joint is unlocked and driven to move along a planned (exciting) trajectory in free-floating mode. The objective function is defined based on the linear and angular momentum equations. Then, the parameter identification problems are transformed into non-linear optimization problems. The Particle Swarm Optimization (PSO) algorithm is applied to determine the optimal parameters, i.e. the complete dynamic parameters of the two equivalent systems. By sequentially unlocking the 1st to nth joints (or unlocking the nth to 1st joints), the mass properties of body 0 to n (or n to 0) are completely identified. For the proposed method, only simple dynamics equations are needed for identification. The excitation motion (orbit maneuvering and joint motion) is also easily realized. Moreover, the method does not require prior knowledge of the mass properties of any body. It is general and practical for identifying a space robotic system on-orbit.
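A minimal PSO of the kind applied above can be sketched on an invented objective. The quadratic below is a stand-in for the momentum-equation residual, minimised at hypothetical "true" values (m = 120.0, r = 0.4); swarm sizes and coefficients are standard textbook choices, not the paper's.

```python
import random

random.seed(5)

# Invented stand-in for the identification objective: residual minimised
# at hypothetical true mass m = 120.0 and centre-of-mass offset r = 0.4.
def objective(x):
    m, r = x
    return (m - 120.0) ** 2 / 100.0 + (r - 0.4) ** 2

def pso(obj, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    xs = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]                 # per-particle best
    gbest = min(pbest, key=obj)                   # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if obj(xs[i]) < obj(pbest[i]):
                pbest[i] = list(xs[i])
                if obj(pbest[i]) < obj(gbest):
                    gbest = list(pbest[i])
    return gbest

m_est, r_est = pso(objective, [(50.0, 300.0), (0.0, 1.0)])
print(m_est, r_est)
```

In the paper's setting the objective would instead evaluate the single-body or two-body dynamics residual for each candidate parameter set; the swarm update loop is unchanged.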
Ultracool dwarf benchmarks with Gaia primaries
NASA Astrophysics Data System (ADS)
Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.
2017-10-01
We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ~24,000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a false alarm probability <10^-4, based on projected separation and expected constraints on common distance, common proper motion and/or common radial velocity. Within this bulk population, we identify smaller target subsets of rarer systems whose collective properties still span the full parameter space of the population, as well as systems containing primary stars that are good age calibrators. Our simulation analysis leads to a series of recommendations for candidate selection and observational follow-up that could identify ~500 diverse Gaia benchmarks. As a test of the veracity of our methodology and simulations, our initial search uses UKIRT Infrared Deep Sky Survey and Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, Radial Velocity Experiment, Large sky Area Multi-Object fibre Spectroscopic Telescope and Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.
Risk Assessment of Bone Fracture During Space Exploration Missions to the Moon and Mars
NASA Technical Reports Server (NTRS)
Lewandowski, Beth E.; Myers, Jerry G.; Nelson, Emily S.; Licatta, Angelo; Griffin, Devon
2007-01-01
The possibility of a traumatic bone fracture in space is a concern due to the observed decrease in astronaut bone mineral density (BMD) during spaceflight and because of the physical demands of the mission. The Bone Fracture Risk Module (BFxRM) was developed to quantify the probability of fracture at the femoral neck and lumbar spine during space exploration missions. The BFxRM is scenario-based, providing predictions for specific activities or events during a particular space mission. The key elements of the BFxRM are the mission parameters, the biomechanical loading models, the bone loss and fracture models and the incidence rate of the activity or event. Uncertainties in the model parameters arise due to variations within the population and unknowns associated with the effects of the space environment. Consequently, parameter distributions were used in Monte Carlo simulations to obtain an estimate of fracture probability under real mission scenarios. The model predicts an increase in the probability of fracture as the mission length increases and fracture is more likely in the higher gravitational field of Mars than on the moon. The resulting probability predictions and sensitivity analyses of the BFxRM can be used as an engineering tool for mission operation and resource planning in order to mitigate the risk of bone fracture in space.
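The Monte Carlo step can be sketched as below: per trial, sample an applied load and a BMD-dependent bone strength, and count exceedances. The distributions and all numbers are illustrative placeholders, not BFxRM values.

```python
import random

random.seed(42)

def fracture_probability(mission_days, n_trials=20000):
    """Toy Monte Carlo: fracture occurs when an applied load exceeds bone
    strength, which declines as BMD is lost over the mission.  All values
    below are illustrative assumptions, not the BFxRM's parameters."""
    frac = 0
    for _ in range(n_trials):
        # assumed ~1% BMD loss per month, with population spread
        bmd_loss = random.gauss(0.01, 0.003) * (mission_days / 30.0)
        strength = random.gauss(9000.0, 900.0) * max(0.0, 1.0 - bmd_loss)  # N
        load = random.gauss(3000.0, 1500.0)  # fall/impact load, N
        frac += load > strength
    return frac / n_trials

p_short = fracture_probability(30)    # short lunar sortie
p_long = fracture_probability(540)    # Mars-class mission duration
```

Even with these toy inputs the model reproduces the qualitative conclusion above: fracture probability grows with mission length as strength erodes.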
Risk Assessment of Bone Fracture During Space Exploration Missions to the Moon and Mars
NASA Technical Reports Server (NTRS)
Lewandowski, Beth E.; Myers, Jerry G.; Nelson, Emily S.; Griffin, Devon
2008-01-01
The possibility of a traumatic bone fracture in space is a concern due to the observed decrease in astronaut bone mineral density (BMD) during spaceflight and because of the physical demands of the mission. The Bone Fracture Risk Module (BFxRM) was developed to quantify the probability of fracture at the femoral neck and lumbar spine during space exploration missions. The BFxRM is scenario-based, providing predictions for specific activities or events during a particular space mission. The key elements of the BFxRM are the mission parameters, the biomechanical loading models, the bone loss and fracture models and the incidence rate of the activity or event. Uncertainties in the model parameters arise due to variations within the population and unknowns associated with the effects of the space environment. Consequently, parameter distributions were used in Monte Carlo simulations to obtain an estimate of fracture probability under real mission scenarios. The model predicts an increase in the probability of fracture as the mission length increases and fracture is more likely in the higher gravitational field of Mars than on the moon. The resulting probability predictions and sensitivity analyses of the BFxRM can be used as an engineering tool for mission operation and resource planning in order to mitigate the risk of bone fracture in space.
Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.
2014-08-01
In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO2 migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while trying to fully explore the input parameter space and quantify the input uncertainty. The CO2 migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO2 module). For computationally demanding simulations with 3D heterogeneity fields, we combined the framework with a scalable version module, eSTOMP, as the forward modeling simulator. We built response curves and response surfaces of model outputs with respect to input parameters, to look at the individual and combined effects, and identify and rank the significance of the input parameters.
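A common space-filling strategy in the same family as those mentioned (quasi-Monte Carlo, probabilistic collocation, adaptive sampling) is Latin hypercube sampling, sketched from scratch below as a generic illustration; it is not the study's actual sampler.

```python
import random

random.seed(7)

def latin_hypercube(n, dim):
    """One sample per equal-probability stratum in each dimension: better
    space-filling than plain Monte Carlo for the same number of
    forward-model runs."""
    cols = []
    for _ in range(dim):
        # one point in each of the n bins of width 1/n, then shuffle
        col = [(i + random.random()) / n for i in range(n)]
        random.shuffle(col)
        cols.append(col)
    return list(zip(*cols))  # n points in the unit hypercube [0, 1)^dim

pts = latin_hypercube(20, 3)
# By construction, each dimension has exactly one point per bin of width 1/20.
bins = sorted(int(p[0] * 20) for p in pts)
```

Each unit-cube point would then be mapped through the inverse CDF of the corresponding input parameter's distribution before a forward run.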
NASA Astrophysics Data System (ADS)
Li, Xin; Hong, Yifeng; Wang, Jinfang; Liu, Yang; Sun, Xun; Li, Mi
2018-01-01
Numerous communication techniques and optical devices have been successfully applied in space optical communication systems, indicating the good portability of such systems. With this portability, the typical coherent demodulation technique of the Costas loop can be easily adopted in a space optical communication system. As one of the components of pointing error, jitter plays an important role in the communication quality of such a system. Here, we obtain the probability density functions (PDF) of different jitter degrees and explain their essential effect on the bit error rate (BER) of a space optical communication system. Also, under the effect of jitter, we investigate the bit error rate of a space coherent optical communication system using a Costas loop with different system parameters: transmission power, divergence angle, receiving diameter, avalanche photodiode (APD) gain, and phase deviation caused by the Costas loop. Through a numerical simulation of this kind of communication system, we demonstrate the relationship between the BER and these system parameters, and present some corresponding methods of system optimization to enhance the communication quality.
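The jitter-averaged BER computation can be sketched as below, assuming an ideal coherent (BPSK-like) detector, a Gaussian far-field beam, and a Rayleigh-distributed radial pointing error. Both the loss model and the parameter values are assumptions for illustration, not the paper's system.

```python
import math

def ber_coherent(snr):
    # Ideal coherent (homodyne BPSK) bit error rate.
    return 0.5 * math.erfc(math.sqrt(snr))

def mean_ber(snr0, theta_div, sigma_jitter, n=2000):
    """Average the BER over a Rayleigh-distributed radial pointing error.
    Received power is assumed to scale as exp(-2 (theta/theta_div)^2)
    (Gaussian beam); quadrature is a deterministic midpoint rule over the
    jitter CDF."""
    total = 0.0
    for k in range(n):
        u = (k + 0.5) / n
        # inverse-CDF sample of the Rayleigh jitter angle
        theta = sigma_jitter * math.sqrt(-2.0 * math.log(1.0 - u))
        loss = math.exp(-2.0 * (theta / theta_div) ** 2)
        total += ber_coherent(snr0 * loss)
    return total / n

quiet = mean_ber(snr0=9.0, theta_div=20e-6, sigma_jitter=1e-6)  # mild jitter
shaky = mean_ber(snr0=9.0, theta_div=20e-6, sigma_jitter=8e-6)  # strong jitter
```

The comparison shows the qualitative effect studied in the paper: jitter that is a sizeable fraction of the divergence angle degrades the average BER by orders of magnitude.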
Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras
NASA Astrophysics Data System (ADS)
He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
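The grid-search stage can be sketched as follows: at each trial period, fit the sinusoid amplitudes in closed form (a 2x2 linear least-squares problem) and keep the period with the smallest residual. This toy version omits the Gaussian process and priors of the full model; the data are a synthetic sparsely sampled sinusoid, not survey photometry.

```python
import math, random

random.seed(3)

P_TRUE = 150.0  # days; an illustrative Mira-like period
# Sparse, irregular sampling of a noisy sinusoid (a stand-in "light curve").
t = sorted(random.uniform(0.0, 1000.0) for _ in range(60))
y = [math.sin(2 * math.pi * ti / P_TRUE) + random.gauss(0.0, 0.2) for ti in t]

def rss_at_period(period):
    """Best-fit sinusoid y ~ a*cos(wt) + b*sin(wt) at a fixed period via
    the 2x2 normal equations; returns the residual sum of squares."""
    w = 2 * math.pi / period
    c = [math.cos(w * ti) for ti in t]
    s = [math.sin(w * ti) for ti in t]
    scc = sum(ci * ci for ci in c)
    sss = sum(si * si for si in s)
    scs = sum(ci * si for ci, si in zip(c, s))
    syc = sum(yi * ci for yi, ci in zip(y, c))
    sys_ = sum(yi * si for yi, si in zip(y, s))
    det = scc * sss - scs * scs
    a = (syc * sss - sys_ * scs) / det
    b = (scc * sys_ - scs * syc) / det
    return sum((yi - a * ci - b * si) ** 2 for yi, ci, si in zip(y, c, s))

# Dense grid over candidate periods, as in the hybrid search.
periods = [50.0 + 0.5 * k for k in range(500)]
p_hat = min(periods, key=rss_at_period)
```

The dense grid is what guards against the multimodality noted above; a purely local optimizer started at the wrong mode would converge to an alias.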
RENEW v3.2 user's manual, maintenance estimation simulation for Space Station Freedom Program
NASA Technical Reports Server (NTRS)
Bream, Bruce L.
1993-01-01
RENEW is a maintenance event estimation simulation program developed in support of the Space Station Freedom Program (SSFP). This simulation uses reliability and maintainability (R&M) and logistics data to estimate both average and time dependent maintenance demands. The simulation uses Monte Carlo techniques to generate failure and repair times as a function of the R&M and logistics parameters. The estimates are generated for a single type of orbital replacement unit (ORU). The simulation has been in use by the SSFP Work Package 4 prime contractor, Rocketdyne, since January 1991. The RENEW simulation gives closer estimates of performance since it uses a time dependent approach and depicts more factors affecting ORU failure and repair than steady state average calculations. RENEW gives both average and time dependent demand values. Graphs of failures over the mission period and yearly failure occurrences are generated. The average demand rate for the ORU over the mission period is also calculated. While RENEW displays the results in graphs, the results are also available in a data file for further use by spreadsheets or other programs. The process of using RENEW starts with keyboard entry of the R&M and operational data. Once entered, the data may be saved in a data file for later retrieval. The parameters may be viewed and changed after entry using RENEW. The simulation program runs the number of Monte Carlo simulations requested by the operator. Plots and tables of the results can be viewed on the screen or sent to a printer. The results of the simulation are saved along with the input data. Help screens are provided with each menu and data entry screen.
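A Monte Carlo renewal loop of the kind RENEW performs can be sketched as below, with exponential times-to-failure and a fixed repair time. The MTBF, repair time, and mission length are illustrative, not SSFP data.

```python
import random

random.seed(0)

def renew(mtbf_hours, repair_hours, mission_hours, n_runs=2000):
    """Monte Carlo renewal process for a single ORU type: exponential
    time-to-failure, fixed repair time.  Returns the mean number of
    failures (spares demands) per mission."""
    total = 0
    for _ in range(n_runs):
        t, failures = 0.0, 0
        while True:
            t += random.expovariate(1.0 / mtbf_hours)  # time to next failure
            if t > mission_hours:
                break
            failures += 1
            t += repair_hours  # ORU unavailable while being replaced
        total += failures
    return total / n_runs

# Illustrative values: 5000 h MTBF, 24 h replacement, 10-year mission.
mean_failures = renew(mtbf_hours=5000.0, repair_hours=24.0,
                      mission_hours=87600.0)
```

Recording the failure times per run instead of just the count would yield the time-dependent demand histograms that RENEW plots.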
Martin, Bryn A.; Kalata, Wojciech; Shaffer, Nicholas; Fischer, Paul; Luciano, Mark; Loth, Francis
2013-01-01
Elevated or reduced velocity of cerebrospinal fluid (CSF) at the craniovertebral junction (CVJ) has been associated with type I Chiari malformation (CMI). Thus, quantification of hydrodynamic parameters that describe the CSF dynamics could help assess disease severity and surgical outcome. In this study, we describe the methodology to quantify CSF hydrodynamic parameters near the CVJ and upper cervical spine utilizing subject-specific computational fluid dynamics (CFD) simulations based on in vivo MRI measurements of flow and geometry. Hydrodynamic parameters were computed for a healthy subject and two CMI patients both pre- and post-decompression surgery to determine the differences between cases. For the first time, we present the methods to quantify longitudinal impedance (LI) to CSF motion, a subject-specific hydrodynamic parameter that may have value to help quantify the CSF flow blockage severity in CMI. In addition, the following hydrodynamic parameters were quantified for each case: maximum velocity in systole and diastole, Reynolds and Womersley number, and peak pressure drop during the CSF cardiac flow cycle. The following geometric parameters were quantified: cross-sectional area and hydraulic diameter of the spinal subarachnoid space (SAS). The mean values of the geometric parameters increased post-surgically for the CMI models, but remained smaller than the healthy volunteer. All hydrodynamic parameters, except pressure drop, decreased post-surgically for the CMI patients, but remained greater than in the healthy case. Peak pressure drop alterations were mixed. To our knowledge this study represents the first subject-specific CFD simulation of CMI decompression surgery and quantification of LI in the CSF space. Further study in a larger patient and control group is needed to determine if the presented geometric and/or hydrodynamic parameters are helpful for surgical planning. PMID:24130704
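The non-dimensional parameters named above follow from their standard definitions; the sketch below computes them for assumed CSF flow values, not the paper's measurements.

```python
import math

def hydrodynamic_numbers(peak_velocity, hydraulic_diameter, heart_rate_hz,
                         nu=7.0e-7):
    """Reynolds and Womersley numbers for pulsatile CSF flow.
    nu is the kinematic viscosity of CSF, taken as roughly that of water
    at body temperature (~7e-7 m^2/s, an assumption)."""
    re = peak_velocity * hydraulic_diameter / nu          # Re = U*Dh/nu
    omega = 2.0 * math.pi * heart_rate_hz                 # cardiac frequency
    alpha = (hydraulic_diameter / 2.0) * math.sqrt(omega / nu)  # Womersley
    return re, alpha

# Illustrative values for the cervical subarachnoid space.
re, alpha = hydrodynamic_numbers(peak_velocity=0.03,       # m/s
                                 hydraulic_diameter=0.006,  # m
                                 heart_rate_hz=1.0)
```

Values of this order (laminar Re, Womersley near 10) are why CSF flow is treated as laminar but strongly pulsatile in such CFD studies.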
Modeling the Structure and Dynamics of Dwarf Spheroidal Galaxies with Dark Matter and Tides
NASA Astrophysics Data System (ADS)
Muñoz, Ricardo R.; Majewski, Steven R.; Johnston, Kathryn V.
2008-05-01
We report the results of N-body simulations of disrupting satellites aimed at exploring whether the observed features of dSphs can be accounted for with simple, mass-follows-light (MFL) models including tidal disruption. As a test case, we focus on the Carina dwarf spheroidal (dSph), which presently is the dSph system with the most extensive data at large radius. We find that previous N-body, MFL simulations of dSphs did not sufficiently explore the parameter space of satellite mass, density, and orbital shape to find adequate matches to Galactic dSph systems, whereas with a systematic survey of parameter space we are able to find tidally disrupting, MFL satellite models that rather faithfully reproduce Carina's velocity profile, velocity dispersion profile, and projected density distribution over its entire sampled radius. The successful MFL model satellites have very eccentric orbits, currently favored by CDM models, and central velocity dispersions that still yield an accurate representation of the bound mass and observed central M/L ~ 40 of Carina, despite inflation of the velocity dispersion outside the dSph core by unbound debris. Our survey of parameter space also allows us to address a number of commonly held misperceptions of tidal disruption and its observable effects on dSph structure and dynamics. The simulations suggest that even modest tidal disruption can have a profound effect on the observed dynamics of dSph stars at large radii. Satellites that are well described by tidally disrupting MFL models could still be fully compatible with ΛCDM if, for example, they represent a later stage in the evolution of luminous subhalos.
Largescale Long-term particle Simulations of Runaway electrons in Tokamaks
NASA Astrophysics Data System (ADS)
Liu, Jian; Qin, Hong; Wang, Yulei
2016-10-01
Understanding runaway dynamical behavior is crucial to assess the safety of tokamaks. Though many important analytical and numerical results have been achieved, the overall dynamic behavior of runaway electrons in a realistic tokamak configuration is still rather vague. In this work, secular full-orbit simulations of runaway electrons are carried out based on a relativistic volume-preserving algorithm. Detailed phase-space behaviors of runaway electrons are investigated on different timescales spanning 11 orders of magnitude. A detailed analysis of the collisionless neoclassical scattering is provided when considering the coupling between the rotation of the momentum vector and the background field. On large timescales, the initial condition of runaway electrons in phase space globally influences the runaway distribution. It is discovered that the parameters and field configuration of tokamaks can modify the runaway electron dynamics significantly. Simulations on 10 million cores of a supercomputer using the APT code have been completed. A resolution of 10^7 in phase space is used, and simulations are performed for 10^11 time steps. Large-scale simulations show that in a realistic fusion reactor, the concern of runaway electrons is not as serious as previously thought. This research was supported by the National Magnetic Confinement Fusion Energy Research Project (2015GB111003, 2014GB124005), the National Natural Science Foundation of China (NSFC-11575185, 11575186) and the GeoAlgorithmic Plasma Simulator (GAPS) Project.
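A standard relativistic Boris pusher, one member of the volume-preserving family the abstract refers to (a sketch of the class of method, not the APT code itself), can be written as below. In a static magnetic field with no electric field, the update is a pure rotation, so the momentum magnitude, and hence the energy, is conserved exactly, which is what makes such integrators suitable for secular, long-timescale runs.

```python
import math

Q, M, C = 1.0, 1.0, 1.0  # charge, mass, speed of light in normalized units

def boris_step(p, B, dt):
    """Advance momentum p one step in a static magnetic field B (E = 0).
    The two cross-product half-rotations below are the Boris scheme."""
    gamma = math.sqrt(1.0 + sum(pi * pi for pi in p) / (M * C) ** 2)
    t = [Q * dt / (2.0 * M * gamma) * bi for bi in B]
    t2 = sum(ti * ti for ti in t)
    s = [2.0 * ti / (1.0 + t2) for ti in t]

    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    p_prime = [pi + ci for pi, ci in zip(p, cross(p, t))]
    return [pi + ci for pi, ci in zip(p, cross(p_prime, s))]

p = [3.0, 0.0, 0.0]            # relativistic momentum, normalized
for _ in range(50000):         # long-term stability: no energy drift
    p = boris_step(p, [0.0, 0.0, 1.0], dt=0.05)
```

A full runaway code adds the electric-field half-kicks, radiation losses, and realistic tokamak fields around this same core rotation.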
Modeling to predict pilot performance during CDTI-based in-trail following experiments
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1984-01-01
A mathematical model was developed of the flight system with the pilot using a cockpit display of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. Both in-trail and vertical dynamics were included. The nominal spacing was based on one of three criteria (Constant Time Predictor; Constant Time Delay; or Acceleration Cue). This model was used to simulate digitally the dynamics of a string of multiple following aircraft, including response to initial position errors. The simulation was used to predict the outcome of a series of in-trail following experiments, including pilot performance in maintaining correct longitudinal spacing and vertical position. The experiments were run in the NASA Ames Research Center multi-cab cockpit simulator facility. The experimental results were then used to evaluate the model and its prediction accuracy. Model parameters were adjusted, so that modeled performance matched experimental results. Lessons learned in this modeling and prediction study are summarized.
NASA Astrophysics Data System (ADS)
Jones, Scott B.; Or, Dani
1999-04-01
Plants grown in porous media are part of a bioregenerative life support system designed for long-duration space missions. Reduced gravity conditions of orbiting spacecraft (microgravity) alter several aspects of liquid flow and distribution within partially saturated porous media. The objectives of this study were to evaluate the suitability of conventional capillary flow theory in simulating water distribution in porous media measured in a microgravity environment. Data from experiments aboard the Russian space station Mir and a U.S. space shuttle were simulated by elimination of the gravitational term from the Richards equation. Qualitative comparisons with media hydraulic parameters measured on Earth suggest narrower pore size distributions and inactive or nonparticipating large pores in microgravity. Evidence of accentuated hysteresis, altered soil-water characteristic, and reduced unsaturated hydraulic conductivity from microgravity simulations may be attributable to a number of proposed secondary mechanisms. These are likely spawned by enhanced and modified paths of interfacial flows and an altered force ratio of capillary to body forces in microgravity.
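Dropping the gravitational term can be mimicked in a toy 1-D redistribution model, where diffusion stands in for capillarity and an optional upwind advection term for gravity drainage. Constant coefficients are assumed; this illustrates the modeling choice, not the Richards-equation solver used in the study.

```python
# Toy 1-D moisture redistribution: diffusion (capillarity) plus an
# optional gravity advection term.  All coefficients are illustrative.
N, DX, DT, D, K = 100, 0.01, 0.001, 2e-3, 5e-3

def redistribute(gravity, steps=2000):
    """Explicit scheme; a wet band in dry medium relaxes over time."""
    theta = [1.0 if 40 <= i < 60 else 0.1 for i in range(N)]
    for _ in range(steps):
        new = theta[:]
        for i in range(1, N - 1):
            diff = D * (theta[i + 1] - 2 * theta[i] + theta[i - 1]) / DX**2
            adv = (K * (theta[i] - theta[i - 1]) / DX) if gravity else 0.0
            new[i] = theta[i] + DT * (diff - adv)
        theta = new
    return theta

micro = redistribute(gravity=False)   # microgravity: symmetric spreading
ground = redistribute(gravity=True)   # 1 g: wetting front drifts downward

def com(th):  # center of mass of the moisture profile, in cell units
    return sum(i * t for i, t in enumerate(th)) / sum(th)
```

In microgravity the profile spreads symmetrically about its starting position, while with gravity the center of mass drifts downward, the qualitative difference the Mir and shuttle data probe.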
NASA Technical Reports Server (NTRS)
Plitau, Denis; Prasad, Narasimha S.
2012-01-01
The Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission recommended by the NRC Decadal Survey has a desired accuracy of 0.3% in carbon dioxide mixing ratio (XCO2) retrievals, requiring careful selection and optimization of the instrument parameters. NASA Langley Research Center (LaRC) is investigating the 1.57 micron carbon dioxide band as well as the 1.26-1.27 micron oxygen bands for the proposed ASCENDS mission requirements. Simulation studies are underway for these bands to select optimum instrument parameters. The simulations are based on a multi-wavelength lidar modeling framework being developed at NASA LaRC to predict the performance of CO2 and O2 sensing from space and airborne platforms. The modeling framework consists of a lidar simulation module and a line-by-line calculation component with interchangeable lineshape routines to test the performance of alternative lineshape models in the simulations. As an option, the line-by-line radiative transfer model (LBLRTM) program may also be used for line-by-line calculations. The modeling framework is being used to perform error analysis, establish optimum measurement wavelengths, and identify the best lineshape models to be used in CO2 and O2 retrievals. Several additional programs for HITRAN database management and related simulations are planned to be included in the framework. The description of the modeling framework with selected results of the simulation studies for CO2 and O2 sensing is presented in this paper.
Numerical simulations of high-energy flows in accreting magnetic white dwarfs
NASA Astrophysics Data System (ADS)
Van Box Som, Lucile; Falize, É.; Bonnet-Bidaud, J.-M.; Mouchet, M.; Busschaert, C.; Ciardi, A.
2018-01-01
Some polars show quasi-periodic oscillations (QPOs) in their optical light curves that have been interpreted as the result of shock oscillations driven by the cooling instability. Although numerical simulations can recover this physics, they wrongly predict QPOs in the X-ray luminosity and have also failed to reproduce the observed frequencies, at least for the limited range of parameters explored so far. Given the uncertainties on the observed polar parameters, it is still unclear whether simulations can reproduce the observations. The aim of this work is to study QPOs over a parameter range covering all polars that show them. We perform numerical simulations including gravity, cyclotron and bremsstrahlung radiative losses, for a wide range of polar parameters, and compare our results with the astronomical data using synthetic X-ray and optical luminosities. We show that shock oscillations are the result of complex shock dynamics triggered by the interplay of two radiative instabilities. The secondary shock forms at the acoustic horizon in the post-shock region in agreement with our estimates from steady-state solutions. We also demonstrate that the secondary shock is essential to sustain the accretion shock oscillations at the average height predicted by our steady-state accretion model. Finally, in spite of the large explored parameter space, matching the observed QPO parameters requires a combination of parameters inconsistent with the observed ones. This difficulty highlights the limits of one-dimensional simulations, suggesting that multi-dimensional effects are needed to understand the non-linear dynamics of accretion columns in polars and the origins of QPOs.
Oscillatory cellular patterns in three-dimensional directional solidification
NASA Astrophysics Data System (ADS)
Tourret, D.; Debierre, J.-M.; Song, Y.; Mota, F. L.; Bergeon, N.; Guérin, R.; Trivedi, R.; Billia, B.; Karma, A.
2015-10-01
We present a phase-field study of oscillatory breathing modes observed during the solidification of three-dimensional cellular arrays in microgravity. Directional solidification experiments conducted onboard the International Space Station have allowed us to observe spatially extended homogeneous arrays of cells and dendrites while minimizing the amount of gravity-induced convection in the liquid. In situ observations of transparent alloys have revealed the existence, over a narrow range of control parameters, of oscillations in cellular arrays with a period ranging from about 25 to 125 min. Cellular patterns are spatially disordered, and the oscillations of individual cells are spatiotemporally uncorrelated at long distance. However, in regions displaying short-range spatial ordering, groups of cells can synchronize into oscillatory breathing modes. Quantitative phase-field simulations show that the oscillatory behavior of cells in this regime is linked to a stability limit of the spacing in hexagonal cellular array structures. For relatively high cellular front undercooling (i.e., low growth velocity or high thermal gradient), a gap appears in the otherwise continuous range of stable array spacings. Close to this gap, a sustained oscillatory regime appears with a period that compares quantitatively well with experiment. For control parameters where this gap exists, oscillations typically occur for spacings at the edge of the gap. However, after a change of growth conditions, oscillations can also occur for nearby values of control parameters where this gap just closes and a continuous range of spacings exists. In addition, sustained oscillations close to the opening of this stable gap exhibit a slow periodic modulation of the phase shift among cells with a period of several hours.
While long-range coherence of breathing modes can be achieved in simulations for a perfect spatial arrangement of cells as initial condition, global disorder is observed in both three-dimensional experiments and simulations from realistic noisy initial conditions. In the latter case, erratic tip-splitting events promoted by large-amplitude oscillations contribute to maintaining the long-range array disorder, unlike in thin-sample experiments where long-range coherence of oscillations is experimentally observable.
Oscillatory cellular patterns in three-dimensional directional solidification
Tourret, D.; Debierre, J. -M.; Song, Y.; ...
2015-09-11
We present a phase-field study of oscillatory breathing modes observed during the solidification of three-dimensional cellular arrays in microgravity. Directional solidification experiments conducted onboard the International Space Station have allowed, for the first time, the observation of spatially extended homogeneous arrays of cells and dendrites while minimizing the amount of gravity-induced convection in the liquid. In situ observations of transparent alloys have revealed the existence, over a narrow range of control parameters, of oscillations in cellular arrays with a period ranging from about 25 to 125 minutes. Cellular patterns are spatially disordered, and the oscillations of individual cells are spatiotemporally uncorrelated at long distance. However, in regions displaying short-range spatial ordering, groups of cells can synchronize into oscillatory breathing modes. Quantitative phase-field simulations show that the oscillatory behavior of cells in this regime is linked to a stability limit of the spacing in hexagonal cellular array structures. For relatively high cellular front undercooling (i.e., low growth velocity or high thermal gradient), a gap appears in the otherwise continuous range of stable array spacings. Close to this gap, a sustained oscillatory regime appears with a period that compares quantitatively well with experiment. For control parameters where this gap exists, oscillations typically occur for spacings at the edge of the gap. However, after a change of growth conditions, oscillations can also occur for nearby values of control parameters where this gap just closes and a continuous range of spacings exists. In addition, sustained oscillations close to the opening of this stable gap exhibit a slow periodic modulation of the phase shift among cells with a period of several hours.
While long-range coherence of breathing modes can be achieved in simulations for a perfect spatial arrangement of cells as initial condition, global disorder is observed in both three-dimensional experiments and simulations from realistic noisy initial conditions. In the latter case, erratic tip-splitting events promoted by large-amplitude oscillations contribute to maintaining the long-range array disorder, unlike in thin-sample experiments where long-range coherence of oscillations is experimentally observable.
Oscillatory cellular patterns in three-dimensional directional solidification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tourret, D.; Debierre, J. -M.; Song, Y.
We present a phase-field study of oscillatory breathing modes observed during the solidification of three-dimensional cellular arrays in microgravity. Directional solidification experiments conducted onboard the International Space Station have allowed, for the first time, the observation of spatially extended homogeneous arrays of cells and dendrites while minimizing the amount of gravity-induced convection in the liquid. In situ observations of transparent alloys have revealed the existence, over a narrow range of control parameters, of oscillations in cellular arrays with a period ranging from about 25 to 125 minutes. Cellular patterns are spatially disordered, and the oscillations of individual cells are spatiotemporally uncorrelated at long distance. However, in regions displaying short-range spatial ordering, groups of cells can synchronize into oscillatory breathing modes. Quantitative phase-field simulations show that the oscillatory behavior of cells in this regime is linked to a stability limit of the spacing in hexagonal cellular array structures. For relatively high cellular front undercooling (i.e., low growth velocity or high thermal gradient), a gap appears in the otherwise continuous range of stable array spacings. Close to this gap, a sustained oscillatory regime appears with a period that compares quantitatively well with experiment. For control parameters where this gap exists, oscillations typically occur for spacings at the edge of the gap. However, after a change of growth conditions, oscillations can also occur for nearby values of control parameters where this gap just closes and a continuous range of spacings exists. In addition, sustained oscillations close to the opening of this stable gap exhibit a slow periodic modulation of the phase shift among cells with a period of several hours.
While long-range coherence of breathing modes can be achieved in simulations for a perfect spatial arrangement of cells as initial condition, global disorder is observed in both three-dimensional experiments and simulations from realistic noisy initial conditions. In the latter case, erratic tip-splitting events promoted by large-amplitude oscillations contribute to maintaining the long-range array disorder, unlike in thin-sample experiments where long-range coherence of oscillations is experimentally observable.
Utilization of Short-Simulations for Tuning High-Resolution Climate Model
NASA Astrophysics Data System (ADS)
Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.
2016-12-01
Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning the models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, which is guided by a combination of short (<10 days) and longer (~1 year) Perturbed Parameter Ensemble (PPE) simulations at low resolution to identify model feature sensitivity to parameter changes. The CAPT tests have been found to be effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that it is also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "home in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices for the reduction of large model biases dramatically improves the turnaround time for the tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations.
An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context, along with more detailed assessment once an educated set of parameter choices is selected. Limitations of using short-term simulations for tuning climate models are also discussed.
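The PPE screening step described above can be sketched as a toy loop: sample candidate parameter sets within bounds, score each with a cheap short-simulation proxy, and keep the best few for longer runs. The parameter names, their ranges, and the quadratic stand-in for a hindcast bias score are invented for illustration and are not taken from ACME.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tunable parameters with plausible ranges (illustrative only).
bounds = {"autoconv_rate": (0.5, 2.0), "ice_fall_speed": (0.8, 1.2)}

def short_hindcast_bias(params):
    # Stand-in for a <10-day hindcast skill score: distance from an
    # optimum at (1.2, 1.0) that is unknown to the tuner.
    return (params[0] - 1.2) ** 2 + 4.0 * (params[1] - 1.0) ** 2

# Perturbed-parameter ensemble: random sampling across the full ranges.
n = 50
lo = np.array([b[0] for b in bounds.values()])
hi = np.array([b[1] for b in bounds.values()])
ensemble = lo + (hi - lo) * rng.random((n, 2))

scores = np.array([short_hindcast_bias(p) for p in ensemble])
best = ensemble[np.argsort(scores)[:5]]   # candidates for longer AMIP-type runs
```

The cheap score stands in for the expensive hindcast; in the described workflow the surviving candidates would then be verified with longer AMIP-type simulations.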
NASA Technical Reports Server (NTRS)
Huang, Alex S.; Balasubramanian, Siva; Tepelus, Tudor; Sadda, Jaya; Sadda, Srinivas; Stenger, Michael B.; Lee, Stuart M. C.; Laurie, Steve S.; Liu, John; Macias, Brandon R.
2017-01-01
Changes in vision have been well documented among astronauts during and after long-duration space flight. One hypothesis is that the space-flight-induced headward fluid shift alters posterior ocular pressure and volume and may contribute to visual acuity decrements. Therefore, we evaluated venoconstrictive thigh cuffs as a potential countermeasure to the headward fluid shift-induced effects on intraocular pressure (IOP) and cephalic vascular pressure and volumes.
Hands-on parameter search for neural simulations by a MIDI-controller.
Eichner, Hubert; Borst, Alexander
2011-01-01
Computational neuroscientists frequently encounter the challenge of parameter fitting: exploring a usually high-dimensional variable space to find a parameter set that reproduces an experimental data set. One common approach is to use automated search algorithms such as gradient descent or genetic algorithms. However, these approaches suffer from several shortcomings related to their lack of understanding of the underlying question, such as defining a suitable error function or getting stuck in local minima. Another widespread approach is manual parameter fitting using a keyboard or a mouse, evaluating different parameter sets following the user's intuition. However, this process is often cumbersome and time-intensive. Here, we present a new method for manual parameter fitting. A MIDI controller provides input to the simulation software, where model parameters are tuned according to the knob and slider positions on the device. The model is immediately updated on every parameter change, continuously plotting the latest results. Given reasonably short simulation times of less than one second, we find this method to be highly efficient in quickly determining good parameter sets. Our approach bears a close resemblance to tuning the sound of an analog synthesizer, giving the user a very good intuition of the problem at hand, such as immediate feedback on whether and how results are affected by specific parameter changes. In addition to its use in research, our approach should be an ideal teaching tool, allowing students to interactively explore complex models such as Hodgkin-Huxley or dynamical systems.
PMID:22066027
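A minimal sketch of the knob-to-parameter mapping such a setup needs: a 7-bit MIDI control-change value is scaled onto a model parameter range, with a log curve for parameters spanning orders of magnitude. The function name and the example conductance range are illustrative assumptions; the paper's actual software and any real MIDI I/O (e.g. via a library such as mido) are not shown.

```python
def cc_to_param(cc_value, lo, hi, curve="linear"):
    """Map a 7-bit MIDI control-change value (0-127) onto a model
    parameter range; a log curve suits parameters spanning decades."""
    frac = cc_value / 127.0
    if curve == "log":
        return lo * (hi / lo) ** frac
    return lo + (hi - lo) * frac

# e.g. a knob at mid-travel setting a conductance between 1e-5 and 1e-2 (hypothetical)
g = cc_to_param(64, 1e-5, 1e-2, curve="log")
```

In a live session the simulation would re-run on every incoming control-change message, replotting the result, which is what gives the synthesizer-like feel described above.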
Estimation of channel parameters and background irradiance for free-space optical link.
Khatoon, Afsana; Cowley, William G; Letzepis, Nick; Giggenbach, Dirk
2013-05-10
Free-space optical communication can experience severe fading due to optical scintillation in long-range links. Channel estimation is also corrupted by background and electrical noise. Accurate estimation of channel parameters and the scintillation index (SI) depends on perfect removal of background irradiance. In this paper, we propose three different methods, the minimum-value (MV), mean-power (MP), and maximum-likelihood (ML) based methods, to remove the background irradiance from channel samples. The MV and MP methods do not require knowledge of the scintillation distribution; while the ML-based method assumes gamma-gamma scintillation, it can be easily modified to accommodate other distributions. Each estimator's performance is evaluated from low- to high-SI regimes using both simulation data and experimental measurements. The MV and MP methods have much lower complexity than the ML-based method. However, the ML-based method shows better SI and background-irradiance estimation performance.
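The minimum-value idea lends itself to a short sketch: in deep fades the channel contribution approaches zero, so the smallest received sample approximates the constant background irradiance. This is a hedged reconstruction from the method's name, not the paper's exact estimator; the MP and ML variants are omitted, and the gamma-distributed channel below is synthetic.

```python
import numpy as np

def remove_background_mv(samples):
    """Minimum-value (MV) background estimate: subtract the smallest
    received sample, taken as the constant background irradiance."""
    b_hat = samples.min()
    return samples - b_hat, b_hat

def scintillation_index(intensity):
    """SI = var(I) / mean(I)^2."""
    m = intensity.mean()
    return intensity.var() / m**2

# Synthetic channel: gamma fading (mean 1, true SI = 0.25) plus background 0.5.
rng = np.random.default_rng(6)
channel = rng.gamma(shape=4.0, scale=0.25, size=20000)
received = channel + 0.5
corrected, b_hat = remove_background_mv(received)
```

Because subtracting a positive constant lowers the mean while leaving the variance unchanged, the corrected samples have a higher (less biased-low) SI than the raw received samples.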
Building the Case for SNAP: Creation of Multi-Band, Simulated Images With Shapelets
NASA Technical Reports Server (NTRS)
Ferry, Matthew A.
2005-01-01
Dark energy has simultaneously been the most elusive and most important phenomenon in the shaping of the universe. A case for a proposed space telescope called SNAP (SuperNova Acceleration Probe) is being built, a crucial component of which is image simulations. One method for this is "Shapelets," developed at Caltech. Shapelets form an orthonormal basis and are uniquely able to represent realistic space images and create new images based on real ones. Previously, simulations were created using the Hubble Deep Field (HDF) as a basis set in one band. In this project, image simulations are created using the 4 bands of the Hubble Ultra Deep Field (UDF) as a basis set. This provides a better basis for simulations because (1) the survey is deeper, (2) it has higher resolution, and (3) this is a step closer to simulating the 9 bands of SNAP. Image simulations are achieved by detecting sources in the UDF, decomposing them into shapelets, tweaking their parameters in realistic ways, and recomposing them into new images. Morphological tests were also run to verify the realism of the simulations. The simulations have a wide variety of uses, including the creation of weak gravitational lensing simulations.
Oracle estimation of parametric models under boundary constraints.
Wong, Kin Yau; Goldberg, Yair; Fine, Jason P
2016-12-01
In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.
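The boundary-shrinkage idea can be illustrated with a toy soft-thresholding rule for a boundary at zero: estimates within a penalty-scaled band of the boundary are set exactly onto it, while clearly interior estimates are barely moved, the adaptive weight 1/|θ̂| penalizing near-boundary values more. This is a simplified stand-in with an invented penalty form, not the paper's estimator.

```python
import numpy as np

def shrink_to_boundary(theta_hat, se, lam):
    """Toy adaptive-lasso-style shrinkage toward a boundary at zero.
    Estimates within lam*se/|theta_hat| of zero snap onto the boundary;
    large estimates are essentially unchanged (illustrative form only)."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    w = 1.0 / np.maximum(np.abs(theta_hat), 1e-12)   # adaptive weight
    return np.sign(theta_hat) * np.maximum(np.abs(theta_hat) - lam * se * w, 0.0)

# A near-boundary estimate collapses to 0; a clearly interior one survives.
est = shrink_to_boundary([0.01, 5.0], se=1.0, lam=0.05)
```

The oracle property described in the abstract corresponds to this dichotomy: on-boundary parameters are treated as known to be on the boundary, interior parameters as unconstrained.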
NASA Astrophysics Data System (ADS)
Xu, Zheyao; Qi, Naiming; Chen, Yukun
2015-12-01
Spacecraft simulators are widely used to study the dynamics, guidance, navigation, and control of a spacecraft on the ground. A spacecraft simulator can have three rotational degrees of freedom by using a spherical air bearing to simulate a frictionless and micro-gravity space environment. The moment of inertia and center of mass are essential for control system design of ground-based three-axis spacecraft simulators. Unfortunately, they cannot be known precisely. This paper presents two approaches to estimate the inertia parameters: a recursive least-squares (RLS) approach with a tracking differentiator (TD), and an Extended Kalman Filter (EKF) method. The tracking differentiator filters the noise coupled with the measured signals and generates derivatives of the measured signals. Combining two TD filters in series yields the angular accelerations required by RLS (TD-TD-RLS). Another method, which does not need to estimate the angular accelerations, uses the integrated form of the dynamics equation. An extended TD (ETD) filter, which can also generate the integral of a function of the signals, is presented for RLS (denoted ETD-RLS). States and inertia parameters are estimated simultaneously using the EKF. The observability is analyzed. All proposed methods are illustrated by simulations and experiments.
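A generic recursive least-squares update of the kind such schemes build on can be sketched as follows. The regressor/parameter interpretation for the inertia problem is only indicated in comments; the TD filtering stage and the actual simulator dynamics are omitted, and the driver below recovers two invented constants from noiseless synthetic data.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step with forgetting factor lam.
    phi is the regressor vector (for the simulator problem it would
    stack angular velocity/acceleration terms), y the measurement,
    theta the parameter estimate (e.g. inertia parameters)."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam
    return theta, P

# Recover two constant parameters from noiseless "measurements".
rng = np.random.default_rng(5)
theta_true = np.array([2.0, -1.0])
theta, P = np.zeros(2), np.eye(2) * 1e6            # large P0: weak prior
for _ in range(200):
    phi = rng.normal(size=2)
    theta, P = rls_update(theta, P, phi, phi @ theta_true)
```

With noiseless data and a weak prior, the estimate converges to the true parameters after only a few linearly independent regressors.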
Simulated Wake Characteristics Data for Closely Spaced Parallel Runway Operations Analysis
NASA Technical Reports Server (NTRS)
Guerreiro, Nelson M.; Neitzke, Kurt W.
2012-01-01
A simulation experiment was performed to generate and compile wake characteristics data relevant to the evaluation and feasibility analysis of closely spaced parallel runway (CSPR) operational concepts. While the experiment in this work is not tailored to any particular operational concept, the generated data applies to the broader class of CSPR concepts, where a trailing aircraft on a CSPR approach is required to stay ahead of the wake vortices generated by a lead aircraft on an adjacent CSPR. Data for wake age, circulation strength, and wake altitude change, at various lateral offset distances from the wake-generating lead aircraft approach path were compiled for a set of nine aircraft spanning the full range of FAA and ICAO wake classifications. A total of 54 scenarios were simulated to generate data related to key parameters that determine wake behavior. Of particular interest are wake age characteristics that can be used to evaluate both time- and distance- based in-trail separation concepts for all aircraft wake-class combinations. A simple first-order difference model was developed to enable the computation of wake parameter estimates for aircraft models having weight, wingspan and speed characteristics similar to those of the nine aircraft modeled in this work.
Dirac Cellular Automaton from Split-step Quantum Walk
Mallick, Arindam; Chandrashekar, C. M.
2016-01-01
Simulation of one quantum system by another has implications for the realization of a quantum machine that can imitate any quantum system and solve problems that are not accessible to classical computers. One approach to engineering quantum simulations is to discretize the space-time degrees of freedom in quantum dynamics and define a quantum cellular automaton (QCA), a local unitary update rule on a lattice. Different models of QCA are constructed using sets of conditions which are not unique and are not always implementable on other systems. The Dirac Cellular Automaton (DCA) is one such model, constructed for the Dirac Hamiltonian (DH) in free quantum field theory. Here, starting from a split-step discrete-time quantum walk (QW), which is uniquely defined for experimental implementation, we recover the DCA along with all the fine oscillations in position space and bridge the missing connection between DH, DCA, and QW. We present the contribution of the parameters responsible for the fine oscillations to the Zitterbewegung frequency and entanglement. The tuneability of the evolution parameters demonstrated in experimental implementations of QWs establishes the walk as an efficient tool for designing quantum simulators and approaching quantum field theory from the principles of quantum information theory. PMID:27184159
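A split-step discrete-time quantum walk of the kind described can be sketched in a few lines: two coin rotations per step, each followed by a direction-dependent shift of one internal component. The coin angles and lattice size below are arbitrary choices for illustration, not the paper's parameters.

```python
import numpy as np

def coin(theta):
    """SU(2) coin rotation applied at each half-step."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def split_step(psi, theta1, theta2):
    """One split-step: coin(theta1), left-shift of component 0,
    coin(theta2), right-shift of component 1."""
    psi = psi @ coin(theta1).T
    psi[:, 0] = np.roll(psi[:, 0], -1)
    psi = psi @ coin(theta2).T
    psi[:, 1] = np.roll(psi[:, 1], 1)
    return psi

# Walker starts localized at the centre of a 201-site line.
n_sites = 201
psi = np.zeros((n_sites, 2), dtype=complex)
psi[n_sites // 2, 0] = 1.0
for _ in range(50):
    psi = split_step(psi, 0.3, 0.6)
prob = (np.abs(psi) ** 2).sum(axis=1)   # position distribution
```

Each operation is unitary, so the total probability is conserved while the walker spreads ballistically; tuning the two angles is what connects this evolution to Dirac-like dynamics in the paper.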
NASA Astrophysics Data System (ADS)
Ginsburger, Kévin; Poupon, Fabrice; Beaujoin, Justine; Estournet, Delphine; Matuschke, Felix; Mangin, Jean-François; Axer, Markus; Poupon, Cyril
2018-02-01
White matter is composed of irregularly packed axons, leading to structural disorder in the extra-axonal space. Diffusion MRI experiments using oscillating gradient spin echo (OGSE) sequences have shown that the diffusivity transverse to axons in this extra-axonal space depends on the frequency of the employed sequence. In this study, we observe the same frequency dependence using 3D simulations of the diffusion process in disordered media. We design a novel white matter numerical phantom generation algorithm which constructs biomimicking geometric configurations with few design parameters and enables control of the level of disorder of the generated phantoms. The influence of various geometrical parameters present in white matter, such as global angular dispersion, tortuosity, presence of Ranvier nodes, and beading, on the frequency dependence of the extra-cellular perpendicular diffusivity was investigated by simulating the diffusion process in numerical phantoms of increasing complexity and fitting the resulting simulated diffusion MR signal attenuation with an adequate analytical model designed for trapezoidal OGSE sequences. This work suggests that angular dispersion, and especially beading, have non-negligible effects on this extra-cellular diffusion metric, which may be measured using standard OGSE DW-MRI clinical protocols.
A sparse representation of gravitational waves from precessing compact binaries
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Szilagyi, Bela; Galley, Chad; Tiglio, Manuel
2014-03-01
With the advanced generation of gravitational wave detectors coming online in the near future, there is a need for accurate models of gravitational waveforms emitted by binary neutron stars and/or black holes. Post-Newtonian approximations work well for the early inspiral, and there are models covering the late inspiral as well as merger and ringdown for the non-precessing case. While numerical relativity simulations have no difficulty with precession and can now provide accurate waveforms for a broad range of parameters, covering the 7-dimensional precessing parameter space with ~10^7 simulations is not feasible. There is still hope, as reduced order modelling techniques have been highly successful in reducing the impact of the curse of dimensionality for lower dimensional cases. We construct a reduced basis of Post-Newtonian waveforms for the full parameter space with mass ratios up to 10 and spins up to 0.9, and find that for the last 100 orbits only ~50 waveforms are needed. The huge compression relies heavily on a reparametrization which seeks to reduce the non-linearity of the waveforms. We also show that the addition of merger and ringdown only mildly increases the size of the basis.
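The greedy reduced-basis construction underlying such compression can be sketched on synthetic data: at each step, add the catalogue member worst represented by the current orthonormal basis, then project it out of all residuals. This is a generic greedy algorithm, not the authors' code; real use would employ a physically motivated inner product and post-Newtonian waveforms rather than random vectors.

```python
import numpy as np

def greedy_reduced_basis(waveforms, tol=1e-6):
    """Greedily build an orthonormal basis: repeatedly add the waveform
    with the largest residual norm, then remove its projection from
    every residual (simple reduced-basis sketch)."""
    residual = np.array(waveforms, dtype=complex)
    basis = []
    while True:
        errs = np.linalg.norm(residual, axis=1)
        i = int(np.argmax(errs))
        if errs[i] < tol:
            break
        e = residual[i] / errs[i]
        basis.append(e)
        residual = residual - np.outer(residual @ e.conj(), e)
    return np.array(basis)

# Synthetic "catalogue": 20 waveforms that are combinations of 3 modes,
# so the greedy sweep should stop after exactly 3 basis elements.
rng = np.random.default_rng(4)
modes = rng.normal(size=(3, 50)) + 1j * rng.normal(size=(3, 50))
catalog = rng.normal(size=(20, 3)) @ modes
basis = greedy_reduced_basis(catalog)
```

The basis size tracks the effective dimensionality of the catalogue rather than its cardinality, which is the mechanism behind needing only ~50 basis waveforms for the last 100 orbits.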
NASA Technical Reports Server (NTRS)
Mukhopadhyay, A. K.
1979-01-01
Design adequacy of the lead-lag compensator of the frequency loop, accuracy checking of the analytical expression for the electrical motor transfer function, and performance evaluation of the speed control servo of the digital tape recorder used on-board the 1976 Viking Mars Orbiters and Voyager 1977 Jupiter-Saturn flyby spacecraft are analyzed. The transfer functions of the most important parts of a simplified frequency loop used for test simulation are described and ten simulation cases are reported. The first four of these cases illustrate the method of selecting the most suitable transfer function for the hysteresis synchronous motor, while the rest verify and determine the servo performance parameters and alternative servo compensation schemes. It is concluded that the linear methods provide a starting point for the final verification/refinement of servo design by nonlinear time response simulation and that the variation of the parameters of the static/dynamic Coulomb friction is as expected in a long-life space mission environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Justin Matthew
These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL shaped-charge geometry in PAGOSA, a mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method, where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
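The quoted UQ step, a 95% data range about the median from brute-force Monte Carlo sampling, can be sketched with an invented surrogate. The input distributions, their parameters, and the linear stand-in for the PAGOSA output are all illustrative assumptions, not values from the presentation.

```python
import numpy as np

rng = np.random.default_rng(1)

def jet_tip_velocity(det_vel, rho0):
    """Cheap stand-in surrogate for the simulation output (illustrative)."""
    return 2.1 * det_vel / 8.0 + 0.5 * (rho0 - 1.6)

# Brute-force MC sampling of uncertain inputs (hypothetical distributions).
n = 100_000
det_vel = rng.normal(7.8, 0.1, n)   # detonation velocity, km/s
rho0 = rng.normal(1.63, 0.01, n)    # initial density, g/cc

out = jet_tip_velocity(det_vel, rho0)
median = np.median(out)
lo, hi = np.percentile(out, [2.5, 97.5])   # 95% data range about the median
```

In practice each sample would be a full simulation (or an RBFN surrogate evaluation), but the percentile bookkeeping is the same.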
Cloud GPU-based simulations for SQUAREMR.
Kantasis, George; Xanthis, Christos G; Haris, Kostas; Heiberg, Einar; Aletras, Anthony H
2017-01-01
Quantitative Magnetic Resonance Imaging (MRI) is a research tool, used more and more in clinical practice, as it provides objective information with respect to the tissues being imaged. Pixel-wise T1 quantification (T1 mapping) of the myocardium is one such application with diagnostic significance. A number of mapping sequences have been developed for myocardial T1 mapping with a wide range in terms of measurement accuracy and precision. Furthermore, measurement results obtained with these pulse sequences are affected by errors introduced by the particular acquisition parameters used. SQUAREMR is a new method which has the potential of improving the accuracy of these mapping sequences through the use of massively parallel simulations on Graphical Processing Units (GPUs) by taking into account different acquisition parameter sets. This method has been shown to be effective in myocardial T1 mapping; however, execution times may exceed 30 min, which is prohibitively long for clinical applications. The purpose of this study was to accelerate the construction of SQUAREMR's multi-parametric database to more clinically acceptable levels. The aim of this study was to develop a cloud-based cluster in order to distribute the computational load to several GPU-enabled nodes and accelerate SQUAREMR. This would accommodate high demands for computational resources without the need for major upfront equipment investment. Moreover, the parameter space explored by the simulations was optimized in order to reduce the computational load without compromising the T1 estimates compared to a non-optimized parameter space approach. A cloud-based cluster with 16 nodes resulted in a speedup of up to 13.5 times compared to a single-node execution. Finally, the optimized parameter set approach allowed for an execution time of 28 s using the 16-node cluster, without compromising the T1 estimates by more than 10 ms.
The developed cloud-based cluster and the optimization of the parameter set reduced the execution time of the simulations involved in constructing the SQUAREMR multi-parametric database, thus bringing SQUAREMR's applicability within time frames that would likely be acceptable in the clinic. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pang, Xiaoying; Rybarcyk, Larry
HPSim is a GPU-accelerated online multi-particle beam dynamics simulation tool for ion linacs. It was originally developed for use on the Los Alamos 800-MeV proton linac. It is a "z-code" that contains typical linac beam transport elements. The linac RF-gap transformation utilizes transit-time factors to calculate the beam acceleration therein. The space-charge effects are computed using the 2D SCHEFF (Space CHarge EFFect) algorithm, which calculates the radial and longitudinal space-charge forces for cylindrically symmetric beam distributions. Other space-charge routines to be incorporated include the 3D PICNIC and a 3D Poisson solver. HPSim can simulate beam dynamics in drift tube linacs (DTLs) and coupled cavity linacs (CCLs). Elliptical superconducting cavity (SC) structures will also be incorporated into the code. The computational core of the code is written in C++ and accelerated using the NVIDIA CUDA technology. Users access the core code, which is wrapped in Python/C APIs, via Python scripts that enable ease of use and automation of the simulations. The overall linac description, including the EPICS PV machine control parameters, is kept in an SQLite database that also contains the calibration and conversion factors required to transform the machine set points into model values used in the simulation.
NASA Astrophysics Data System (ADS)
Cara, Javier
2016-05-01
Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretical framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (as in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not generated much interest in the technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
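The eigenvalue route from a discrete-time state matrix to natural frequencies and damping ratios can be sketched as follows. The example matrix is constructed directly from a known 5 Hz, 2% damped mode so the recovery can be checked; modal vectors and modal masses, the paper's main contribution, are omitted from this sketch.

```python
import numpy as np

def modal_from_discrete(Ad, dt):
    """Natural frequencies (rad/s) and damping ratios from the eigenvalues
    mu of a discrete-time state matrix, via lambda = ln(mu)/dt, then
    wn = |lambda| and zeta = -Re(lambda)/|lambda|."""
    mu = np.linalg.eigvals(Ad)
    lam = np.log(mu) / dt
    wn = np.abs(lam)
    zeta = -lam.real / wn
    return wn, zeta

# Discrete state matrix of a 5 Hz, 2% damped mode, built directly from
# the known continuous eigenvalues -zeta*wn +/- i*wd for this check.
wn_true, z_true, dt = 2 * np.pi * 5.0, 0.02, 0.01
wd = wn_true * np.sqrt(1 - z_true**2)
r, phi = np.exp(-z_true * wn_true * dt), wd * dt
Ad = r * np.array([[np.cos(phi), -np.sin(phi)],
                   [np.sin(phi),  np.cos(phi)]])
wn, zeta = modal_from_discrete(Ad, dt)
```

The principal branch of the complex logarithm suffices as long as the mode's damped frequency satisfies wd·dt < π, i.e. the sampling respects Nyquist.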
HF-START: A Regional Radio Propagation Simulator
NASA Astrophysics Data System (ADS)
Hozumi, K.; Maruyama, T.; Saito, S.; Nakata, H.; Rougerie, S.; Yokoyama, T.; Jin, H.; Tsugawa, T.; Ishii, M.
2017-12-01
HF-START (HF Simulator Targeting for All-users' Regional Telecommunications) is a user-friendly simulator developed to meet the needs of space weather users. Prediction of communications failure due to space weather disturbances is of high priority. The number of space weather users from various backgrounds with high economic impact, e.g. airlines, telecommunication companies, GPS-related companies, insurance companies, and the international amateur radio union, has recently increased. Space weather information provided by the Space Weather Information Center of NICT is, however, too technical to be understood and effectively used by these users. To overcome this issue, we translate the research-level data into user-level data based on users' needs and provide immediately usable data. HF-START is positioned to be a space weather product, beyond the laboratory, based truly on users' needs. It originally targets radio waves in the HF band (3-30 MHz), but higher frequencies up to L band are planned to be covered. Regional ionospheric data in Japan and Southeast Asia are employed as the reflector for skywave-mode propagation. The GAIA (Ground-to-topside model of Atmosphere and Ionosphere for Aeronomy) model will be used as the ionospheric input for global simulation. To evaluate HF-START, an evaluation campaign for the Japan region will be launched in the coming months. If the campaign succeeds, it will be expanded to the Southeast Asia region as well. The final goal of HF-START is to provide the necessary radio parameters in near-realtime, as well as warning messages of radio communications failure, to radio and space weather users.
Parameter Studies, time-dependent simulations and design with automated Cartesian methods
NASA Technical Reports Server (NTRS)
Aftosmis, Michael
2005-01-01
Over the past decade, NASA has made a substantial investment in developing adaptive Cartesian grid methods for aerodynamic simulation. Cartesian-based methods played a key role in both the Space Shuttle Accident Investigation and in NASA's return-to-flight activities. The talk will provide an overview of recent technological developments, focusing on the generation of large-scale aerodynamic databases, automated CAD-based design, and time-dependent simulations of bodies in relative motion. Automation, scalability and robustness underlie all of these applications, and research in each of these topics will be presented.
Study of Far—Field Directivity Pattern for Linear Arrays
NASA Astrophysics Data System (ADS)
Ana-Maria, Chiselev; Luminita, Moraru; Laura, Onose
2011-10-01
A model to calculate the far-field directivity pattern is developed in this paper. Based on this model, the three-dimensional beam pattern is introduced and analyzed in order to investigate the geometric parameters of linear arrays and their influence on the directivity pattern. Simulations in the azimuthal plane are made to highlight the influence of transducer parameters, including the number of elements and the inter-element spacing. These parameters are important factors that influence the directivity pattern and the appearance of side lobes for linear arrays.
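The dependence on element count and spacing can be illustrated with the standard uniform-linear-array factor (a textbook formula, not necessarily the paper's specific model): the pattern is the coherent sum of per-element phase terms, normalized by the element count.

```python
import numpy as np

def array_factor(n_elem, d_over_lambda, theta):
    """Normalized far-field array factor of a uniform linear array;
    theta is measured from broadside, in radians."""
    psi = 2 * np.pi * d_over_lambda * np.sin(np.atleast_1d(theta))
    n = np.arange(n_elem)
    af = np.exp(1j * np.outer(psi, n)).sum(axis=1)
    return np.abs(af) / n_elem

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
af8 = array_factor(8, 0.5, theta)    # 8 elements, half-wavelength spacing
```

Increasing the number of elements narrows the main lobe, and spacing beyond half a wavelength introduces grating lobes, which is exactly the side-lobe behavior the simulations examine.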
Development and Simulation Studies of a Novel Electromagnetics Code
2011-10-20
The rf photoinjector beam parameters of the BNL 2.856 GHz and the ANL AWA 1.3 GHz guns are given in Table 3.1. The space-charge fields are numerically computed with the parameters of the BNL 2.856 GHz gun, with example field plots including a 3D plot of Er.
An optimal beam alignment method for large-scale distributed space surveillance radar system
NASA Astrophysics Data System (ADS)
Huang, Jian; Wang, Dongya; Xia, Shuangzhi
2018-06-01
Large-scale distributed space surveillance radar is very important ground-based equipment for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, with thousands of kilometers between the sites of the distributed radar system, optimally aligning the narrow Transmitting/Receiving (T/R) beams over such a large volume poses a considerable technical challenge in the space surveillance area. Based on the common coordinate transformation model and the radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using the direction angles, which can visually describe and assess the beam alignment performance. Subsequently, optimal mathematical models for the orientation angle of the antenna array, the site location and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our method, which can significantly support the construction of LEO space debris surveillance equipment.
Sun, Li; Hernandez-Guzman, Jessica; Warncke, Kurt
2009-01-01
Electron spin echo envelope modulation (ESEEM) is a technique of pulsed electron paramagnetic resonance (EPR) spectroscopy. The analysis of ESEEM data to extract information about the nuclear and electronic structure of a disordered (powder) paramagnetic system requires accurate and efficient numerical simulations. A single coupled nucleus of known nuclear g value (gN) and spin I=1 can have up to eight adjustable parameters in the nuclear part of the spin Hamiltonian. We have developed OPTESIM, an ESEEM simulation toolbox, for automated numerical simulation of powder two- and three-pulse one-dimensional ESEEM for an arbitrary number (N) and type (I, gN) of coupled nuclei, and arbitrary mutual orientations of the hyperfine tensor principal axis systems for N>1. OPTESIM is implemented in the Matlab environment and includes the following features: (1) a fast algorithm for translation of the spin Hamiltonian into simulated ESEEM, (2) different optimization methods that can be hybridized to achieve an efficient coarse-to-fine-grained search of the parameter space and convergence to a global minimum, (3) statistical analysis of the simulation parameters, which allows the identification of simultaneous confidence regions at specific confidence levels. OPTESIM also includes a geometry-preserving spherical averaging algorithm as the default for N>1, and global optimization over multiple experimental conditions, such as the dephasing time for three-pulse ESEEM and external magnetic field values. Application examples for simulation of 14N coupling (N=1, N=2) in biological and chemical model paramagnets are included. Automated, optimized simulations using OPTESIM converge on dramatically shorter time scales relative to manual simulations. PMID:19553148
NASA Astrophysics Data System (ADS)
Badawy, B.; Fletcher, C. G.
2017-12-01
The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=2^20) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameter values, and hence of their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
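The emulator-plus-permutation-importance workflow can be illustrated with a pure-NumPy stand-in: fit a cheap surrogate on sampled parameter sets, then score each parameter by the skill lost when its column is randomly permuted. The data, the linear surrogate (standing in for SVR), and all coefficients are invented for illustration; a real application would use the CLASS outputs and an SVR/RF pair.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setting: 200 "parameter sets", 3 parameters, one output that
# depends strongly on x0, weakly on x1, and not at all on x2.
X = rng.random((200, 3))
y = 5.0 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * rng.normal(size=200)

# Cheap stand-in for the SVR emulator: linear least squares with intercept.
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(200)], y, rcond=None)
predict = lambda A: np.c_[A, np.ones(len(A))] @ coef

def permutation_importance(X, y, predict, n_rep=20):
    """Mean increase in MSE when each column is shuffled in turn."""
    base = np.mean((predict(X) - y) ** 2)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_rep):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            imp[j] += np.mean((predict(Xp) - y) ** 2) - base
    return imp / n_rep

imp = permutation_importance(X, y, predict)
```

Permuting a column destroys its relationship with the output while preserving its marginal distribution, so the resulting skill drop ranks parameters by influence.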
Ground-based testing of the dynamics of flexible space structures using band mechanisms
NASA Technical Reports Server (NTRS)
Yang, L. F.; Chew, Meng-Sang
1991-01-01
A suspension system based on a band mechanism is studied to provide free-free conditions for ground-based validation testing of flexible space structures. The band mechanism consists of a noncircular disk with a convex profile, preloaded by torsional springs at its center of rotation so that static equilibrium of the test structure is maintained at any vertical location; the gravitational force is directly counteracted during dynamic testing of the space structure. The noncircular disk within the suspension system can remain unchanged for test articles with different weights, as long as the torsional spring is replaced to maintain the originally designed frequency ratio W/k_s. Simulations of test articles modeled as lumped-parameter as well as continuous-parameter systems are also presented.
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
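DELSA's core idea, a derivative-based first-order variance decomposition evaluated at many points across the parameter space, can be sketched as follows. The two-parameter toy model, prior variances, and sample ranges are illustrative assumptions, not the paper's hydrologic models; the interaction term is included so that parameter importance genuinely varies across the space.

```python
import random

random.seed(7)

def model(tau, k):
    # Toy response with an interaction: tau dominates only where 2*tau + k < 1.
    return tau * (1.0 + k) + 0.5 * k * k

def delsa(model, samples, var, h=1e-6):
    """First-order local variance decomposition at each sampled point.

    At each point, S_j = (dy/dtheta_j)^2 * var_j / sum_k (dy/dtheta_k)^2 * var_k,
    with derivatives taken by forward finite differences.
    """
    out = []
    for theta in samples:
        grads = []
        for j in range(len(theta)):
            up = list(theta)
            up[j] += h
            grads.append((model(*up) - model(*theta)) / h)
        tot = sum(g * g * v for g, v in zip(grads, var))
        out.append([g * g * v / tot for g, v in zip(grads, var)])
    return out

samples = [(random.uniform(0, 1), random.uniform(0, 2)) for _ in range(500)]
S = delsa(model, samples, var=[1 / 12, 4 / 12])  # variances of the uniform priors
mean_S_tau = sum(s[0] for s in S) / len(S)
frac_tau_unimportant = sum(s[0] < 0.5 for s in S) / len(S)
```

Because the sensitivities are computed pointwise, one can report not just a mean importance but the fraction of parameter space where a parameter matters, which is the kind of statement the abstract makes about the runoff time delay.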
Zhang, Nan; Zhou, Peiheng; Cheng, Dengmu; Weng, Xiaolong; Xie, Jianliang; Deng, Longjiang
2013-04-01
We present the simulation, fabrication, and characterization of a dual-band metamaterial absorber in the mid-infrared regime. Two pairs of circular-patterned metal-dielectric stacks are employed to excite the dual-band absorption peaks. Dielectric characteristics of the dielectric spacing layer determine energy dissipation in each resonant stack, i.e., dielectric or ohmic loss. By controlling material parameters, both mechanisms are introduced into our structure. Up to 98% absorption is obtained at 9.03 and 13.32 μm in the simulation, which is in reasonable agreement with experimental results. The proposed structure holds promise for various applications, e.g., thermal radiation modulators and multicolor infrared focal plane arrays.
Characterization and modeling of radiation effects NASA/MSFC semiconductor devices
NASA Technical Reports Server (NTRS)
Kerns, D. V., Jr.; Cook, K. B., Jr.
1978-01-01
A literature review of the near-Earth trapped radiation of the Van Allen Belts, the radiation within the solar system resulting from the solar wind, and the cosmic radiation levels of deep space showed that a reasonable simulation of space radiation, particularly the Earth orbital environment, could be achieved in the laboratory by proton bombardment. A 3 MeV proton accelerator was used to irradiate CMOS integrated circuits fabricated from three different processes. The drain current and output voltage for three inverters were recorded as the input voltage was swept from zero to ten volts after each successive irradiation. Device parameters were extracted. Possible damage mechanisms are discussed and recommendations for improved radiation hardness are suggested.
Guo, Zhun; Wang, Minghuai; Qian, Yun; ...
2014-08-13
In this study, we investigate the sensitivity of simulated shallow cumulus and stratocumulus clouds to selected tunable parameters of Cloud Layers Unified by Binormals (CLUBB) in the single column version of Community Atmosphere Model version 5 (SCAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space, and a generalized linear model is adopted to study the responses of simulated cloud fields to tunable parameters. One stratocumulus and two shallow convection cases are configured at both coarse and fine vertical resolutions in this study. Our results show that most of the variance in simulated cloud fields can be explained by a small number of tunable parameters. The parameters related to Newtonian and buoyancy-damping terms of total water flux are found to be the most influential parameters for stratocumulus. For shallow cumulus, the most influential parameters are those related to skewness of vertical velocity, reflecting the strong coupling between cloud properties and dynamics in this regime. The influential parameters in the stratocumulus case are sensitive to the choice of the vertical resolution, while little sensitivity is found for the shallow convection cases, as eddy mixing length (or dissipation time scale) plays a more important role and depends more strongly on the vertical resolution in stratocumulus than in shallow convection. The influential parameters remain almost unchanged when the number of tunable parameters increases from 16 to 35. This study improves understanding of the CLUBB behavior associated with parameter uncertainties.
NASA Astrophysics Data System (ADS)
Gupta, Amit; Shaina, Nagpal
2017-08-01
Intersymbol interference and attenuation of the signal are two major factors affecting the quality of transmission in a Free Space Optical (FSO) communication link. In this paper, the impact of these factors on the FSO communication link is analysed for delivering high-quality data transmission. The performance of the link is investigated under the influence of an amplifier in the link. Performance parameters of the link, such as minimum bit error rate, received signal power, and Quality factor, are examined by employing an erbium-doped fibre amplifier in the link. The effects of the amplifier are visualized through the amount of received power. Further, the link is simulated for moderate weather conditions at various attenuation levels on the transmitted signal. Finally, the designed link is analysed in adverse weather conditions using a high-power laser source for optimum performance.
Heat transfer measurements for Stirling machine cylinders
NASA Technical Reports Server (NTRS)
Kornhauser, Alan A.; Kafka, B. C.; Finkbeiner, D. L.; Cantelmi, F. C.
1994-01-01
The primary purpose of this study was to measure the effects of inflow-produced turbulence on heat transfer in Stirling machine cylinders. A secondary purpose was to provide new experimental information on heat transfer in gas springs without inflow. The apparatus for the experiment consisted of a varying-volume piston-cylinder space connected to a fixed-volume space by an orifice. The orifice size could be varied to adjust the level of inflow-produced turbulence, or the orifice plate could be removed completely so as to merge the two spaces into a single gas-spring space. Speed, cycle mean pressure, overall volume ratio, and varying-volume-space clearance ratio could also be adjusted. Volume, pressure in both spaces, and local heat flux at two locations were measured. The pressure and volume measurements were used to calculate area-averaged heat flux, heat transfer hysteresis loss, and other heat-transfer-related effects. Experiments in the one-space arrangement extended the range of previous gas spring tests to lower volume ratio and higher nondimensional speed. The tests corroborated previous results and showed that analytic models for heat transfer and loss based on volume ratio approaching 1 were valid for volume ratios ranging from 1 to 2, a range covering most gas springs in Stirling machines. Data from experiments in the two-space arrangement were first analyzed by lumping the two spaces together and examining total loss and averaged heat transfer as a function of an overall nondimensional parameter. Heat transfer and loss were found to be significantly increased by inflow-produced turbulence. These increases could be modeled by appropriate adjustment of empirical coefficients in an existing semi-analytic model. An attempt was made to use an inverse, parameter-optimization procedure to find the heat transfer in each of the two spaces.
This procedure was successful in retrieving this information from simulated pressure-volume data with artificially generated noise, but it failed with the actual experimental data. This is evidence that the models used in the parameter optimization procedure (and to generate the simulated data) were not correct. Data from the surface heat flux sensors indicated that the primary shortcoming of these models was that they assumed turbulence levels to be constant over the cycle. Sensor data in the varying volume space showed a large increase in heat flux, probably due to turbulence, during the expansion stroke.
Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald
2018-10-01
In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, we study the sensitivity of the results to the shape of the probability distribution function (empirical distribution functions versus fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter, as well as the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the epistemic uncertainty is described here using uncertain moments (mean, variance), while the distribution itself represents the aleatory uncertainty of the parameter. The results show that the solution space of second-order Monte Carlo simulation is much larger than that of first-order Monte Carlo simulation; the influence of the epistemic uncertainty of a radioecological parameter on the output is therefore much larger than that of its aleatory uncertainty. Parameter interactions are only of significant influence in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters.
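The first- versus second-order Monte Carlo distinction drawn above can be illustrated with a minimal sketch: a lognormal parameter whose moments are themselves uncertain. The specific distributions and ranges below are hypothetical; the point is only that drawing the moments in an outer (epistemic) loop widens the output distribution relative to fixing them.

```python
import random
import statistics

random.seed(42)

def aleatory_draw(mu, sigma):
    # One lognormal draw representing the parameter's natural variability.
    return random.lognormvariate(mu, sigma)

# First-order MC: distribution moments fixed at their best estimates.
first = [aleatory_draw(0.0, 0.5) for _ in range(20000)]

# Second-order MC: outer loop draws the uncertain moments (epistemic),
# inner loop draws parameter values (aleatory) given those moments.
second = []
for _ in range(200):
    mu = random.gauss(0.0, 0.3)       # assumed epistemic uncertainty on the mean
    sigma = random.uniform(0.3, 0.7)  # assumed epistemic uncertainty on the spread
    second.extend(aleatory_draw(mu, sigma) for _ in range(100))

spread1 = statistics.pstdev(first)
spread2 = statistics.pstdev(second)
```

The second-order ensemble is a mixture over plausible moment values, so its solution space is systematically wider than the first-order one, matching the abstract's conclusion.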
Urban dispersion and air quality simulation models applied at various horizontal scales require different levels of fidelity for specifying the characteristics of the underlying surfaces. As the modeling scales approach the neighborhood level (~1 km horizontal grid spacing), the...
Yang, Ben; Qian, Yun; Berg, Larry K.; ...
2016-07-21
We evaluate the sensitivity of simulated turbine-height wind speeds to 26 parameters within the Mellor–Yamada–Nakanishi–Niino (MYNN) planetary boundary-layer scheme and MM5 surface-layer scheme of the Weather Research and Forecasting model over an area of complex terrain. An efficient sampling algorithm and generalized linear model are used to explore the multiple-dimensional parameter space and quantify the parametric sensitivity of simulated turbine-height wind speeds. The results indicate that most of the variability in the ensemble simulations is due to parameters related to the dissipation of turbulent kinetic energy (TKE), Prandtl number, turbulent length scales, surface roughness, and the von Kármán constant. The parameter associated with the TKE dissipation rate is found to be most important, and a larger dissipation rate produces larger hub-height wind speeds. A larger Prandtl number results in smaller nighttime wind speeds. Increasing surface roughness reduces the frequencies of both extremely weak and strong airflows, implying a reduction in the variability of wind speed. All of the above parameters significantly affect the vertical profiles of wind speed and the magnitude of wind shear. Lastly, the relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability.
Design by Dragging: An Interface for Creative Forward and Inverse Design with Simulation Ensembles
Coffey, Dane; Lin, Chi-Lun; Erdman, Arthur G.; Keefe, Daniel F.
2014-01-01
We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via “tugging” and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users’ drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation. PMID:24051845
NASA Astrophysics Data System (ADS)
Kandouci, Chahinaz; Djebbari, Ali
2018-04-01
A new family of two-dimensional optical hybrid codes, which employs zero cross-correlation (ZCC) codes constructed by balanced incomplete block design (BIBD) as both the time-spreading and wavelength-hopping patterns, is used in this paper. The obtained codes have off-peak autocorrelation and cross-correlation values equal to zero and unity, respectively. The work in this paper is a computer experiment performed using the Optisystem 9.0 software program as a simulator to determine the performance limitations of a wavelength hopping/time spreading (WH/TS) OCDMA system. The system parameters considered in this work are the optical fiber length (transmission distance), the bit rate, the chip spacing, and the transmitted power. This paper shows over what range of these parameters the system sustains sufficient performance (BER ≤ 10^-9, Q ≥ 6).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belosi, Maria F.; Fogliata, Antonella, E-mail: antonella.fogliata-cozzi@eoc.ch, E-mail: afc@iosi.ch; Cozzi, Luca
2014-05-15
Purpose: Phase-space files for Monte Carlo simulation of the Varian TrueBeam beams have been made available by Varian. The aim of this study is to evaluate the accuracy of the distributed phase-space files for flattening filter free (FFF) beams, against experimental measurements from ten TrueBeam Linacs. Methods: The phase-space files have been used as input in PRIMO, a recently released Monte Carlo program based on the PENELOPE code. Simulations of 6 and 10 MV FFF were computed in a virtual water phantom for field sizes 3 × 3, 6 × 6, and 10 × 10 cm² using 1 × 1 × 1 mm³ voxels and for 20 × 20 and 40 × 40 cm² with 2 × 2 × 2 mm³ voxels. The particles contained in the initial phase-space files were transported downstream to a plane just above the phantom surface, where a subsequent phase-space file was tallied. Particles were transported downstream of this second phase-space file to the water phantom. Experimental data consisted of depth doses and profiles at five different depths acquired at SSD = 100 cm (seven datasets) and SSD = 90 cm (three datasets). Simulations and experimental data were compared in terms of dose difference. Gamma analysis was also performed using 1%, 1 mm and 2%, 2 mm criteria of dose-difference and distance-to-agreement, respectively. Additionally, the parameters characterizing the dose profiles of unflattened beams were evaluated for both measurements and simulations. Results: Analysis of depth dose curves showed that dose differences increased with increasing field size and depth; this effect might be partly explained by an underestimation of the primary beam energy used to compute the phase-space files. Average dose differences reached 1% for the largest field size. Lateral profiles presented dose differences well within 1% for fields up to 20 × 20 cm², while the discrepancy increased toward 2% in the 40 × 40 cm² cases.
Gamma analysis resulted in an agreement of 100% when a 2%, 2 mm criterion was used, with the only exception of the 40 × 40 cm² field (∼95% agreement). With the more stringent criterion of 1%, 1 mm, the agreement reduced to almost 95% for field sizes up to 10 × 10 cm², and was worse for larger fields. Unflatness and slope FFF-specific parameters are in line with the possible energy underestimation of the simulated results relative to experimental data. Conclusions: The agreement between Monte Carlo simulations and experimental data proved that the evaluated Varian phase-space files for FFF beams from TrueBeam can be used as radiation sources for accurate Monte Carlo dose estimation, especially for field sizes up to 10 × 10 cm², which is the range of field sizes most commonly used in combination with the FFF, high-dose-rate beams.
Wang, Chunhao; Yin, Fang-Fang; Kirkpatrick, John P; Chang, Zheng
2017-08-01
To investigate the feasibility of using undersampled k-space data and an iterative image reconstruction method with a total generalized variation penalty in the quantitative pharmacokinetic analysis of clinical brain dynamic contrast-enhanced magnetic resonance imaging. Eight brain dynamic contrast-enhanced magnetic resonance imaging scans were retrospectively studied. Two k-space sparse sampling strategies were designed to achieve a simulated image acquisition acceleration factor of 4. They are (1) a golden-ratio-optimized 32-ray radial sampling profile and (2) a Cartesian-based random sampling profile with spatiotemporal-regularized sampling density constraints. The undersampled data were reconstructed to yield images using the investigated reconstruction technique. In quantitative pharmacokinetic analysis on a voxel-by-voxel basis, the rate constant K trans in the extended Tofts model and blood flow F B and blood volume V B from the 2-compartment exchange model were analyzed. Finally, the quantitative pharmacokinetic parameters calculated from the undersampled data were compared with the corresponding values calculated from the fully sampled data. To quantify the accuracy of each parameter calculated using the undersampled data, the error in volume mean, total relative error, and cross-correlation were calculated. The pharmacokinetic parameter maps generated from the undersampled data appeared comparable to the ones generated from the original full sampling data. Most derived error-in-volume-mean values in the region of interest were about 5% or lower, and the average error in volume mean of all parameter maps generated through either sampling strategy was about 3.54%. The average total relative error of all parameter maps in the region of interest was about 0.115, and the average cross-correlation of all parameter maps in the region of interest was about 0.962.
All investigated pharmacokinetic parameters showed no significant differences between the results from the original data and the reduced sampling data. With sparsely sampled k-space data simulating an acquisition accelerated by a factor of 4, the investigated total generalized variation-based iterative image reconstruction method can accurately estimate dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic parameters for reliable clinical application.
Synchronization and chaotic dynamics of coupled mechanical metronomes
NASA Astrophysics Data System (ADS)
Ulrichs, Henning; Mann, Andreas; Parlitz, Ulrich
2009-12-01
Synchronization scenarios of coupled mechanical metronomes are studied by means of numerical simulations showing the onset of synchronization for two, three, and 100 globally coupled metronomes in terms of Arnol'd tongues in parameter space and a Kuramoto transition as a function of coupling strength. Furthermore, we study the dynamics of metronomes where overturning is possible. In this case hyperchaotic dynamics associated with some diffusion process in configuration space is observed, indicating the potential complexity of metronome dynamics.
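The onset of synchronization described above can be reproduced qualitatively with the standard Kuramoto phase model for globally coupled oscillators. This is a phase-reduction sketch with assumed frequency spread and coupling values, not a mechanical metronome model; the coherence r plays the role of the order parameter whose jump marks the Kuramoto transition.

```python
import math
import random

random.seed(3)

def kuramoto_order(K, n=100, dt=0.05, steps=2000):
    """Simulate n globally coupled phase oscillators; return final coherence r."""
    omega = [random.gauss(1.0, 0.1) for _ in range(n)]     # natural frequencies
    theta = [random.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        # Mean field: r * e^{i psi} = (1/n) * sum_j e^{i theta_j}
        cx = sum(math.cos(t) for t in theta) / n
        sx = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(cx, sx), math.atan2(sx, cx)
        # Each oscillator is pulled toward the mean phase with strength K*r.
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    cx = sum(math.cos(t) for t in theta) / n
    sx = sum(math.sin(t) for t in theta) / n
    return math.hypot(cx, sx)

r_weak = kuramoto_order(K=0.01)   # below the critical coupling: incoherent
r_strong = kuramoto_order(K=1.0)  # well above it: nearly full synchrony
```

Sweeping K between these extremes traces out the transition from incoherence (r of order 1/sqrt(n)) to collective synchrony (r near 1) as a function of coupling strength.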
NASA Astrophysics Data System (ADS)
Spangehl, Thomas; Schröder, Marc; Bodas-Salcedo, Alejandro; Glowienka-Hense, Rita; Hense, Andreas; Hollmann, Rainer; Dietzsch, Felix
2017-04-01
Decadal climate predictions are commonly evaluated focusing on geophysical parameters such as temperature, precipitation or wind speed using observational datasets and reanalysis. Alternatively, satellite based radiance measurements combined with satellite simulator techniques to deduce virtual satellite observations from the numerical model simulations can be used. The latter approach enables an evaluation in the instrument's parameter space and has the potential to reduce uncertainties on the reference side. Here we present evaluation methods focusing on forward operator techniques for the Special Sensor Microwave Imager (SSM/I). The simulator is developed as an integrated part of the CFMIP Observation Simulator Package (COSP). On the observational side the SSM/I and SSMIS Fundamental Climate Data Record (FCDR) released by CM SAF (http://dx.doi.org/10.5676/EUM_SAF_CM/FCDR_MWI/V002) is used, which provides brightness temperatures for different channels and covers the period from 1987 to 2013. The simulator is applied to hindcast simulations performed within the MiKlip project (http://fona-miklip.de) which is funded by the BMBF (Federal Ministry of Education and Research in Germany). Probabilistic evaluation results are shown based on a subset of the hindcast simulations covering the observational period.
Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E
2017-06-01
The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque into intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods: SDSS was more robust under high-noise conditions, converging where others failed, and more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change muscle tone.
Emulation: A fast stochastic Bayesian method to eliminate model space
NASA Astrophysics Data System (ADS)
Roberts, Alan; Hobbs, Richard; Goldstein, Michael
2010-05-01
Joint inversion of large 3D datasets has been a goal of geophysicists ever since such datasets first started to be produced. There are two broad approaches to this kind of problem: traditional deterministic inversion schemes and more recently developed Bayesian search methods, such as MCMC (Markov chain Monte Carlo). However, both kinds of scheme have proved prohibitively expensive, in both computing power and time, due to the normally very large model space which needs to be searched using forward model simulators that take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as in astronomy, where the evolution of the universe has been modelled using this technique, and in the petroleum industry, where history matching of hydrocarbon reservoirs is carried out. The method of emulation involves building a fast-to-compute, uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs with a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use the emulator to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset.
We can thus much more tightly constrain the input model space for a deterministic inversion or MCMC method. By using this technique jointly on several datasets (specifically seismic, gravity, and magnetotelluric (MT) data describing the same region), we can include in our modelling the uncertainties in the data measurements, the relationships between the various physical parameters involved, and the model representation uncertainty, and at the same time further reduce the range of plausible models to several percent of the original model space. Being stochastic in nature, the output posterior parameter distributions also allow our understanding of, and beliefs about, a geological region to be objectively updated, with full assessment of uncertainties; the emulator is therefore also an inversion-type tool in its own right, with the advantage (as with any Bayesian method) that our uncertainties from all sources (both data and model) can be fully evaluated.
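The emulator-based screening described above can be sketched with a one-parameter toy: fit a cheap surrogate to a handful of "expensive" runs, calibrate its error against the simulator, and discard every candidate model whose implausibility exceeds three standard deviations. The simulator, the piecewise-linear surrogate (standing in for a proper statistical emulator), the leave-one-out error calibration, and the threshold are all illustrative assumptions.

```python
import math

def simulator(x):
    # Stand-in for an expensive forward model run.
    return 3.0 * x * x - 2.0 * x + 1.0 + 0.05 * math.sin(10 * x)

# A handful of expensive runs over the prior model space [0, 1].
design = [i / 10 for i in range(11)]
runs = [simulator(v) for v in design]

def emulator(x):
    # Cheap piecewise-linear surrogate built from the stored runs.
    i = min(int(x * 10), 9)
    t = x * 10 - i
    return runs[i] * (1 - t) + runs[i + 1] * t

# Calibrate the emulator error from leave-one-out interpolation residuals.
emu_err = max(abs(runs[i] - 0.5 * (runs[i - 1] + runs[i + 1]))
              for i in range(1, 10))

z, obs_err = simulator(0.7), 0.01   # "observed" datum and its uncertainty

# Implausibility screening: keep only models the emulator cannot rule out.
candidates = [i / 9999 for i in range(10000)]
keep = [x for x in candidates
        if abs(z - emulator(x)) / math.sqrt(emu_err**2 + obs_err**2) < 3.0]
frac_kept = len(keep) / len(candidates)
```

Ten thousand candidate models are screened with only eleven simulator calls; only the small fraction that survives would be passed on to a full deterministic inversion or MCMC run.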
Fast State-Space Methods for Inferring Dendritic Synaptic Connectivity
2013-08-08
the results of 100 simulations with the same parameters as in Figures 4 and 5. As expected, the LARS/LARS+ results are (downward) biased and have low...with a strength slightly biased toward lower values. To measure the variability of the results across the 20 simulations , we computed for each...are downward biased and have low variance, and the OLS results are unbiased but have high variance. Note that for LARS+ the values above the median are
Nonequilibrium Phase Transitions in Supercooled Water
NASA Astrophysics Data System (ADS)
Limmer, David; Chandler, David
2012-02-01
We present results of a simulation study of water driven out of equilibrium. Using transition path sampling, we can probe stationary path distributions parameterized by order parameters that are extensive in space and time. We find that by coupling external fields to these parameters, we can drive water through a first-order dynamical phase transition into amorphous ice. By varying the initial equilibrium distributions, we can probe pathways for the creation of amorphous ices of low and high densities.
Modeling space-time correlations of velocity fluctuations in wind farms
NASA Astrophysics Data System (ADS)
Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael
2018-07-01
An analytical model for the streamwise velocity space-time correlations in turbulent flows is derived and applied to the special case of velocity fluctuations in large wind farms. The model is based on the Kraichnan-Tennekes random sweeping hypothesis, capturing the decorrelation in time while including a mean wind velocity in the streamwise direction. In the resulting model, the streamwise velocity space-time correlation is expressed as a convolution of the pure space correlation with an analytical temporal decorrelation kernel. Hence, the spatio-temporal structure of velocity fluctuations in wind farms can be derived from the spatial correlations only. We then explore the applicability of the model to predict spatio-temporal correlations in turbulent flows in wind farms. Comparisons of the model with data from a large eddy simulation of flow in a large, spatially periodic wind farm are performed, where needed model parameters such as spatial and temporal integral scales and spatial correlations are determined from the large eddy simulation. Good agreement is obtained between the model and large eddy simulation data showing that spatial data may be used to model the full temporal structure of fluctuations in wind farms.
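The model's central structure, the space-time correlation expressed as a convolution of the spatial correlation with a Gaussian sweeping-velocity kernel shifted by the mean advection, can be sketched numerically. The Gaussian spatial correlation and the parameter values below are assumptions chosen so that a closed form exists for checking; they are not fitted to any wind-farm data.

```python
import math

L, U, sig = 1.0, 5.0, 0.8   # integral length, mean advection, sweeping rms

def space_corr(r):
    # Assumed Gaussian spatial correlation of the streamwise velocity.
    return math.exp(-r * r / (2 * L * L))

def model_corr(r, tau, nv=2001, vmax=6.0):
    """Space-time correlation: convolve the spatial correlation with a
    Gaussian sweeping-velocity kernel, shifted by the mean advection U*tau."""
    if tau == 0:
        return space_corr(r)
    dv = 2 * vmax * sig / (nv - 1)
    acc = 0.0
    for k in range(nv):
        v = -vmax * sig + k * dv
        w = math.exp(-v * v / (2 * sig * sig)) / (sig * math.sqrt(2 * math.pi))
        acc += w * space_corr(r - (U + v) * tau) * dv
    return acc

def exact(r, tau):
    # Closed form for the Gaussian spatial correlation (used as a check).
    s2 = L * L + (sig * tau) ** 2
    return L / math.sqrt(s2) * math.exp(-(r - U * tau) ** 2 / (2 * s2))
```

The correlation peak travels downstream at the mean speed U while its height decays through the widening sweeping kernel, which is exactly the "spatial correlation plus temporal decorrelation kernel" decomposition the abstract describes.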
NASA Technical Reports Server (NTRS)
Randol, Brent M.; Christian, Eric R.
2016-01-01
A parametric study is performed using the electrostatic simulations of Randol and Christian (2014) in which the number density, n, and initial thermal speed, theta, are varied. The range of parameters covers an extremely broad plasma regime, all the way from the very weak coupling of space plasmas to the very strong coupling of solid plasmas. The first result is that simulations at the same Gamma(sub D), where Gamma(sub D) is the plasma coupling parameter, but at different combinations of n and theta, behave exactly the same. As a function of Gamma(sub D), the form of p(v), the velocity distribution function of v, the magnitude of v, the velocity vector, is studied. For intermediate to high Gamma(sub D), heating is observed in p(v) that obeys conservation of energy, and a suprathermal tail is formed, with a spectral index that depends on Gamma(sub D). For strong coupling (Gamma(sub D) much > 1), the form of the tail is v^(-5), consistent with the findings of Randol and Christian (2014). For weak coupling (Gamma(sub D) much < 1), no acceleration or heating occurs, as there is no free energy. The dependence on N, the number of particles in the simulation, is also explored. There is a subtle dependence in the index of the tail, such that v^(-5) appears to be the N approaches infinity limit.
NASA Astrophysics Data System (ADS)
Randol, Brent M.; Christian, Eric R.
2016-03-01
A parametric study is performed using the electrostatic simulations of Randol and Christian (2014) in which the number density, n, and initial thermal speed, θ, are varied. The range of parameters covers an extremely broad plasma regime, all the way from the very weak coupling of space plasmas to the very strong coupling of solid plasmas. The first result is that simulations at the same ΓD, where ΓD (∝ n^(1/3)θ^(-2)) is the plasma coupling parameter, but at different combinations of n and θ, behave exactly the same. As a function of ΓD, the form of p(v), the velocity distribution function of v, the magnitude of v, the velocity vector, is studied. For intermediate to high ΓD, heating is observed in p(v) that obeys conservation of energy, and a suprathermal tail is formed, with a spectral index that depends on ΓD. For strong coupling (ΓD≫1), the form of the tail is v^(-5), consistent with the findings of Randol and Christian (2014). For weak coupling (ΓD≪1), no acceleration or heating occurs, as there is no free energy. The dependence on N, the number of particles in the simulation, is also explored. There is a subtle dependence in the index of the tail, such that v^(-5) appears to be the N→∞ limit.
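The stated scaling ΓD ∝ n^(1/3)θ^(-2) implies that distinct (n, θ) pairs lying on the same scaling curve share one coupling parameter, which is the first result of the abstract (identical behavior at equal ΓD). A minimal sketch; the prefactor C is arbitrary and illustrative:

```python
def coupling(n, theta, C=1.0):
    """Plasma coupling parameter scaling Gamma_D ∝ n^(1/3) / theta^2.
    The prefactor C is an illustrative placeholder, not a physical constant."""
    return C * n ** (1.0 / 3.0) / theta ** 2

# Scaling 8x in density and sqrt(2)x in thermal speed leaves Gamma_D unchanged:
# (8n)^(1/3) = 2 n^(1/3) and (sqrt(2) theta)^2 = 2 theta^2 cancel.
g1 = coupling(1.0e6, 1.0)
g2 = coupling(8.0e6, 2.0 ** 0.5)
```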
NASA Astrophysics Data System (ADS)
Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice
2016-04-01
We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and gamma-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of thirteen radio surveys as well as the MSP birth rate in the Galaxy and the number of MSPs detected by Fermi. We explore various high-energy emission geometries like the slot gap, outer gap, two-pole caustic and pair-starved polar cap models. The parameters associated with the birth distributions for the mass accretion rate, magnetic field, and period distributions are well constrained. With the set of four free parameters, we employ Markov Chain Monte Carlo simulations to explore the model parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and gamma-ray pulsar characteristics. We estimate the contribution of MSPs to the diffuse gamma-ray background with a special focus on the Galactic Center. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).
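The Markov Chain Monte Carlo exploration of a small free-parameter set, as described above, can be sketched with a random-walk Metropolis sampler. The Gaussian target below is a placeholder (the real log-likelihood compares simulated and detected pulsar populations); the four-dimensional parameter vector mirrors the abstract's "set of four free parameters".

```python
import numpy as np

rng = np.random.default_rng(0)

def log_like(theta):
    """Placeholder log-likelihood: a Gaussian centered at 1 in every dimension."""
    return -0.5 * np.sum((theta - 1.0) ** 2)

def metropolis(theta0, n_steps=20000, step=0.5):
    """Random-walk Metropolis sampler over the parameter vector."""
    theta = np.asarray(theta0, dtype=float)
    ll = log_like(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.size)
        ll_prop = log_like(prop)
        if np.log(rng.random()) < ll_prop - ll:   # Metropolis accept/reject
            theta, ll = prop, ll_prop
        chain[i] = theta
    return chain

chain = metropolis([0.0, 0.0, 0.0, 0.0])   # four free parameters
```

The chain's long-run sample density approximates the posterior over the parameter space, which is what lets such studies report "well constrained" birth-distribution parameters.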
Space time modelling of air quality for environmental-risk maps: A case study in South Portugal
NASA Astrophysics Data System (ADS)
Soares, Amilcar; Pereira, Maria J.
2007-10-01
Since the 1960s, there has been a strong industrial development in the Sines area, on the southern Atlantic coast of Portugal, including the construction of an important industrial harbour and of mainly petrochemical and energy-related industries. These industries are nowadays responsible for substantial emissions of SO2, NOx, particles, VOCs and part of the ozone polluting the atmosphere. The major industries are spatially concentrated in a restricted area, very close to populated areas and natural resources such as those protected by the European Natura 2000 network. Air quality parameters are measured at the emissions' sources and at a few monitoring stations. Although air quality parameters are measured on an hourly basis, the lack of representativeness in space of these non-homogeneous phenomena makes even their representativeness in time questionable. Hence, in this study, the regional spatial dispersion of contaminants is also evaluated, using diffusive-sampler (Radiello Passive Sampler) campaigns during given periods. Diffusive samplers cover the entire space extensively, but just for a limited period of time. In the first step of this study, a space-time model of pollutants was built, based on a stochastic simulation (direct sequential simulation) with a local spatial trend. The spatial dispersion of the contaminants for a given period of time (corresponding to the exposure time of the diffusive samplers) was computed by ordinary kriging. Direct sequential simulation was applied to produce equiprobable spatial maps for each day of that period, using the kriged map as a spatial trend and the daily measurements of pollutants from the monitoring stations as hard data. In the second step, the following environmental risk and cost maps were computed from the set of simulated realizations of pollutants: (i) maps of the contribution of each emission to the pollutant concentration at any spatial location; (ii) costs of badly located monitoring stations.
Automated Knowledge Discovery From Simulators
NASA Technical Reports Server (NTRS)
Burl, Michael; DeCoste, Dennis; Mazzoni, Dominic; Scharenbroich, Lucas; Enke, Brian; Merline, William
2007-01-01
A computational method, SimLearn, has been devised to facilitate efficient knowledge discovery from simulators. Simulators are complex computer programs used in science and engineering to model diverse phenomena such as fluid flow, gravitational interactions, coupled mechanical systems, and nuclear, chemical, and biological processes. SimLearn uses active-learning techniques to efficiently address the "landscape characterization problem." In particular, SimLearn tries to determine which regions in "input space" lead to a given output from the simulator, where "input space" refers to an abstraction of all the variables going into the simulator, e.g., initial conditions, parameters, and interaction equations. Landscape characterization can be viewed as an attempt to invert the forward mapping of the simulator and recover the inputs that produce a particular output. Given that a single simulation run can take days or weeks to complete even on a large computing cluster, SimLearn attempts to reduce costs by reducing the number of simulations needed to effect discoveries. Unlike conventional data-mining methods that are applied to static predefined datasets, SimLearn involves an iterative process in which a most informative dataset is constructed dynamically by using the simulator as an oracle. On each iteration, the algorithm models the knowledge it has gained through previous simulation trials and then chooses which simulation trials to run next. Running these trials through the simulator produces new data in the form of input-output pairs. The overall process is embodied in an algorithm that combines support vector machines (SVMs) with active learning. SVMs use learning from examples (the examples are the input-output pairs generated by running the simulator) and a principle called maximum margin to derive predictors that generalize well to new inputs. 
In SimLearn, the SVM plays the role of modeling the knowledge that has been gained through previous simulation trials. Active learning is used to determine which new input points would be most informative if their output were known. The selected input points are run through the simulator to generate new information that can be used to refine the SVM. The process is then repeated. SimLearn carefully balances exploration (semi-randomly searching around the input space) versus exploitation (using the current state of knowledge to conduct a tightly focused search). During each iteration, SimLearn uses not one, but an ensemble of SVMs. Each SVM in the ensemble is characterized by different hyper-parameters that control various aspects of the learned predictor - for example, whether the predictor is constrained to be very smooth (nearby points in input space lead to similar output predictions) or whether the predictor is allowed to be "bumpy." The various SVMs will have different preferences about which input points they would like to run through the simulator next. SimLearn includes a formal mechanism for balancing the ensemble SVM preferences so that a single choice can be made for the next set of trials.
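The model-choose-simulate loop described above can be sketched with query-by-committee active learning. The real system uses an ensemble of SVMs; for a self-contained illustration the committee below is three k-nearest-neighbour classifiers with different k (a stand-in for SVMs with different smoothness hyper-parameters), and the "simulator" is a cheap analytic oracle. All names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(x):
    """Stand-in oracle: does this input produce the output of interest?"""
    return int(x[0] ** 2 + x[1] ** 2 < 0.5)

def knn_predict(X, y, q, k):
    """k-nearest-neighbour vote; one committee member per value of k."""
    idx = np.argsort(np.sum((X - q) ** 2, axis=1))[:k]
    return int(round(y[idx].mean()))

# Seed with a few random trials, then iterate: model -> choose -> simulate.
X = rng.uniform(-1.0, 1.0, size=(8, 2))
y = np.array([simulator(x) for x in X])
for _ in range(40):
    cands = rng.uniform(-1.0, 1.0, size=(64, 2))            # exploration pool
    votes = np.array([[knn_predict(X, y, q, k) for k in (1, 3, 5)] for q in cands])
    q = cands[np.argmax(votes.std(axis=1))]                 # max committee disagreement
    X = np.vstack([X, q])                                   # run the chosen trial
    y = np.append(y, simulator(q))

# Accuracy of the refined model on fresh random inputs.
test_pts = rng.uniform(-1.0, 1.0, size=(200, 2))
accuracy = np.mean([knn_predict(X, y, q, 3) == simulator(q) for q in test_pts])
```

Drawing candidates at random supplies the exploration half of the exploration/exploitation balance; picking the maximum-disagreement candidate supplies the exploitation half.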
[A dynamic model of the extravehicular activity space suit].
Yang, Feng; Yuan, Xiu-gan
2002-12-01
Objective. To establish a dynamic model of the space suit based on the particular configuration of the space suit. Method. The mass of the space suit components, moment of inertia, mobility of the joints of the space suit, as well as the suit-generated torques, were considered in this model. The expressions to calculate the moment of inertia were developed by simplifying the geometry of the space suit. A modified Preisach model was used to mathematically describe the hysteretic torque characteristics of joints in a pressurized space suit, and it was implemented numerically based on the observed suit parameters. Result. A dynamic model considering mass, moment of inertia and suit-generated torques was established. Conclusion. This dynamic model provides some elements for the dynamic simulation of astronaut extravehicular activity.
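A numerical Preisach-type hysteresis operator of the kind mentioned above can be sketched with a triangle of elementary relays, each switching up at threshold alpha and down at threshold beta ≤ alpha. The paper fits a modified Preisach model to measured suit-joint torques; the uniform weights and thresholds below are purely illustrative.

```python
import numpy as np

class Preisach:
    """Discrete Preisach operator: output is the mean of relay states over the
    admissible half-plane beta <= alpha (uniform weighting, illustrative)."""

    def __init__(self, n=20, lo=-1.0, hi=1.0):
        a, b = np.meshgrid(np.linspace(lo, hi, n), np.linspace(lo, hi, n))
        keep = b <= a                                # admissible (alpha, beta) pairs
        self.alpha, self.beta = a[keep], b[keep]
        self.state = -np.ones(self.alpha.size)       # all relays start "down"

    def __call__(self, u):
        self.state[u >= self.alpha] = 1.0            # relays switching up
        self.state[u <= self.beta] = -1.0            # relays switching down
        return self.state.mean()

model = Preisach()
up = [model(u) for u in np.linspace(-1.0, 1.0, 51)]    # ascending sweep
down = [model(u) for u in np.linspace(1.0, -1.0, 51)]  # descending sweep
# The two sweeps disagree at the same input, i.e. the output traces a
# hysteresis loop, which is the joint-torque behavior the model captures.
```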
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign together with satellite and reanalysis data are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multiple-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and first-order effects dominate compared to the interaction effects.
Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
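The variance-decomposition idea behind the multiple-way analysis of variance above can be sketched for two scheme groups: the total ensemble variance splits into between-group sums of squares for each factor plus a residual/interaction term. The synthetic two-factor layout below (stand-ins for, e.g., microphysics and convection groups) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
n_a, n_b = 6, 3                                  # e.g. 6 microphysics x 3 convection
effect_a = rng.normal(0.0, 2.0, n_a)             # strong factor
effect_b = rng.normal(0.0, 0.5, n_b)             # weak factor
y = effect_a[:, None] + effect_b[None, :] + rng.normal(0.0, 0.2, (n_a, n_b))

grand = y.mean()
ss_total = ((y - grand) ** 2).sum()
ss_a = n_b * ((y.mean(axis=1) - grand) ** 2).sum()   # between-group SS, factor A
ss_b = n_a * ((y.mean(axis=0) - grand) ** 2).sum()   # between-group SS, factor B
ss_resid = ss_total - ss_a - ss_b                    # interaction + noise
frac_a = ss_a / ss_total                             # variance fraction, factor A
frac_b = ss_b / ss_total
```

Ranking factors by their variance fractions is what supports statements like "5 out of 20 parameters contribute more than 90% of the total variance".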
Experimental Study on the Perception Characteristics of Haptic Texture by Multidimensional Scaling.
Wu, Juan; Li, Na; Liu, Wei; Song, Guangming; Zhang, Jun
2015-01-01
Recent works regarding real texture perception demonstrate that physical factors such as stiffness and spatial period play a fundamental role in texture perception. This research used a multidimensional scaling (MDS) analysis to further characterize and quantify the effects of the simulation parameters on haptic texture rendering and perception. In a pilot experiment, 12 haptic texture samples were generated by using a 3-degrees-of-freedom (3-DOF) force-feedback device with varying spatial period, height, and stiffness coefficient parameter values. The subjects' perceptions of the virtual textures indicate that roughness, denseness, flatness and hardness are distinguishing characteristics of texture. In the main experiment, 19 participants rated the dissimilarities of the textures and estimated the magnitudes of their characteristics. The MDS method was used to recover the underlying perceptual space and reveal the significance of the space from the recorded data. The physical parameters and their combinations have significant effects on the perceptual characteristics. A regression model was used to quantitatively analyze the parameters and their effects on the perceptual characteristics. This paper illustrates that haptic texture perception based on force feedback can be modeled in two- or three-dimensional space and provides suggestions on improving perception-based haptic texture rendering.
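Recovering a low-dimensional perceptual space from a dissimilarity matrix, as described above, is the classical MDS computation: double-center the squared dissimilarities and embed along the top eigenvectors. The 12 synthetic points below stand in for the 12 texture samples; the study's actual ratings are not reproduced here.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed an n x n dissimilarity matrix D in `dim` dimensions (Torgerson MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:dim]            # top eigenpairs
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

rng = np.random.default_rng(3)
pts = rng.normal(size=(12, 2))                   # "true" perceptual coordinates
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
emb = classical_mds(D)                           # recovered 2-D configuration
D_emb = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
```

When the dissimilarities are exactly Euclidean distances of a 2-D configuration, the embedding reproduces them; with real rated dissimilarities the low-dimensional fit is approximate, and its quality is part of what MDS reveals.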
GMC COLLISIONS AS TRIGGERS OF STAR FORMATION. I. PARAMETER SPACE EXPLORATION WITH 2D SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Benjamin; Loo, Sven Van; Tan, Jonathan C.
We utilize magnetohydrodynamic (MHD) simulations to develop a numerical model for giant molecular cloud (GMC)–GMC collisions between nearly magnetically critical clouds. The goal is to determine if, and under what circumstances, cloud collisions can cause pre-existing magnetically subcritical clumps to become supercritical and undergo gravitational collapse. We first develop and implement new photodissociation-region-based heating and cooling functions that span the atomic to molecular transition, creating a multiphase ISM and allowing modeling of non-equilibrium temperature structures. Then in 2D and with ideal MHD, we explore a wide parameter space of magnetic field strength, magnetic field geometry, collision velocity, and impact parameter and compare isolated versus colliding clouds. We find factors of ∼2–3 increase in mean clump density from typical collisions, with strong dependence on collision velocity and magnetic field strength, but ultimately limited by flux-freezing in 2D geometries. For geometries enabling flow along magnetic field lines, greater degrees of collapse are seen. We discuss observational diagnostics of cloud collisions, focusing on ^13CO(J = 2–1), ^13CO(J = 3–2), and ^12CO(J = 8–7) integrated intensity maps and spectra, which we synthesize from our simulation outputs. We find that the ratio of J = 8–7 to lower-J emission is a powerful diagnostic probe of GMC collisions.
Intrusion Detection for Defense at the MAC and Routing Layers of Wireless Networks
2007-01-01
DoS Denial of Service; DSR Dynamic Source Routing; IDS Intrusion Detection System; LAR Location-Aided Routing; MAC Media Access Control; MACA Multiple... different mobility parameters. They simulate interaction between three MAC protocols (MACA, 802.11 and CSMA) and three routing protocols (AODV, DSR
Claycamp, H Gregg; Kona, Ravikanth; Fahmy, Raafat; Hoag, Stephen W
2016-04-01
Qualitative risk assessment methods are often used as the first step to determining design space boundaries; however, quantitative assessments of risk with respect to the design space, i.e., calculating the probability of failure for a given severity, are needed to fully characterize design space boundaries. Quantitative risk assessment methods in design and operational spaces are a significant aid to evaluating proposed design space boundaries. The goal of this paper is to demonstrate a relatively simple strategy for design space definition using a simplified Bayesian Monte Carlo simulation. This paper builds on a previous paper that used failure mode and effects analysis (FMEA) qualitative risk assessment and Plackett-Burman design of experiments to identify the critical quality attributes. The results show that the sequential use of qualitative and quantitative risk assessments can focus the design of experiments on a reduced set of critical material and process parameters that determine a robust design space under conditions of limited laboratory experimentation. This approach provides a strategy by which the degree of risk associated with each known parameter can be calculated and allocates resources in a manner that manages risk to an acceptable level.
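The Monte Carlo step behind "calculating the probability of failure" can be sketched directly: sample the critical parameters from assumed distributions, propagate them through a response model, and count the fraction of samples violating a specification. The linear quality model, distributions, and specification limit below are illustrative stand-ins, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def quality(x1, x2):
    """Stand-in response model linking two critical parameters to a quality attribute."""
    return 90.0 + 5.0 * x1 - 3.0 * x2

def prob_failure(mu1, mu2, sd=0.1, spec=88.0, n=100_000):
    """Monte Carlo estimate of P(quality < spec) at an operating point (mu1, mu2)."""
    x1 = rng.normal(mu1, sd, n)
    x2 = rng.normal(mu2, sd, n)
    return float(np.mean(quality(x1, x2) < spec))

p_center = prob_failure(0.0, 0.0)    # nominal operating point: low failure risk
p_edge = prob_failure(-0.4, 0.3)     # near a proposed boundary: high failure risk
```

Mapping `prob_failure` over a grid of operating points is one way to draw a design-space boundary at a chosen acceptable risk level.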
Charging of the Van Allen Probes: Theory and Simulations
NASA Astrophysics Data System (ADS)
Delzanno, G. L.; Meierbachtol, C.; Svyatskiy, D.; Denton, M.
2017-12-01
The electrical charging of spacecraft has been a known problem since the beginning of the space age. Its consequences can vary from moderate (single event upsets) to catastrophic (total loss of the spacecraft) depending on a variety of causes, some of which could be related to the surrounding plasma environment, including emission processes from the spacecraft surface. Because of its complexity and cost, this problem is typically studied using numerical simulations. However, inherent unknowns in both plasma parameters and spacecraft material properties can lead to inaccurate predictions of overall spacecraft charging levels. The goal of this work is to identify and study the driving causes and necessary parameters for particular spacecraft charging events on the Van Allen Probes (VAP) spacecraft. This is achieved by making use of plasma theory, numerical simulations, and on-board data. First, we present a simple theoretical spacecraft charging model, which assumes a spherical spacecraft geometry and is based upon the classical orbital-motion-limited approximation. Some input parameters to the model (such as the warm plasma distribution function) are taken directly from on-board VAP data, while other parameters are either varied parametrically to assess their impact on the spacecraft potential, or constrained through spacecraft charging data and statistical techniques. Second, a fully self-consistent numerical simulation is performed by supplying these parameters to CPIC, a particle-in-cell code specifically designed for studying plasma-material interactions. CPIC simulations remove some of the assumptions of the theoretical model and also capture the influence of the full geometry of the spacecraft. The CPIC numerical simulation results will be presented and compared with on-board VAP data. 
This work will set the foundation for our eventual goal of importing the full plasma environment from the LANL-developed SHIELDS framework into CPIC, in order to more accurately predict spacecraft charging.
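The orbit-motion-limited current-balance idea in the abstract can be sketched for a negatively charged sphere in a Maxwellian plasma: the floating potential is the root of the net current. The normalized currents, temperatures, and mass ratio below are illustrative, not taken from Van Allen Probes data, and emission currents are omitted.

```python
import math

def net_current(phi, Te=1.0, Ti=1.0, mi_me=1836.0):
    """Normalized electron (repelled) plus ion (attracted) OML currents for phi < 0.
    phi in volts, Te and Ti in eV; units are arbitrary, illustrative only."""
    I_e = -math.exp(phi / Te)                          # Boltzmann-repelled electrons
    I_i = (1.0 / math.sqrt(mi_me)) * (1.0 - phi / Ti)  # OML-attracted ions
    return I_e + I_i

def floating_potential(lo=-20.0, hi=0.0):
    """Bisection for the root of net_current; net current decreases with phi."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if net_current(mid) > 0.0:   # still ion-dominated: root is less negative
            lo = mid
        else:                        # electron-dominated: root is more negative
            hi = mid
    return 0.5 * (lo + hi)

phi_f = floating_potential()   # roughly -2.5 Te for a hydrogen plasma
```

This is the kind of closed-form balance the abstract's "simple theoretical spacecraft charging model" generalizes, before handing the full geometry to a particle-in-cell code.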
NASA Astrophysics Data System (ADS)
Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.
2017-12-01
Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at subgrid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using a 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth system processes were selected.
This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.
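The emulate-then-constrain workflow above can be sketched in one dimension: fit a cheap surrogate to a handful of expensive "simulator" runs, use it to screen a large sample of parameter space, and keep only parameter values whose emulated output falls in an observational range. The quadratic stand-in simulator, polynomial emulator, and tolerance band below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulator(p):
    """Stand-in for an expensive regional climate run over one parameter."""
    return 2.0 + 1.5 * p - 0.8 * p ** 2

train_p = np.linspace(0.0, 1.0, 8)            # a small ensemble of runs
train_y = simulator(train_p)
coeffs = np.polyfit(train_p, train_y, deg=2)  # the (here, polynomial) emulator

candidates = rng.uniform(0.0, 1.0, 10_000)    # dense sample of parameter space
emulated = np.polyval(coeffs, candidates)     # cheap predictions everywhere
obs_lo, obs_hi = 2.5, 2.7                     # "observational constraint"
plausible = candidates[(emulated > obs_lo) & (emulated < obs_hi)]
```

The retained `plausible` set plays the role of the abstract's ensemble of parameter sets that survive the observational constraints; in practice a Gaussian-process emulator with uncertainty estimates is the more common choice than a plain polynomial fit.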
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
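The frequency-domain equation-error idea above can be sketched for a scalar system: Fourier transforms of the state and input are accumulated recursively as samples arrive, and x_dot = a*x + b*u is fit by linear least squares in its frequency-domain form. The system values (a = -2, b = 1), the input signal, and the analysis frequencies below are illustrative stand-ins, not flight data.

```python
import numpy as np

dt = 0.005
t = np.arange(0.0, 10.0, dt)
u = np.sin(0.5 * t) + np.sin(1.3 * t)              # stand-in "piloted input"
a_true, b_true = -2.0, 1.0

x = np.zeros_like(t)                               # simulate x_dot = a*x + b*u (Euler)
for k in range(1, t.size):
    x[k] = x[k - 1] + dt * (a_true * x[k - 1] + b_true * u[k - 1])

w = np.array([0.3, 0.5, 0.8, 1.3, 2.0])            # analysis frequencies, rad/s
X = np.zeros(w.size, dtype=complex)
U = np.zeros(w.size, dtype=complex)
for k in range(t.size):                            # recursive Fourier transform:
    phase = np.exp(-1j * w * t[k])                 # each new sample updates X, U in place
    X += x[k] * phase * dt
    U += u[k] * phase * dt

# Equation error in the frequency domain: FT(x_dot) = a*X + b*U, where the
# finite record gives FT(x_dot) = j*w*X + x(T)*exp(-j*w*T) - x(0) (end-point term).
lhs = 1j * w * X + x[-1] * np.exp(-1j * w * t[-1]) - x[0]
A = np.column_stack([X, U])
theta, *_ = np.linalg.lstsq(A, lhs, rcond=None)
a_hat, b_hat = theta.real                          # parameter estimates
```

Because the transforms update one sample at a time at a handful of frequencies, the per-sample cost is tiny, which is the low-computation property the abstract highlights for real-time use.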
CME Arrival-time Validation of Real-time WSA-ENLIL+Cone Simulations at the CCMC/SWRC
NASA Astrophysics Data System (ADS)
Wold, A. M.; Mays, M. L.; Taktakishvili, A.; Jian, L.; Odstrcil, D.; MacNeice, P. J.
2016-12-01
The Wang-Sheeley-Arge (WSA)-ENLIL+Cone model is used extensively in space weather operations worldwide to model CME propagation, as such it is important to assess its performance. We present validation results of the WSA-ENLIL+Cone model installed at the Community Coordinated Modeling Center (CCMC) and executed in real-time by the CCMC/Space Weather Research Center (SWRC). The SWRC is a CCMC sub-team that provides space weather services to NASA robotic mission operators and science campaigns, and also prototypes new forecasting models and techniques. CCMC/SWRC uses the WSA-ENLIL+Cone model to predict CME arrivals at NASA missions throughout the inner heliosphere. In this work we compare model predicted CME arrival-times to in-situ ICME shock observations near Earth (ACE, Wind), STEREO-A and B for simulations completed between March 2010 and July 2016 (over 1500 runs). We report hit, miss, false alarm, and correct rejection statistics for all three spacecraft. For hits we compute the bias, RMSE, and average absolute CME arrival time error, and the dependence of these errors on CME input parameters. We compare the predicted geomagnetic storm strength (Kp index) to the CME arrival time error for Earth-directed CMEs. The predicted Kp index is computed using the WSA-ENLIL+Cone plasma parameters at Earth with a modified Newell et al. (2007) coupling function. We also explore the impact of the multi-spacecraft observations on the CME parameters used to initialize the model by comparing model validation results before and after the STEREO-B communication loss (since September 2014) and STEREO-A side-lobe operations (August 2014-December 2015). This model validation exercise has significance for future space weather mission planning such as L5 missions.
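The contingency-table verification described above (hits, misses, false alarms, correct rejections, plus bias and RMSE of the arrival-time error over hits) can be sketched on a toy event list; the predicted/observed flags and hour errors below are made up for illustration.

```python
import numpy as np

def skill(pred, obs, errors_hr):
    """Contingency-table counts for predicted vs. observed arrival events, plus
    bias, RMSE, and mean absolute error of predicted-minus-observed arrival
    times (hours), computed over the hits only."""
    hits = sum(p and o for p, o in zip(pred, obs))
    misses = sum((not p) and o for p, o in zip(pred, obs))
    false_alarms = sum(p and (not o) for p, o in zip(pred, obs))
    correct_rej = sum((not p) and (not o) for p, o in zip(pred, obs))
    e = np.asarray(errors_hr, dtype=float)
    return {"hits": hits, "misses": misses, "fa": false_alarms, "cr": correct_rej,
            "bias": float(e.mean()),
            "rmse": float(np.sqrt((e ** 2).mean())),
            "mae": float(np.abs(e).mean())}

# Toy example: six forecast/observation pairs, three of them hits.
stats = skill(pred=[1, 1, 0, 1, 0, 1], obs=[1, 1, 1, 0, 0, 1],
              errors_hr=[-5.0, 3.0, 8.0])
```

A positive bias here means predicted arrivals run late on average; RMSE and MAE summarize the spread, which is what the validation compares across spacecraft and CME input parameters.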
Irreducible Tests for Space Mission Sequencing Software
NASA Technical Reports Server (NTRS)
Ferguson, Lisa
2012-01-01
As missions extend further into space, the modeling and simulation of their every action and instruction becomes critical. The greater the distance between Earth and the spacecraft, the smaller the window for communication becomes. Therefore, through modeling and simulating the planned operations, the most efficient sequence of commands can be sent to the spacecraft. The Space Mission Sequencing Software is being developed as the next generation of sequencing software to ensure the most efficient communication to interplanetary and deep space mission spacecraft. Aside from efficiency, the software also checks to make sure that communication during a specified time is even possible, meaning that there is no planet or moon preventing reception of a signal from Earth and that two opposing commands are not being given simultaneously. In this way, the software not only models the proposed instructions to the spacecraft, but also validates the commands as well. To ensure that all spacecraft communications are sequenced properly, a timeline is used to structure the data. The created timelines are immutable: once data is assigned to a timeline, it shall never be deleted nor renamed. This is to prevent the need for storing and filing the timelines for use by other programs. Several types of timelines can be created to accommodate different types of communications (activities, measurements, commands, states, events). Each of these timeline types requires specific parameters and all have options for additional parameters if needed. With so many combinations of parameters available, the robustness and stability of the software is a necessity. Therefore a baseline must be established to ensure the full functionality of the software, and it is here that the irreducible tests come into use.
Parametric Simulations of the Great Dark Spots of Neptune
NASA Astrophysics Data System (ADS)
Deng, Xiaolong; Le Beau, R.
2006-09-01
Observations by Voyager II and the Hubble Space Telescope of the Great Dark Spots (GDS) of Neptune suggest that large vortices with lifespans of years are not uncommon occurrences in the atmosphere of Neptune. The variability of these features over time, in particular the complex motions of GDS-89, make them challenging candidates to simulate in atmospheric models. Previously, using the Explicit Planetary Isentropic-Coordinate (EPIC) General Circulation Model, LeBeau and Dowling (1998) simulated GDS-like vortex features. Qualitatively, the drift, oscillation, and tail-like features of GDS-89 were recreated, although precise numerical matches were only achieved for the meridional drift rate. In 2001, Stratman et al. applied EPIC to simulate the formation of bright companion clouds to the Great Dark Spots. In 2006, Dowling et al. presented a new version of EPIC, which includes a hybrid vertical coordinate, cloud physics, advanced chemistry, and new turbulence models. With the new version of EPIC, more observation results, and more powerful computers, it is time to revisit CFD simulations of Neptune's atmosphere and do more detailed work on GDS-like vortices. In this presentation, we apply the new version of EPIC to simulate GDS-89. We test the influences of different parameters in the EPIC model: potential vorticity gradient, wind profile, initial latitude, vortex shape, and vertical structure. The observed motions, especially the latitudinal drift and oscillations in orientation angle and aspect ratio, are used as diagnostics of these unobserved atmospheric conditions. Increased computing power allows for more refined and longer simulations and greater coverage of the parameter space than previous efforts. Improved quantitative results have been achieved, including vortices with near eight-day oscillations and comparable variations in shape to GDS-89. This research has been supported by Kentucky NASA EPSCoR.
Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu
2015-02-12
The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to limited signal-to-noise ratio (SNR) of PET measurements and high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single-tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least square (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimations.
Effects of waveform model systematics on the interpretation of GW150914
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. 
M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. 
R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. 
J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. 
J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. 
A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. 
J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.
2017-05-01
Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein's equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.
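The systematic-versus-statistical comparison described above can be illustrated with a toy least-squares experiment: a template that omits a real term in the signal recovers a biased parameter even when the statistical (noise-driven) error is tiny. Everything below (model, numbers) is invented for illustration and has no connection to the actual GW150914 analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "signal": a linear term (the parameter of interest) plus a
# quadratic term that the simplified template neglects, playing the
# role of an omitted harmonic mode or eccentricity contribution.
t = np.linspace(0.0, 1.0, 200)
a_true, b_true, sigma = 1.0, 0.5, 0.01
y = a_true * t + b_true * t**2 + rng.normal(0.0, sigma, t.size)

# Full template: y = a*t + b*t^2 (columns of the design matrix).
A_full = np.column_stack([t, t**2])
a_full = np.linalg.lstsq(A_full, y, rcond=None)[0][0]

# Mis-specified template: y = a*t only.
A_bad = t[:, None]
a_bad = np.linalg.lstsq(A_bad, y, rcond=None)[0][0]

# Statistical error on a under the mis-specified template.
stat_err = sigma / np.sqrt(np.sum(t**2))

print(f"full template: a = {a_full:+.3f} (true {a_true})")
print(f"bad template:  a = {a_bad:+.3f}, stat. error only {stat_err:.4f}")
```

The mis-specified fit lands far outside its own statistical error bar, which is the signature of waveform systematics the study looks for.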
Investigation of Shapes and Spins of Reaccumulated Remnants from Asteroid Disruption Simulations
NASA Astrophysics Data System (ADS)
Michel, Patrick; Ballouz, R.; Richardson, D. C.; Schwartz, S. R.
2012-10-01
Evidence that asteroids larger than a few hundred meters in diameter can be gravitational aggregates of smaller, cohesive pieces comes, for instance, from images returned by the Hayabusa spacecraft of asteroid 25143 Itokawa (Fujiwara et al., 2006, Science 312, 1330). These images show an irregular 500-meter-long body with a boulder-strewn surface, as might be expected from reaccumulation following catastrophic disruption of a larger parent asteroid (Michel et al., 2001, Science 294, 1696). However, numerical simulations of this process have to date focused essentially on the size/mass and velocity distributions of reaccumulated fragments, to match asteroid families. Reaccumulation was simplified by merging the objects into growing spheres. However, understanding the shapes, spins and surface properties of gravitational aggregates formed by reaccumulation is required to interpret information from ground-based observations and space missions. For example, do boulders on Itokawa originate from reaccumulation of material ejected from a catastrophic impact or from other processes (such as the Brazil-nut effect)? How does reaccumulation affect the observed shapes? A model was developed (Richardson et al., 2009, Planet. Space Sci. 57, 183) to preserve shape and spin information of reaccumulated bodies in simulations of asteroid disruption, by allowing fragments to stick on contact (and optionally bounce or fragment further, depending on user-selectable parameters). Such treatments are computationally expensive, and we could only recently start to explore the parameter space. Preliminary results will be presented, showing that some observed surface and shape features may be explained by how fragments produced by a disruption reaccumulate. Simulations of rubble pile collisions without particle cohesion, and an investigation of the influence of initial target rotation on the outcome, will also be shown. We acknowledge the National Science Foundation (AST1009579) and NASA (NNX08AM39G).
Theoretical accuracy in cosmological growth estimation
NASA Astrophysics Data System (ADS)
Bose, Benjamin; Koyama, Kazuya; Hellwing, Wojciech A.; Zhao, Gong-Bo; Winther, Hans A.
2017-07-01
We elucidate the importance of the consistent treatment of gravity-model specific nonlinearities when estimating the growth of cosmological structures from redshift space distortions (RSD). Within the context of standard perturbation theory (SPT), we compare the predictions of two theoretical templates with redshift space data from COLA (comoving Lagrangian acceleration) simulations in the normal branch of DGP gravity (nDGP) and general relativity (GR). Using COLA for these comparisons is validated using a suite of full N-body simulations for the same theories. The two theoretical templates correspond to the standard general relativistic perturbation equations and those same equations modeled within nDGP. Gravitational clustering nonlinear effects are accounted for by modeling the power spectrum up to one-loop order, and redshift space clustering anisotropy is modeled using the Taruya, Nishimichi and Saito (TNS) RSD model. Using this approach, we attempt to recover the simulation's fiducial logarithmic growth parameter f. By assigning the simulation data errors representing an idealized survey with a volume of 10 Gpc³/h³, we find the GR template is unable to recover the fiducial f to within 1σ at z = 1 when we match the data up to k_max = 0.195 h/Mpc. On the other hand, the DGP template recovers the fiducial value within 1σ. Further, we conduct the same analysis for sets of mock data generated for generalized models of modified gravity using SPT, where again we analyze the GR template's ability to recover the fiducial value. We find that for models with enhanced gravitational nonlinearity, the theoretical bias of the GR template becomes significant for stage IV surveys. Thus, we show that for future large-data-volume galaxy surveys, the self-consistent modeling of non-GR gravity scenarios will be crucial in constraining theory parameters.
Qian, Yun; Yan, Huiping; Hou, Zhangshuan; ...
2015-04-10
We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that, of the 22 parameters perturbed in the cloud ensemble, the six with the greatest influence on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approach or on the concomitant parameters selected. Generally, the generalized linear model (GLM) is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in the mid-latitude continental regions, but is very small in tropical continental regions.
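The Latin hypercube sampling used to build such perturbed-parameter ensembles can be sketched in a few lines. This is a generic from-scratch sampler, not the CAM5 experimental design; the three parameter ranges below are hypothetical placeholders:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin hypercube sample: one point per equal-probability stratum
    along every dimension, so even a modest ensemble covers each
    parameter's range evenly (unlike plain Monte Carlo sampling)."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)   # shape (dim, 2)
    dim = bounds.shape[0]
    # Jittered stratum positions in [0, 1), shuffled independently per dim.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, dim))) / n_samples
    for j in range(dim):
        rng.shuffle(u[:, j])
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# E.g. a 100-member ensemble over three hypothetical cloud parameters.
bounds = [(0.1, 1.0), (1e-4, 1e-2), (500.0, 5000.0)]
ensemble = latin_hypercube(100, bounds, rng=0)
print(ensemble.shape)  # (100, 3)
```

Each row is one model configuration to run; the stratification guarantees every parameter's range is probed at every probability level.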
Emulation for probabilistic weather forecasting
NASA Astrophysics Data System (ADS)
Cornford, Dan; Barillec, Remi
2010-05-01
Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output, and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian, framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low-order models such as the 40-variable Lorenz '96 system.
We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the 'ensemble runs', which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space; rather, it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods, sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly describe how similar methods might be used to learn the model error within a framework that incorporates a data-assimilation-like aspect, using emulators to learn complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
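A minimal Gaussian-process emulator of the kind described above can be sketched as follows, with fixed kernel hyperparameters (in practice these would be inferred by maximum likelihood or Bayesian methods, as the abstract notes). The "simulator" here is a toy stand-in function, not a weather model:

```python
import numpy as np

def rbf(X1, X2, length=1.0, amp=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

def gp_emulator(X_train, y_train, length=1.0, amp=1.0, jitter=1e-8):
    """Fit a zero-mean GP to simulator runs; return a predictor giving
    posterior mean and variance at new inputs."""
    K = rbf(X_train, X_train, length, amp) + jitter * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    def predict(X_new):
        Ks = rbf(X_new, X_train, length, amp)
        mean = Ks @ alpha                       # posterior mean
        v = np.linalg.solve(L, Ks.T)
        var = amp - np.sum(v**2, axis=0)        # posterior variance
        return mean, var
    return predict

# "Expensive simulator" stand-in, run at a handful of design points.
simulator = np.sin
X = np.linspace(0.0, 6.0, 8)
predict = gp_emulator(X, simulator(X))

mean, var = predict(np.array([3.0, 10.0]))
# Between design points the emulator tracks the simulator and is
# confident; far outside the design (x = 10) the variance grows.
```

Once trained, calls to `predict` replace simulator runs in Monte Carlo uncertainty or sensitivity analyses at negligible cost, exactly as the abstract describes.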
An advanced analysis method of initial orbit determination with too short arc data
NASA Astrophysics Data System (ADS)
Li, Binzhe; Fang, Li
2018-02-01
This paper studies initial orbit determination (IOD) based on space-based angle measurements. Commonly, these space-based observations have short durations, so classical initial orbit determination algorithms, such as the Laplace and Gauss methods, give poor results. In this paper, an advanced analysis method of initial orbit determination is developed for space-based observations. The admissible region and triangulation are introduced in the method. A genetic algorithm is also used to enforce constraints on the parameters. Simulation results show that the algorithm can successfully complete the initial orbit determination.
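The role of a genetic algorithm searching a bounded (admissible) parameter region can be sketched with a toy real-coded GA. The cost function below is a generic stand-in, not an actual orbit-fit residual, and all GA settings are illustrative:

```python
import numpy as np

def genetic_minimize(cost, bounds, pop=40, gens=150, sigma=0.1, rng=None):
    """Tiny real-coded genetic algorithm: tournament selection, blend
    crossover, Gaussian mutation, with every offspring clipped back into
    the admissible box [lo, hi] per parameter. Elitism keeps the best."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    f = np.array([cost(x) for x in X])
    for _ in range(gens):
        # Tournament selection of parent indices.
        a, b = rng.integers(pop, size=(2, pop))
        parents = np.where((f[a] < f[b])[:, None], X[a], X[b])
        # Blend crossover with a shuffled partner, then Gaussian mutation.
        w = rng.random((pop, 1))
        children = w * parents + (1 - w) * parents[rng.permutation(pop)]
        children += rng.normal(0.0, sigma, children.shape) * (hi - lo)
        children = np.clip(children, lo, hi)      # enforce admissibility
        fc = np.array([cost(x) for x in children])
        # Elitism: carry the best individual so far into the new population.
        i_best, j_worst = np.argmin(f), np.argmax(fc)
        if f[i_best] < fc[j_worst]:
            children[j_worst], fc[j_worst] = X[i_best], f[i_best]
        X, f = children, fc
    i = np.argmin(f)
    return X[i], f[i]

# Toy stand-in for a residual over two bounded orbit parameters.
best_x, best_f = genetic_minimize(lambda x: np.sum(x**2),
                                  [(-5, 5), (-5, 5)], rng=1)
```

The clipping step is what ties the search to the admissible region: no candidate ever leaves the physically allowed box.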
Evaluation of soft rubber goods [for use as O-rings and seals on space shuttle]
NASA Technical Reports Server (NTRS)
Merz, P. L.
1974-01-01
The performance of rubber goods suitable for use as O-rings, seals, gaskets, bladders and diaphragms under conditions simulating those of the space shuttle were studied. High reliability throughout the 100 flight missions planned for the space shuttle was considered of overriding importance. Accordingly, in addition to a rank ordering of the selected candidate materials based on prolonged fluid compatibility and sealability behavior, basic rheological parameters (such as cyclic hysteresis, stress relaxation, indicated modulus, etc.) were determined to develop methods capable of predicting the cumulative effect of these multiple reuse cycles.
On the breakdown modes and parameter space of Ohmic Tokamak startup
NASA Astrophysics Data System (ADS)
Peng, Yanli; Jiang, Wei; Zhang, Ya; Hu, Xiwei; Zhuang, Ge; Innocenti, Maria; Lapenta, Giovanni
2017-10-01
Tokamak plasma has to be hot. The process of turning the initial dilute neutral hydrogen gas at room temperature into fully ionized plasma is called tokamak startup. Even after over 40 years of research, the parameter ranges for successful startup are still determined not by numerical simulations but by trial and error. In recent years, however, the problem has drawn much attention due to one of the challenges faced by ITER: the maximum electric field for startup cannot exceed 0.3 V/m, which narrows the parameter range for successful startup. Moreover, the underlying physical mechanism is far from understood, either theoretically or numerically. In this work, we have simulated the plasma breakdown phase driven by pure Ohmic heating using a particle-in-cell/Monte Carlo code, with the aim of giving a predictive parameter range for most tokamaks, even for ITER. We find three outcomes of the discharge, depending on the initial parameters: no breakdown, breakdown, and runaway. Moreover, the breakdown delay and volt-second consumption under different initial conditions are evaluated. In addition, we have simulated breakdown on ITER and confirmed that when the electric field is 0.3 V/m, the optimal pre-filling pressure is 0.001 Pa, which is in good agreement with ITER's design.
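The existence of a breakdown window and an optimal pre-fill pressure can be illustrated with the classical Townsend avalanche criterion, a far cruder description than the particle-in-cell/Monte Carlo model used in the work above. The coefficients A and B, the connection length, and the growth threshold below are illustrative placeholders, not fitted hydrogen data:

```python
import numpy as np

# Townsend first-ionization coefficient alpha = A*p*exp(-B*p/E):
# electrons multiply as exp(alpha * L) along the connection length L.
# A and B are gas-dependent constants; these values are illustrative.
A = 3.8      # ionizations / (m * Pa)
B = 93.0     # V / (m * Pa)

def alpha(p, E):
    return A * p * np.exp(-B * p / E)

E = 0.3                          # toroidal electric field, V/m (ITER limit)
L = 10000.0                      # effective connection length, m (illustrative)
p = np.logspace(-4, -1, 400)     # pre-fill pressure scan, Pa

gain = alpha(p, E) * L           # log of avalanche multiplication
breakdown = gain > np.log(1e8)   # crude criterion: 1e8-fold growth

# alpha(p) peaks at p_opt = E/B, so breakdown occurs only in an
# interval of pressures: too little gas or too much gas both fail.
p_opt = p[np.argmax(gain)]
```

Even this crude model reproduces the qualitative picture in the abstract: a bounded breakdown window in pressure with an interior optimum that shifts with the applied field.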
NASA Astrophysics Data System (ADS)
Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang
2016-06-01
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
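The Elementary Effect screening measure mentioned above (Morris's mu*) can be sketched with a one-at-a-time design over normalized inputs. The "metamodel" here is a toy stand-in with one dominant, one weak, and one inert input; all constants are invented:

```python
import numpy as np

def elementary_effects(f, dim, n_traj=30, delta=0.1, rng=None):
    """Morris screening: mean absolute elementary effect (mu*) per input.
    Each trajectory perturbs one normalized input at a time by delta and
    records the scaled response change; large mu* flags influential inputs."""
    rng = np.random.default_rng(rng)
    mu_star = np.zeros(dim)
    for _ in range(n_traj):
        x = rng.random(dim) * (1 - delta)   # keep x + delta inside [0, 1]
        fx = f(x)
        for j in range(dim):
            xp = x.copy()
            xp[j] += delta
            mu_star[j] += abs(f(xp) - fx) / delta
    return mu_star / n_traj

# Metamodel stand-in: input 0 dominates, input 1 is weak, input 2 is inert.
def metamodel(x):
    return 2.0 * x[0] + 0.5 * x[1] ** 2 + 0.0 * x[2]

mu = elementary_effects(metamodel, dim=3, rng=7)
# mu[0] ≈ 2, mu[1] is small, mu[2] = 0: input 2 can be screened out.
```

Inputs with near-zero mu* are candidates for removal, which is exactly how screening reduces model complexity before the more expensive variance decomposition.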
NASA Astrophysics Data System (ADS)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...
2018-02-09
Design and landing dynamic analysis of reusable landing leg for a near-space manned capsule
NASA Astrophysics Data System (ADS)
Yue, Shuai; Nie, Hong; Zhang, Ming; Wei, Xiaohui; Gan, Shengyong
2018-06-01
To improve the landing performance of a near-space manned capsule under various landing conditions, a novel landing system is designed that employs double-chamber and single-chamber dampers in the primary and auxiliary struts, respectively. A dynamic model of the landing system is established, and the damper parameters are determined by employing the design method. A single-leg drop test with different initial pitch angles is then conducted to compare against and validate the simulation model. Based on the validated simulation model, seven critical landing conditions regarding nine crucial landing responses are found by combining the radial basis function (RBF) surrogate model and the adaptive simulated annealing (ASA) optimization method. Subsequently, the adaptability of the landing system under critical landing conditions is analyzed. The results show that the simulation results effectively match the test results, which validates the accuracy of the dynamic model. In addition, all of the crucial responses under their corresponding critical landing conditions satisfy the design specifications, demonstrating the feasibility of the landing system.
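The surrogate-plus-annealing search for critical conditions can be sketched in a few lines. Here a Gaussian RBF surrogate is fitted to samples of an invented 2-D "landing response" (standing in for the expensive drop simulation), and simulated annealing then maximises the cheap surrogate to locate a worst-case condition; all functions and parameter ranges are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "landing response" (e.g. peak strut force) over a 2-D
# condition space (vertical velocity, pitch angle), both scaled to [-1, 1].
def response(x):
    return np.sin(3 * x[0]) * np.cos(2 * x[1]) + 0.5 * x[0]

# --- Fit a Gaussian RBF surrogate from a handful of samples ----------
X = rng.uniform(-1, 1, size=(40, 2))          # sampled landing conditions
y = np.array([response(x) for x in X])
eps = 2.0                                     # RBF shape parameter
Phi = np.exp(-eps * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)

def surrogate(x):
    return np.exp(-eps * ((X - x) ** 2).sum(-1)) @ w

# --- Simulated annealing on the cheap surrogate ----------------------
# Maximises the response to find a *critical* (worst-case) condition;
# the proposal step shrinks with the temperature.
x_cur = rng.uniform(-1, 1, 2)
f_cur = surrogate(x_cur)
x_best, f_best = x_cur, f_cur
T = 1.0
for step in range(3000):
    x_new = np.clip(x_cur + T * rng.normal(0, 0.3, 2), -1, 1)
    f_new = surrogate(x_new)
    if f_new > f_cur or rng.random() < np.exp((f_new - f_cur) / T):
        x_cur, f_cur = x_new, f_new
        if f_cur > f_best:
            x_best, f_best = x_cur, f_cur
    T *= 0.998                                # geometric cooling

print("critical condition ~", x_best, "surrogate response ~", f_best)
```

Because every annealing step costs only one surrogate evaluation, many such searches (one per crucial response) remain affordable.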
NASA Astrophysics Data System (ADS)
Vanderka, Ales; Hajek, Lukas; Bednarek, Lukas; Latal, Jan; Vitasek, Jan; Hejduk, Stanislav; Vasinek, Vladimir
2016-09-01
In this article the authors deal with the use of Wavelength Division Multiplexing (WDM) for Free Space Optical (FSO) communications. FSO communication suffers from atmospheric effects (attenuation, fluctuation of the received signal power, and turbulence), and the WDM channel additionally suffers from interchannel crosstalk. Only one direction of transmission is considered. The behavior of the FSO link was tested for one and for eight channels. The modulation schemes considered are OOK (On-Off Keying), QAM (Quadrature Amplitude Modulation), and Subcarrier Intensity Modulation (SIM) based on BPSK (Binary Phase Shift Keying). The OptiSystem 14 simulation software was used for testing. Some simulation parameters were set according to a real FSO link, such as a data rate of 1.25 Gbps and a link range of 1.4 km. The simulated FSO link used a wavelength of 1550 nm with 0.8 nm channel spacing. The influence of crosstalk and modulation format on the BER is obtained as a function of the amount of turbulence in the propagation medium.
NASA Technical Reports Server (NTRS)
Lanfranco, M. J.; Sparks, V. W.; Kavanaugh, A. T.
1973-01-01
An experimental investigation was conducted in a 9- by 7-foot supersonic wind tunnel to determine the effect of plume-induced flow separation and aspiration effects due to operation of both the orbiter and the solid rocket motors on a 0.019-scale model of the launch configuration of the space shuttle vehicle. Longitudinal and lateral-directional stability data were obtained at Mach numbers of 1.6, 2.0, and 2.2 with and without the engines operating. The plumes exiting from the engines were simulated by a cold gas jet supplied by an auxiliary 200 atmosphere air supply system, and by solid body plume simulators. Comparisons of the aerodynamic effects produced by these two simulation procedures are presented. The data indicate that the parameters most significantly affected by the jet plumes are the pitching moment, the elevon control effectiveness, the axial force, and the orbiter wing loads.
Evaluating the material parameters of the human cornea in a numerical model.
Sródka, Wiesław
2011-01-01
The values of the biomechanical human eyeball model parameters reported in the literature are still being disputed. The primary motivation behind this work was to predict the material parameters of the cornea through numerical simulations and to assess the applicability of the ubiquitously accepted law of applanation tonometry, the Imbert-Fick equation. Numerical simulations of a few states of eyeball loading were run to determine the stroma material parameters. In the computations, the elasticity moduli of the material were related to the stress sign, instead of the orientation in space. The stroma elasticity secant modulus E was predicted to be close to 0.3 MPa. The numerically simulated applanation tonometer readings for the cornea with the calibration dimensions were found to be lower by 11 mmHg than the IOP of 48 mmHg. This discrepancy is the result of a strictly mechanical phenomenon taking place in the tensioned and simultaneously flattened corneal shell and is not related to the tonometer measuring accuracy. The observed deviation has not been amenable to any GAT corrections, contradicting the Imbert-Fick law. This means a new approach to the calculation of corrections for GAT readings is needed.
PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Shiyuan; Huang, Jianhua Z.; Long, James
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal in period, we implement a hybrid method that applies the quasi-Newton algorithm to the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period–luminosity relations.
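The dense-grid frequency search can be illustrated with a stripped-down sketch: at each trial frequency a sinusoidal basis is fitted by linear least squares and the best-fitting frequency wins. The toy light curve is invented, and the Gaussian-process term for the stochastic deviations is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparsely, irregularly sampled periodic "light curve" with a known period.
true_period = 333.0                     # days (hypothetical Mira-like period)
t = np.sort(rng.uniform(0, 4000, 60))
y = 1.5 * np.sin(2 * np.pi * t / true_period + 0.7) + 0.1 * rng.normal(size=t.size)

# Dense grid over frequency: fit sin/cos/constant by least squares at
# each trial frequency and keep the residual sum of squares.
freqs = np.linspace(1 / 1000, 1 / 100, 20000)
best_rss, best_f = np.inf, None
for f in freqs:
    A = np.column_stack([np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = ((y - A @ coef) ** 2).sum()
    if rss < best_rss:
        best_rss, best_f = rss, f

print("recovered period ~", 1 / best_f, "days")
```

The grid handles the multimodality that defeats purely local optimizers; the paper's quasi-Newton refinement of the Gaussian-process parameters would wrap around each grid evaluation.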
NASA Astrophysics Data System (ADS)
Bai, Xiaoyan; Chen, Chen; Li, Hong; Liu, Wandong; Chen, Wei
2017-10-01
Scaling relations of the main parameters of a needle-like electron beam plasma (EBP) to the initial beam energy, beam current, and discharge pressure are presented. The relations characterize the main features of the plasma in the three-parameter space and can greatly simplify plasma design with electron beams. First, starting from the self-similar behavior of electron beam propagation, the energy and charge depositions in beam propagation were expressed analytically as functions of the three parameters. Second, according to the complete coupled theoretical model of an EBP and appropriate assumptions, independent equations controlling the density and space charges were derived, and analytical expressions for the density and charges as functions of the energy and charge depositions were obtained. Finally, by combining the expressions derived in the above two steps, scaling relations of the density and potential to the three parameters were constructed. Numerical simulations were used to test part of the scaling relations.
On the theory of multi-pulse vibro-impact mechanisms
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Metrikin, V. S.; Nikiforova, I. V.; Ipatov, A. A.
2017-11-01
This paper presents a mathematical model of a new multi-striker eccentric shock-vibration mechanism with a crank-sliding bar vibration exciter and an arbitrary number of pistons. Analytical solutions for the parameters of the model are obtained to determine the regions of existence of stable periodic motions. Under the assumption of an absolutely inelastic collision of the piston, we derive equations that single out a bifurcational unattainable boundary in the parameter space, which has a countable number of arbitrarily complex stable periodic motions in its neighbourhood. We present results of numerical simulations, which illustrate the existence of periodic and stochastic motions. The methods proposed in this paper for investigating the dynamical characteristics of the new crank-type conrod mechanisms allow practitioners to indicate regions in the parameter space, which allow tuning these mechanisms into the most efficient periodic mode of operation, and to effectively analyze the main changes in their operational regimes when the system parameters are changed.
Precision Attitude Determination for an Infrared Space Telescope
NASA Technical Reports Server (NTRS)
Benford, Dominic J.
2008-01-01
We have developed performance simulations for a precision attitude determination system using a focal plane star tracker on an infrared space telescope. The telescope is being designed for the Destiny mission to measure cosmologically distant supernovae as one of the candidate implementations for the Joint Dark Energy Mission. Repeat observations of the supernovae require attitude control at the level of 0.010 arcseconds (0.05 microradians) during integrations and at repeat intervals up to and over a year. While absolute accuracy is not required, the repoint precision is challenging. We have simulated the performance of a focal plane star tracker in a multidimensional parameter space, including pixel size, read noise, and readout rate. Systematic errors such as proper motion, velocity aberration, and parallax can be measured and compensated out. Our prediction is that a relative attitude determination accuracy of 0.001 to 0.002 arcseconds (0.005 to 0.010 microradians) will be achievable.
Multi-objective optimisation and decision-making of space station logistics strategies
NASA Astrophysics Data System (ADS)
Zhu, Yue-he; Luo, Ya-zhong
2016-10-01
Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using the traditional method. Thus, a hybrid approach that combines the multi-objective evolutionary algorithm, physical programming, and differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
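The differential evolution step of the hybrid approach can be sketched on a scalarised two-objective toy problem. The objective functions and preference weights below are invented stand-ins (the paper's actual scalarisation is physical programming over four objectives):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical logistics objectives, scalarised by preference weights.
def objectives(x):
    f1 = (x[0] - 1) ** 2 + x[1] ** 2          # e.g. "total launch mass"
    f2 = x[0] ** 2 + (x[1] - 1) ** 2          # e.g. "number of flights"
    return f1, f2

def scalarised(x, w=(0.7, 0.3)):
    f1, f2 = objectives(x)
    return w[0] * f1 + w[1] * f2

# --- Minimal differential evolution (DE/rand/1/bin) ------------------
NP, D, F, CR = 20, 2, 0.8, 0.9
pop = rng.uniform(-2, 2, size=(NP, D))
fit = np.array([scalarised(x) for x in pop])
for gen in range(200):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3,
                                 replace=False)]
        mutant = a + F * (b - c)               # difference-vector mutation
        cross = rng.random(D) < CR             # binomial crossover
        trial = np.where(cross, mutant, pop[i])
        f_trial = scalarised(trial)
        if f_trial <= fit[i]:                  # greedy selection
            pop[i], fit[i] = trial, f_trial

best = pop[np.argmin(fit)]
print("compromise solution ~", best)
```

For these weights the scalarised optimum is analytically (0.7, 0.3), so convergence is easy to verify; different weight choices trace out different decision-maker preferences, mirroring the five preference cases simulated in the paper.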
NASA Technical Reports Server (NTRS)
Allen, R. W.; Jex, H. R.
1972-01-01
In order to test various components of a regenerative life support system and to obtain data on the physiological and psychological effects of long-duration exposure to confinement in a space station atmosphere, four carefully screened young men were sealed in a space station simulator for 90 days. A tracking test battery was administered during the experiment. The battery included a clinical test (critical instability task) related to the subject's dynamic time delay, and a conventional steady tracking task, during which dynamic response (describing functions) and performance measures were obtained. Good correlation was noted between the clinical critical instability scores and more detailed tracking parameters such as dynamic time delay and gain-crossover frequency. The comprehensive data base on human operator tracking behavior obtained in this study demonstrates that sophisticated visual-motor response properties can be efficiently and reliably measured over extended periods of time.
Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui
2017-02-06
A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Proof-of-concept experiments are finally implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
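The non-iterative clustering step can be illustrated with a density-peak sketch in the style of Rodriguez and Laio: each point gets a local density rho and a distance delta to the nearest denser point, and cluster centres are the points maximising rho * delta. The 2-D four-cluster data below is a synthetic stand-in for Stokes-space constellation samples:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic samples: four noisy clusters standing in for the
# Stokes-space projections of a PM-QPSK signal.
centers = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], float)
pts = np.concatenate([c + 0.1 * rng.normal(size=(100, 2)) for c in centers])

# Density-peak clustering (non-iterative): two parameters per point.
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
rho = (d < 0.3).sum(axis=1)                  # cutoff-kernel local density
order = np.arange(len(pts))
delta = np.empty(len(pts))
for i in range(len(pts)):
    # ties in rho are broken by index so each cluster yields one peak
    higher = (rho > rho[i]) | ((rho == rho[i]) & (order < i))
    delta[i] = d[i, higher].min() if higher.any() else d[i].max()

n_clusters = 4                               # assumed known for the sketch
peaks = np.argsort(rho * delta)[-n_clusters:]
print("cluster centres ~", pts[peaks])
```

Counting the recovered peaks (and their geometry) is what lets a receiver distinguish, say, PM-QPSK from PM-16QAM without any iterative refinement.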
Attitude determination for high-accuracy submicroradian jitter pointing on space-based platforms
NASA Astrophysics Data System (ADS)
Gupta, Avanindra A.; van Houten, Charles N.; Germann, Lawrence M.
1990-10-01
A description of the requirement definition process is given for a new wideband attitude determination subsystem (ADS) for image motion compensation (IMC) systems. The subsystem consists of either lateral accelerometers functioning in differential pairs or gas-bearing gyros for high-frequency sensors using CCD-based star trackers for low-frequency sensors. To minimize error the sensor signals are combined so that the mixing filter does not allow phase distortion. The two ADS models are introduced in an IMC simulation to predict measurement error, correction capability, and residual image jitter for a variety of system parameters. The IMC three-axis testbed is utilized to simulate an incoming beam in inertial space. Results demonstrate that both mechanical and electronic IMC meet the requirements of image stabilization for space-based observation at submicroradian-jitter levels. Currently available technology may be employed to implement IMC systems.
Random Walk Quantum Clustering Algorithm Based on Space
NASA Astrophysics Data System (ADS)
Xiao, Shufen; Dong, Yumin; Ma, Hongyang
2018-01-01
In the random quantum walk, which is a quantum simulation of the classical walk, data points interacted when selecting the appropriate walk strategy by taking advantage of quantum-entanglement features; thus, the results obtained when the quantum walk is used are different from those when the classical walk is adopted. A new quantum walk clustering algorithm based on space is proposed by applying the quantum walk to clustering analysis. In this algorithm, data points are viewed as walking participants, and similar data points are clustered using the walk function in the pay-off matrix according to a certain rule. The walk process is simplified by implementing a space-combining rule. The proposed algorithm is validated by a simulation test and is proved superior to existing clustering algorithms, namely, Kmeans, PCA + Kmeans, and LDA-Km. The effects of some of the parameters in the proposed algorithm on its performance are also analyzed and discussed. Specific suggestions are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirocha, Jeff D.; Simpson, Matthew D.; Fast, Jerome D.
Simulations of two periods featuring three consecutive low-level jet (LLJ) events in the US Upper Great Plains during the autumn of 2011 were conducted to explore the impacts of various setup configurations and physical process models on simulated flow parameters within the lowest 200 m above the surface, using the Weather Research and Forecasting (WRF) model. Sensitivities of simulated flow parameters to the horizontal and vertical grid spacing, and to the planetary boundary layer (PBL) and land surface model (LSM) physics options, were assessed. Data from a Light Detection and Ranging (lidar) system, deployed to the Weather Forecast Improvement Project (WFIP; Finley et al. 2013), were used to evaluate the accuracy of simulated wind speed and direction at 80 m above the surface, as well as their vertical distributions between 40 and 120 m, covering the typical span of contemporary tall wind turbines. All of the simulations qualitatively captured the overall diurnal cycle of wind speed and stratification, producing LLJs during each overnight period; however, large discrepancies occurred at certain times for each simulation in relation to the observations. 54-member ensembles encompassing changes of the above-discussed configuration parameters displayed a wide range of simulated vertical distributions of wind speed, direction, and potential temperature, reflecting highly variable representations of stratification during the weakly stable overnight conditions. Root mean square error (RMSE) statistics show that different ensemble members performed better or worse in various simulated parameters at different times, with no clearly superior configuration. Simulations using a PBL parameterization designed specifically for the stable conditions investigated herein provided superior overall simulations of wind speed at 80 m, demonstrating the efficacy of targeting improvements of physical process models in areas of known deficiencies. However, the considerable magnitudes of the RMSE values of even the best-performing simulations indicate ample opportunities for further improvement.
NASA Technical Reports Server (NTRS)
Salmasi, A. B. (Editor); Springett, J. C.; Sumida, J. T.; Richter, P. H.
1984-01-01
The design and implementation of the Land Mobile Satellite Service (LMSS) channel simulator as a facility for an end to end hardware simulation of the LMSS communications links, primarily with the mobile terminal is described. A number of studies are reported which show the applications of the channel simulator as a facility for validation and assessment of the LMSS design requirements and capabilities by performing quantitative measurements and qualitative audio evaluations for various link design parameters and channel impairments under simulated LMSS operating conditions. As a first application, the LMSS channel simulator was used in the evaluation of a system based on the voice processing and modulation (e.g., NBFM with 30 kHz of channel spacing and a 2 kHz rms frequency deviation for average talkers) selected for the Bell System's Advanced Mobile Phone Service (AMPS). The various details of the hardware design, qualitative audio evaluation techniques, signal to channel impairment measurement techniques, the justifications for criteria of different parameter selection in regards to the voice processing and modulation methods, and the results of a number of parametric studies are further described.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
NASA Astrophysics Data System (ADS)
Zheng, Guo; Wang, Jue; Wang, Lin; Zhou, Muchun; Chen, Yanru; Song, Minmin
2018-03-01
The scintillation index of pseudo-Bessel-Gaussian Schell-mode (PBGSM) beams propagating through atmospheric turbulence is analyzed with the help of wave optics simulation due to the analytic difficulties. It is found that in the strong fluctuation regime, the PBGSM beams are more resistant to the turbulence with the appropriate parameters β and δ . However, the case is contrary in the weak fluctuation regime. Our simulation results indicate that the PBGSM beams may be applied to free-space optical (FSO) communication systems only when the turbulence is strong or the propagation distance is long.
Real-time failure control (SAFD)
NASA Technical Reports Server (NTRS)
Panossian, Hagop V.; Kemp, Victoria R.; Eckerling, Sherry J.
1990-01-01
The Real Time Failure Control program involves the development of a failure detection algorithm, referred to as the System for Failure and Anomaly Detection (SAFD), for the Space Shuttle Main Engine (SSME). This failure detection approach is signal-based: it entails monitoring SSME measurement signals against predetermined and computed mean values and standard deviations. Twenty-four engine measurements are included in the algorithm, and provisions are made to add more parameters if needed. Six major areas of work are presented: (1) SAFD algorithm development; (2) SAFD simulations; (3) Digital Transient Model failure simulation; (4) closed-loop simulation; (5) current SAFD limitations; and (6) planned enhancements.
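A signal-based scheme of this kind reduces to learning nominal statistics per channel and flagging persistent excursions. The sketch below is a generic illustration, not the actual SAFD logic; the channel, band width K, and persistence count are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Learn the nominal mean and standard deviation of one engine
# measurement, then flag samples outside mean +/- K*sigma for several
# consecutive readings (persistence suppresses single-sample noise).
nominal = 100.0 + 2.0 * rng.normal(size=500)     # e.g. a pressure channel
mu, sigma = nominal.mean(), nominal.std()
K, PERSIST = 4.0, 3

def detect(signal):
    run = 0
    for idx, v in enumerate(signal):
        run = run + 1 if abs(v - mu) > K * sigma else 0
        if run >= PERSIST:
            return idx                           # first confirmed failure sample
    return None

healthy = 100.0 + 2.0 * rng.normal(size=200)
faulty = healthy.copy()
faulty[120:] += 25.0                             # injected step fault

print("healthy:", detect(healthy), " fault confirmed at:", detect(faulty))
```

The persistence requirement trades a few samples of detection latency for a much lower false-alarm rate, which matters when a confirmed detection can shut down an engine.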
Periodic sequence of stabilized wave segments in an excitable medium
NASA Astrophysics Data System (ADS)
Zykov, V. S.; Bodenschatz, E.
2018-03-01
Numerical computations show that a stabilization of a periodic sequence of wave segments propagating through an excitable medium is possible only in a restricted domain within the parameter space. By application of a free-boundary approach, we demonstrate that at the boundary of this domain the parameter H introduced in our Rapid Communication is constant. We show also that the discovered parameter predetermines the propagation velocity and the shape of the wave segments. The predictions of the free-boundary approach are in good quantitative agreement with results from numerical reaction-diffusion simulations performed on the modified FitzHugh-Nagumo model.
Well-tempered metadynamics: a smoothly-converging and tunable free-energy method
NASA Astrophysics Data System (ADS)
Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele
2008-03-01
We present [1] a method for determining the free energy dependence on a selected number of order parameters using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of the alanine dipeptide free energy landscape. [1] A. Barducci, G. Bussi and M. Parrinello, Phys. Rev. Lett., accepted (2007).
Arisholm, Gunnar
2007-05-14
Group velocity mismatch (GVM) is a major concern in the design of optical parametric amplifiers (OPAs) and generators (OPGs) for pulses shorter than a few picoseconds. By simplifying the coupled propagation equations and exploiting their scaling properties, the number of free parameters for a collinear OPA is reduced to a level where the parameter space can be studied systematically by simulations. The resulting set of figures show the combinations of material parameters and pulse lengths for which high performance can be achieved, and they can serve as a basis for a design.
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
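The Monte Carlo exploration of muscle-parameter space can be sketched as a random search over a toy response surface. The two parameters, the sigmoidal activation model, and the 60% target (echoing the rectus femoris activation reported above) are invented for illustration and are not a physiological model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy mapping from two muscle parameters -- maximum isometric force
# scaling and optimal fiber length scaling -- to a predicted peak
# activation during the exercise.
def predicted_activation(f_scale, l_scale):
    return 1.0 / (1.0 + np.exp(-(3 * f_scale - 2 * l_scale - 0.5)))

target = 0.60                                     # observed activation to match
samples = rng.uniform(0.5, 1.5, size=(5000, 2))   # (f_scale, l_scale) draws
errors = np.array([abs(predicted_activation(f, l) - target)
                   for f, l in samples])
best = samples[np.argmin(errors)]

print("best parameters ~", best, "error ~", errors.min())
```

In the study the "model evaluation" is a full musculoskeletal simulation rather than a closed-form function, so combinatorial reduction of the sample set (as the authors describe) keeps the number of expensive runs manageable.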
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
Virtual Plant Tissue: Building Blocks for Next-Generation Plant Growth Simulation
De Vos, Dirk; Dzhurakhalov, Abdiravuf; Stijven, Sean; Klosiewicz, Przemyslaw; Beemster, Gerrit T. S.; Broeckhove, Jan
2017-01-01
Motivation: Computational modeling of plant developmental processes is becoming increasingly important. Cellular resolution plant tissue simulators have been developed, yet they are typically describing physiological processes in an isolated way, strongly delimited in space and time. Results: With plant systems biology moving toward an integrative perspective on development we have built the Virtual Plant Tissue (VPTissue) package to couple functional modules or models in the same framework and across different frameworks. Multiple levels of model integration and coordination enable combining existing and new models from different sources, with diverse options in terms of input/output. Besides the core simulator the toolset also comprises a tissue editor for manipulating tissue geometry and cell, wall, and node attributes in an interactive manner. A parameter exploration tool is available to study parameter dependence of simulation results by distributing calculations over multiple systems. Availability: Virtual Plant Tissue is available as open source (EUPL license) on Bitbucket (https://bitbucket.org/vptissue/vptissue). The project has a website https://vptissue.bitbucket.io. PMID:28523006
Distributed database kriging for adaptive sampling (D²KAS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; ...
2015-03-18
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
Inversion of surface parameters using fast learning neural networks
NASA Technical Reports Server (NTRS)
Dawson, M. S.; Olvera, J.; Fung, A. K.; Manry, M. T.
1992-01-01
A neural network approach to the inversion of surface scattering parameters is presented. Simulated data sets based on a surface scattering model are used so that the data may be viewed as taken from a completely known randomly rough surface. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) are tested on the simulated backscattering data. The RMS error of training the FL network is found to be less than one half the error of the BP network while requiring one to two orders of magnitude less CPU time. When applied to inversion of parameters from a statistically rough surface, the FL method is successful at recovering the surface permittivity, the surface correlation length, and the RMS surface height in less time and with less error than the BP network. Further applications of the FL neural network to the inversion of parameters from backscatter measurements of an inhomogeneous layer above a half space are shown.
Genetic algorithms for the application of Activated Sludge Model No. 1.
Kim, S; Lee, H; Kim, J; Kim, C; Ko, J; Woo, H; Kim, S
2002-01-01
The genetic algorithm (GA) has been integrated into IWA ASM No. 1 to calibrate important stoichiometric and kinetic parameters. The evolutionary feature of the GA was used to locate multiple local optima as well as the global optimum. The objective function of the optimization was designed to minimize the difference between estimated and measured effluent concentrations of the activated sludge system. Both steady-state and dynamic data from the simulation benchmark were used for calibration using a denitrification layout. Depending upon the confidence intervals and objective functions, the proposed method provided distributions over the parameter space. Field data were collected and applied to validate the calibration capacity of the GA. Dynamic calibration was suggested to capture periodic variations of inflow concentrations. Also, in order to verify the proposed method in a real wastewater treatment plant, measured data sets for substrate concentrations were obtained from the Haeundae wastewater treatment plant and used to estimate parameters in the dynamic system. The simulation results with calibrated parameters matched well with the observed effluent COD concentrations.
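As a rough illustration of the evolutionary search used for this kind of calibration, a generic real-coded GA with elitism, blend crossover, and Gaussian mutation within parameter bounds (a sketch under our own assumptions, not the authors' implementation; the toy objective in the usage is ours):

```python
import random

def ga_minimize(objective, bounds, pop_size=30, generations=60, seed=1):
    """Minimal real-coded genetic algorithm: keep the best half as elites,
    create children by averaging two random elites (blend crossover) plus
    small Gaussian mutation, clipped to the parameter bounds."""
    rng = random.Random(seed)

    def clip(v, lo, hi):
        return max(lo, min(hi, v))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=objective)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [clip(0.5 * (x + y) + rng.gauss(0.0, 0.02 * (hi - lo)), lo, hi)
                     for (x, y), (lo, hi) in zip(zip(a, b), bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)
```

For example, `ga_minimize(lambda p: (p[0] - 1.234) ** 2, [(-5.0, 5.0)])` should return a one-element parameter vector close to 1.234. In a real calibration the objective would run the activated-sludge model and return the misfit against measured effluent concentrations.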
NASA Technical Reports Server (NTRS)
Momoh, James A.; Wang, Yanchun; Dolce, James L.
1997-01-01
This paper describes the application of neural network adaptive wavelets for fault diagnosis of space station power system. The method combines wavelet transform with neural network by incorporating daughter wavelets into weights. Therefore, the wavelet transform and neural network training procedure become one stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.
NASA Technical Reports Server (NTRS)
Brown, N. E.
1973-01-01
Parameters that require consideration by the planners and designers when planning for man to perform functions outside the vehicle are presented in terms of the impact the extravehicular crewmen and major EV equipment items have on the mission, vehicle, and payload. Summary data on man's performance capabilities in the weightless space environment are also provided. The performance data are based on orbital and transearth EVA from previous space flight programs and earthbound simulations, such as water immersion and zero-g aircraft.
The Automated Primate Research Laboratory (APRL)
NASA Technical Reports Server (NTRS)
Pace, N.; Smith, G. D.
1972-01-01
A description is given of a self-contained automated primate research laboratory to study the effects of weightlessness on subhuman primates. Physiological parameters such as hemodynamics, respiration, blood constituents, waste, and diet and nutrition are analyzed for abnormalities in the simulated space environment. The Southeast Asian pig-tailed monkey (Macaca nemestrina) was selected for the experiments owing to its relative intelligence and learning capacity. The objective of the program is to demonstrate the feasibility of a man-tended primate space flight experiment.
NASA Astrophysics Data System (ADS)
Balin Talamba, D.; Higy, C.; Joerin, C.; Musy, A.
The paper presents an application concerning hydrological modelling of the Haute-Mentue catchment, located in western Switzerland. A simplified version of Topmodel, developed in a Labview programming environment, was applied with the aim of modelling the hydrological processes of this catchment. Previous research carried out in this region outlined the importance of environmental tracers in studying the hydrological behaviour, and considerable knowledge has been accumulated during this period concerning the mechanisms responsible for runoff generation. In conformity with the theoretical constraints, Topmodel was applied to a Haute-Mentue sub-catchment where tracing experiments showed consistently low contributions of soil water during flood events. The model was applied for two humid periods in 1998. First, the model was calibrated to provide the best estimates of total runoff. However, the simulated components (groundwater and rapid flow) deviated far from the reality indicated by the tracing experiments. Thus, a new calibration was performed including additional information given by environmental tracing. The calibration of the model was done using simulated annealing (SA) techniques, which are easy to implement and statistically allow convergence to a global minimum. The only problem is that the method is time- and computation-intensive. To improve on this, a version of SA known as very fast simulated annealing (VFSA) was used. The principles are the same as for SA: the random search is guided by a certain probability distribution and the acceptance criterion is the same, but VFSA better takes into account the range of variation of each parameter. Practice with Topmodel showed that the energy function has different sensitivities along different dimensions of the parameter space.
The VFSA algorithm allows a differentiated search in relation to the sensitivity of the parameters. Environmental tracing was used with the aim of constraining the parameter space in order to better simulate the hydrological behaviour of the catchment. VFSA highlighted issues in characterising the significance of Topmodel input parameters, as well as their uncertainty, for hydrological modelling.
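A sketch of the VFSA ingredients described above, assuming the standard form of the method: a temperature-dependent, Cauchy-like generating distribution applied per parameter (which naturally respects each parameter's range of variation) and a Metropolis-style acceptance rule. This is a generic illustration, not the authors' code; the schedule and constants are ours:

```python
import math
import random

def vfsa_step(x, temps, bounds, rng):
    """Propose a VFSA candidate: each parameter is perturbed by a draw from
    a temperature-dependent, heavy-tailed generating distribution (mostly
    tiny steps, with rare jumps up to the full parameter range), resampled
    until the candidate lands inside that parameter's bounds."""
    new = []
    for xi, T, (lo, hi) in zip(x, temps, bounds):
        while True:
            u = rng.random()
            y = math.copysign(T * ((1.0 + 1.0 / T) ** abs(2.0 * u - 1.0) - 1.0), u - 0.5)
            cand = xi + y * (hi - lo)
            if lo <= cand <= hi:
                new.append(cand)
                break
    return new

def vfsa_minimize(objective, bounds, iters=2000, t0=1.0, seed=0):
    """Very fast simulated annealing with Metropolis acceptance and the
    schedule T_k = t0 * exp(-k**(1/D)). All D parameters share one
    temperature here, though VFSA permits a separate schedule per
    parameter to match different sensitivities."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    f = objective(x)
    best, fbest = x, f
    for k in range(1, iters + 1):
        temp = t0 * math.exp(-k ** (1.0 / dim))
        cand = vfsa_step(x, [temp] * dim, bounds, rng)
        fc = objective(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc < f or rng.random() < math.exp(-(fc - f) / temp):
            x, f = cand, fc
            if f < fbest:
                best, fbest = x, f
    return best, fbest
```

In the study, the objective would be the misfit between simulated and observed runoff components; the per-parameter bounds are where the tracer-based constraints on the parameter space enter.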
Thermal modeling of a cryogenic turbopump for space shuttle applications.
NASA Technical Reports Server (NTRS)
Knowles, P. J.
1971-01-01
Thermal modeling of a cryogenic pump and a hot-gas turbine in a turbopump assembly proposed for the Space Shuttle is described in this paper. A model, developed by identifying the heat-transfer regimes and incorporating their dependencies into a turbopump system model, included heat transfer for two-phase cryogen, hot-gas (200 R) impingement on turbine blades, gas impingement on rotating disks and parallel plate fluid flow. The 'thermal analyzer' program employed to develop this model was the TRW Systems Improved Numerical Differencing Analyzer (SINDA). This program uses finite differencing with lumped parameter representation for each node. Also discussed are model development, simulations of turbopump startup/shutdown operations, and the effects of varying turbopump parameters on the thermal performance.
NASA Technical Reports Server (NTRS)
Poberezhskiy, Ilya Y; Chang, Daniel H.; Erlig, Herman
2011-01-01
Optical metrology system reliability during a prolonged space mission is often limited by the reliability of pump laser diodes. We developed a metrology laser pump module architecture that meets NASA SIM Lite instrument optical power and reliability requirements by combining the outputs of multiple single-mode pump diodes in a low-loss, high port count fiber coupler. We describe Monte-Carlo simulations used to calculate the reliability of the laser pump module and introduce a combined laser farm aging parameter that serves as a load-sharing optimization metric. Employing these tools, we select pump module architecture, operating conditions, biasing approach and perform parameter sensitivity studies to investigate the robustness of the obtained solution.
Fei, Ding-Yu; Zhao, Xiaoming; Boanca, Cosmin; Hughes, Esther; Bai, Ou; Merrell, Ronald; Rafiq, Azhar
2010-07-01
To design and test an embedded biomedical sensor system that can monitor astronauts' comprehensive physiological parameters and provide real-time data display during extravehicular activities (EVA) in space exploration. An embedded system was developed with an array of biomedical sensors that can be integrated into the spacesuit. Wired communications were tested for physiological data acquisition and data transmission to a computer mounted on the spacesuit during task performances simulating EVA sessions. The sensor integration, data collection and communication, and the real-time data monitoring were successfully validated in the NASA field tests. The developed system may work as an embedded system for monitoring health status during long-term space missions. Copyright 2010 Elsevier Ltd. All rights reserved.
Vortex topology of rolling and pitching wings
NASA Astrophysics Data System (ADS)
Johnson, Kyle; Thurow, Brian; Wabick, Kevin; Buchholz, James; Berdon, Randall
2017-11-01
A flat, rectangular plate with an aspect ratio of 2 was articulated in roll and pitch, individually and simultaneously, to isolate the effects of each motion. The plate was immersed into a Re = 10,000 flow (based on chord length) to simulate forward, flapping flight. Measurements were made using a 3D-3C plenoptic PIV system to allow for the study of vortex topology in the instantaneous flow, in addition to phase-averaged results. The prominent focus is leading-edge vortex (LEV) stability and the lifespan of shed LEVs. The parameter space involves multiple values of advance coefficient J and reduced frequency k for roll and pitch, respectively. This space aims to determine the influence of each parameter on LEVs, which has been identified as an important factor for the lift enhancement seen in flapping wing flight. A variety of results are to be presented characterizing the variations in vortex topology across this parameter space. This work is supported by the Air Force Office of Scientific Research (Grant Number FA9550-16-1-0107, Dr. Douglas Smith, program manager).
A Generalized Simple Formulation of Convective Adjustment ...
Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and yet still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated using data from two different special field campaigns. Then, we implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S. as well as grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36 to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la
Extracting galactic structure parameters from multivariated density estimation
NASA Technical Reports Server (NTRS)
Chen, B.; Creze, M.; Robin, A.; Bienayme, O.
1992-01-01
Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification), and principal component analysis (a dimensionality-reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure otherwise unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a set of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation allows us to determine the density law of the different groups and components in the Galaxy. The output from our software, which can be used in many research fields, also gives the systematic error between the model and the observation via a Bayes rule.
Implementation and simulations of the sphere solution in FAST
NASA Astrophysics Data System (ADS)
Murgolo, F. P.; Schirone, M. G.; Lattanzi, M.; Bernacca, P. L.
1989-06-01
The details of the implementation of the sphere solution software in the Fundamental Astronomy by Space Techniques (FAST) consortium, are described. The simulation results for realistic data sets, both with and without grid-step errors are given. Expected errors on the astrometric parameters of the primary stars and the precision of the reference great circle zero points, are provided as a function of mission duration. The design matrix, the diagrams of the context processor and the processors experimental results are given.
A hyperbolastic type-I diffusion process: Parameter estimation by means of the firefly algorithm.
Barrera, Antonio; Román-Román, Patricia; Torres-Ruiz, Francisco
2018-01-01
A stochastic diffusion process, whose mean function is a hyperbolastic curve of type I, is presented. The main characteristics of the process are studied and the problem of maximum likelihood estimation for the parameters of the process is considered. To this end, the firefly metaheuristic optimization algorithm is applied after bounding the parametric space by a stagewise procedure. Some examples based on simulated sample paths and real data illustrate this development. Copyright © 2017 Elsevier B.V. All rights reserved.
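A minimal sketch of the firefly metaheuristic applied to a bounded parameter space (the generic form of the algorithm; the coefficients, bounds, and toy objective are illustrative, not taken from the paper):

```python
import math
import random

def firefly_minimize(objective, bounds, n=20, iters=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=3):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter
    (lower-objective) ones with attractiveness beta0*exp(-gamma*r^2),
    plus a decaying random walk; moves are clipped to the bounds."""
    rng = random.Random(seed)
    clip = lambda v, lo, hi: max(lo, min(hi, v))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    for _ in range(iters):
        fit = [objective(p) for p in pop]
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:  # firefly j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [clip(xi + beta * (xj - xi)
                                   + alpha * (rng.random() - 0.5) * (hi - lo), lo, hi)
                              for xi, xj, (lo, hi) in zip(pop[i], pop[j], bounds)]
                    fit[i] = objective(pop[i])
        alpha *= 0.97  # cool the random walk so the swarm settles
    best = min(pop, key=objective)
    return best, objective(best)
```

In the study's setting, the objective would be the negative log-likelihood of the diffusion process parameters, and the bounds would come from the stagewise procedure that delimits the parametric space.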
Fellinger, Michael R.; Hector, Jr., Louis G.; Trinkle, Dallas R.
2016-11-29
Here, we present computed datasets on changes in the lattice parameter and elastic stiffness coefficients of BCC Fe due to substitutional Al, B, Cu, Mn, and Si solutes, and octahedral interstitial C and N solutes. The data is calculated using the methodology based on density functional theory (DFT). All the DFT calculations were performed using the Vienna Ab initio Simulations Package (VASP). The data is stored in the NIST dSpace repository.
Using sobol sequences for planning computer experiments
NASA Astrophysics Data System (ADS)
Statnikov, I. N.; Firsov, G. I.
2017-12-01
The paper discusses the use of the Planning LP-search (PLP-search) method for problems of multicriteria synthesis of dynamic systems. The method not only allows the parameter space to be surveyed, on the basis of simulation-model experiments, within specified ranges of parameter variation, but also, through the special randomized nature of the planning of these experiments, enables a quantitative statistical evaluation of the influence of the varied parameters and their pairwise combinations on the properties of the dynamic system.
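For illustration, low-discrepancy point sets of the kind named in the title can be built from radical-inverse (van der Corput) sequences: base 2 coincides with the first dimension of a Sobol sequence, and pairing coprime bases gives a Halton sequence. This is a generic sketch of low-discrepancy sampling, not the PLP-search planner itself:

```python
def van_der_corput(n, base=2):
    """Radical-inverse of integer n: reflect its base-`base` digits about
    the radix point, yielding a low-discrepancy value in [0, 1)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, d = divmod(n, base)
        q += d * bk
        bk /= base
    return q

def halton_points(count, bases=(2, 3)):
    """First `count` points of a Halton sequence: one radical-inverse
    sequence per dimension, using pairwise-coprime bases."""
    return [tuple(van_der_corput(i, b) for b in bases) for i in range(1, count + 1)]
```

Such points cover a unit hypercube far more evenly than pseudo-random draws, which is why they suit the planning of computer experiments; each coordinate is then rescaled to the corresponding parameter's range.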
NASA Technical Reports Server (NTRS)
Coggi, J. V.; Loscutoff, A. V.; Barker, R. S.
1973-01-01
An analytical simulation of the RITE-Integrated Waste Management and Water Recovery System using radioisotopes for thermal energy was prepared for the NASA-Manned Space Flight Center (MSFC). The RITE system is the most advanced concept water-waste management system currently under development and has undergone extended duration testing. It has the capability of disposing of nearly all spacecraft wastes including feces and trash and of recovering water from usual waste water sources: urine, condensate, wash water, etc. All of the process heat normally used in the system is produced from low penalty radioisotope heat sources. The analytical simulation was developed with the G189A computer program. The objective of the simulation was to obtain an analytical simulation which can be used to (1) evaluate the current RITE system steady state and transient performance during normal operating conditions, and also during off normal operating conditions including failure modes; and (2) evaluate the effects of variations in component design parameters and vehicle interface parameters on system performance.
3D Simulation of Multiple Simultaneous Hydraulic Fractures with Different Initial Lengths in Rock
NASA Astrophysics Data System (ADS)
Tang, X.; Rayudu, N. M.; Singh, G.
2017-12-01
Hydraulic fracturing is a widely used technique for extracting shale gas. During this process, fractures with various initial lengths are induced in the rock mass by hydraulic pressure. Understanding the mechanism of propagation and interaction between these induced hydraulic cracks is critical for optimizing the fracking process. In this work, numerical results are presented investigating the effect of in-situ parameters and fluid properties on the growth and interaction of multiple simultaneous hydraulic fractures. A fully coupled 3D fracture simulator, TOUGH-GFEM, is used to simulate the effect of different vital parameters, including in-situ stress, initial fracture length, fracture spacing, fluid viscosity and flow rate, on induced hydraulic fracture growth. The TOUGH-GFEM simulator is based on the 3D finite volume method (FVM) and the partition of unity element method (PUM). The displacement correlation method (DCM) is used for calculating multi-mode (Mode I, II, III) stress intensity factors. The maximum principal stress criterion is used for crack propagation. Key words: hydraulic fracturing, TOUGH, partition of unity element method, displacement correlation method, 3D fracturing simulator
Beta Dips in the Gaia Era: Simulation Predictions of the Galactic Velocity Anisotropy Parameter (β)
NASA Astrophysics Data System (ADS)
Loebman, Sarah; Valluri, Monica; Hattori, Kohei; Debattista, Victor P.; Bell, Eric F.; Stinson, Greg; Christensen, Charlotte; Brooks, Alyson; Quinn, Thomas R.; Governato, Fabio
2017-01-01
Milky Way (MW) science has entered a new era with the advent of Gaia. Combined with spectroscopic survey data, we have newfound access to full 6D phase space information for halo stars. Such data provides an invaluable opportunity to assess kinematic trends as a function of radius and confront simulations with these observations to draw insight about our merger history. I will discuss predictions for the velocity anisotropy parameter, β, drawn from three suites of state-of-the-art cosmological N-body and N-body+SPH MW-like simulations. On average, all three suites predict a monotonically increasing value of β that is radially biased, and beyond 10 kpc, β > 0.5. I will also discuss β as a function of time for individual simulated galaxies. I will highlight when "dips" in β form, the severity (the rarity of β < 0), origin (in situ versus accreted halo), and persistence of these dips. Thereby, I present a cohesive set of predictions of β from simulations for comparison to forthcoming observations.
NASA Astrophysics Data System (ADS)
Choi, Jin-Ho; Seo, Kyong-Hwan
2017-06-01
This work seeks to find the most effective parameters in a deep convection scheme (relaxed Arakawa-Schubert scheme) of the National Centers for Environmental Prediction Climate Forecast System model for improved simulation of the Madden-Julian Oscillation (MJO). A suite of sensitivity experiments is performed by changing physical components such as the relaxation parameter of mass flux for adjustment of the environment, the evaporation rate from large-scale precipitation, the moisture trigger threshold using relative humidity of the boundary layer, and the fraction of re-evaporation of convective (subgrid-scale) rainfall. Among them, the last two parameters are found to produce a significant improvement. Increasing the strength of these two parameters reduces light rainfall that inhibits complete formation of the tropical convective system, or supplies more moisture that helps increase the potential energy of the large-scale environment in the lower troposphere (especially at 700 hPa), leading to moisture preconditioning favorable for further development and eastward propagation of the MJO. In a more humid environment, a more organized MJO structure (i.e., space-time spectral signal, eastward propagation, and tilted vertical structure) is produced.
Impacts of a Stochastic Ice Mass-Size Relationship on Squall Line Ensemble Simulations
NASA Astrophysics Data System (ADS)
Stanford, M.; Varble, A.; Morrison, H.; Grabowski, W.; McFarquhar, G. M.; Wu, W.
2017-12-01
Cloud and precipitation structure, evolution, and cloud radiative forcing of simulated mesoscale convective systems (MCSs) are significantly impacted by ice microphysics parameterizations. Most microphysics schemes assume power law relationships with constant parameters for ice particle mass, area, and terminal fallspeed relationships as a function of size, despite observations showing that these relationships vary in both time and space. To account for such natural variability, a stochastic representation of ice microphysical parameters was developed using the Predicted Particle Properties (P3) microphysics scheme in the Weather Research and Forecasting model, guided by in situ aircraft measurements from a number of field campaigns. Here, the stochastic framework is applied to the "a" and "b" parameters of the unrimed ice mass-size (m-D) relationship (m=aDb) with co-varying "a" and "b" values constrained by observational distributions tested over a range of spatiotemporal autocorrelation scales. Diagnostically altering a-b pairs in three-dimensional (3D) simulations of the 20 May 2011 Midlatitude Continental Convective Clouds Experiment (MC3E) squall line suggests that these parameters impact many important characteristics of the simulated squall line, including reflectivity structure (particularly in the anvil region), surface rain rates, surface and top of atmosphere radiative fluxes, buoyancy and latent cooling distributions, and system propagation speed. The stochastic a-b P3 scheme is tested using two frameworks: (1) a large ensemble of two-dimensional idealized squall line simulations and (2) a smaller ensemble of 3D simulations of the 20 May 2011 squall line, for which simulations are evaluated using observed radar reflectivity and radial velocity at multiple wavelengths, surface meteorology, and surface and satellite measured longwave and shortwave radiative fluxes. 
Ensemble spreads are characterized and compared against initial condition ensemble spreads for a range of variables.
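A toy sketch of the stochastic parameter idea described above (ours, not the P3 code): an AR(1) series stands in for an autocorrelated perturbation of a mass-size parameter, which then feeds the power law m = a D^b. The coefficient values in the usage note are illustrative only:

```python
import math
import random

def ar1_series(n, mean, sigma, rho, rng):
    """Stationary AR(1) sequence with lag-1 autocorrelation rho: a simple
    one-dimensional stand-in for the autocorrelated stochastic parameter
    fields (in space or time) described in the abstract."""
    x = rng.gauss(mean, sigma)
    series = [x]
    for _ in range(n - 1):
        shock = rng.gauss(0.0, sigma * math.sqrt(1.0 - rho * rho))
        x = mean + rho * (x - mean) + shock
        series.append(x)
    return series

def ice_mass(D, a, b):
    """Power-law ice particle mass-size relation m = a * D**b."""
    return a * D ** b
```

In the stochastic scheme, co-varying "a" and "b" draws would additionally be constrained to observed (a, b) distributions, with the autocorrelation scale controlling how smoothly the pairs vary across the model grid and between time steps.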
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrivastava, Manish; Zhao, Chun; Easter, Richard C.
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain.
The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance. This study highlights the large sensitivity of SOA loadings to the particle-phase transformation of SOA volatility, which is neglected in most previous models.
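The variance-based sensitivity analysis above attributes shares of the SOA variance to individual parameters. A crude single-parameter estimator of that share, Var(E[y|x])/Var(y), computed by binning the parameter (a simplified stand-in for the generalized linear model method used in the study; names are ours):

```python
def first_order_sensitivity(xs, ys, bins=10):
    """Fraction of Var(y) explained by one input parameter, estimated as
    the between-bin variance of y over the total variance: a crude
    first-order, variance-based sensitivity index."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins or 1.0  # guard against a constant input
    groups = {}
    for x, y in zip(xs, ys):
        k = min(int((x - lo) / width), bins - 1)
        groups.setdefault(k, []).append(y)
    mean = sum(ys) / len(ys)
    var = sum((y - mean) ** 2 for y in ys) / len(ys)
    var_between = sum(len(g) * (sum(g) / len(g) - mean) ** 2
                      for g in groups.values()) / len(ys)
    return var_between / var if var else 0.0
```

A value near 1 means the parameter alone explains nearly all the output variance (as the volatility transformation switch does for daytime SOA here), while a value near 0 marks a weak contributor such as the dry-deposition parameters.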
Realtime Space Weather Forecasts Via Android Phone App
NASA Astrophysics Data System (ADS)
Crowley, G.; Haacke, B.; Reynolds, A.
2010-12-01
For the past several years, ASTRA has run a first-principles global 3-D fully coupled thermosphere-ionosphere model in real-time for space weather applications. The model is the Thermosphere-Ionosphere Mesosphere Electrodynamics General Circulation Model (TIMEGCM). ASTRA also runs the Assimilative Mapping of Ionospheric Electrodynamics (AMIE) in real-time. Using AMIE to drive the high latitude inputs to the TIMEGCM produces high fidelity simulations of the global thermosphere and ionosphere. These simulations can be viewed on the Android Phone App developed by ASTRA. The SpaceWeather app for the Android operating system is free and can be downloaded from the Google Marketplace. We present the current status of realtime thermosphere-ionosphere space-weather forecasting and discuss the way forward. We explore some of the issues in maintaining real-time simulations with assimilative data feeds in a quasi-operational setting. We also discuss some of the challenges of presenting large amounts of data on a smartphone. The ASTRA SpaceWeather app includes the broadest and most unique range of space weather data yet to be found on a single smartphone app. This is a one-stop-shop for space weather and the only app where you can get access to ASTRA’s real-time predictions of the global thermosphere and ionosphere, high latitude convection and geomagnetic activity. Because of the phone's GPS capability, users can obtain location specific vertical profiles of electron density, temperature, and time-histories of various parameters from the models. The SpaceWeather app has over 9000 downloads, 30 reviews, and a following of active users. It is clear that real-time space weather on smartphones is here to stay, and must be included in planning for any transition to operational space-weather use.
Moeller, Ralf; Cadet, Jean; Douki, Thierry; Mancinelli, Rocco L.; Nicholson, Wayne L.; Panitz, Corinna; Rabbow, Elke; Rettberg, Petra; Spry, Andrew; Stackebrandt, Erko; Vaishampayan, Parag; Venkateswaran, Kasthuri J.
2012-01-01
Abstract Spore-forming bacteria are of particular concern in the context of planetary protection because their tough endospores may withstand certain sterilization procedures as well as the harsh environments of outer space or planetary surfaces. To test their hardiness on a hypothetical mission to Mars, spores of Bacillus subtilis 168 and Bacillus pumilus SAFR-032 were exposed for 1.5 years to selected parameters of space in the experiment PROTECT during the EXPOSE-E mission on board the International Space Station. Mounted as dry layers on spacecraft-qualified aluminum coupons, the “trip to Mars” spores experienced space vacuum, cosmic and extraterrestrial solar radiation, and temperature fluctuations, whereas the “stay on Mars” spores were subjected to a simulated martian environment that included atmospheric pressure and composition, and UV and cosmic radiation. The survival of spores from both assays was determined after retrieval. It was clearly shown that solar extraterrestrial UV radiation (λ≥110 nm) as well as the martian UV spectrum (λ≥200 nm) was the most deleterious factor applied; in some samples only a few survivors were recovered from spores exposed in monolayers. Spores in multilayers survived better by several orders of magnitude. All other environmental parameters encountered by the “trip to Mars” or “stay on Mars” spores did little harm to the spores, which showed about 50% survival or more. The data demonstrate the high chance of survival of spores on a Mars mission, if protected against solar irradiation. These results will have implications for planetary protection considerations. Key Words: Planetary protection—Bacterial spores—Space experiment—Simulated Mars mission. Astrobiology 12, 445–456. PMID:22680691
Simulation and analysis of tape spring for deployed space structures
NASA Astrophysics Data System (ADS)
Chang, Wei; Cao, DongJing; Lian, MinLong
2018-03-01
The tape spring has the configuration of an open (ringent) cylindrical shell, and the mechanical properties of the structure are significantly affected by changes in its geometrical parameters, yet there are few studies on this influence. The bending process of a single tape spring was simulated using simulation software. The variations of critical moment, unfolding moment, and maximum strain energy during bending were investigated, and the effects of the section radius angle, thickness, and length on the driving capability of the simple tape spring were studied using these parameters. Results show that, during bending of a single tape spring, the driving capability and disturbance-resisting capacity grow as the section radius angle increases, decrease as the length increases, and grow as the thickness increases. The research has a certain reference value for improving the kinematic accuracy and reliability of deployable structures.
Post-launch analysis of the deployment dynamics of a space web sounding rocket experiment
NASA Astrophysics Data System (ADS)
Mao, Huina; Sinn, Thomas; Vasile, Massimiliano; Tibert, Gunnar
2016-10-01
Lightweight deployable space webs have been proposed as platforms or frames for the construction of structures in space, where centrifugal forces enable deployment and stabilization. The Suaineadh project aimed to deploy a 2 m × 2 m space web by centrifugal forces in milli-gravity conditions and to act as a test bed for space web technology. Data from former sounding rocket experiments, ground tests, and simulations were used to design the structure, the folding pattern, and the control parameters. A purpose-developed control law and a reaction wheel were used to control the deployment. After ejection from the rocket, the web was deployed, but entanglements occurred since the web did not start to deploy at the specified angular velocity. The deployment dynamics were reconstructed from the information recorded by inertial measurement units and cameras. The nonlinear torque of the motor used to drive the reaction wheel was calculated from the results. Simulations show that if Suaineadh had started to deploy at the specified angular velocity, the web would most likely have been deployed and stabilized in space by the motor, reaction wheel, and controller used in the experiment.
NASA Astrophysics Data System (ADS)
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. To help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models by constraining spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state space, and to efficiently search a large, complex parameter space for behavioural parameter sets whose predictions fall within this region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The search is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
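The interval-based behavioural screening described in this abstract can be sketched in a few lines: a parameter set is accepted only if every performance metric falls inside its uncertainty-aware interval. The metric names and bounds below are hypothetical illustrations, not values from the study.

```python
# Sketch of interval-based behavioural screening: a parameter set
# is "behavioural" only if every metric falls inside its interval.
# Metric names and bounds are hypothetical, not from the study.

def is_behavioural(metrics, intervals):
    """metrics: dict name -> simulated value
    intervals: dict name -> (low, high) acceptable range."""
    return all(
        intervals[name][0] <= value <= intervals[name][1]
        for name, value in metrics.items()
    )

# Example: discharge bias within +/-10%, and a relative groundwater
# level ordering encoded as a 0/1 inequality flag that must equal 1.
intervals = {"discharge_bias": (-0.10, 0.10), "gw_order_ok": (1, 1)}
print(is_behavioural({"discharge_bias": 0.04, "gw_order_ok": 1}, intervals))  # True
print(is_behavioural({"discharge_bias": 0.15, "gw_order_ok": 1}, intervals))  # False
```

In a multi-objective setting, each interval violation could instead be returned as a distance-to-interval objective for the optimiser to minimise.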
The Planetary and Space Simulation Facilities at DLR Cologne
NASA Astrophysics Data System (ADS)
Rabbow, Elke; Parpart, André; Reitz, Günther
2016-06-01
Astrobiology strives to increase our knowledge of the origin, evolution and distribution of life, on Earth and beyond. In past centuries, life has been found on Earth in environments with conditions so extreme they were expected to be uninhabitable. Scientific investigations of the underlying metabolic mechanisms and strategies that lead to the high adaptability of these extremophile organisms increase our understanding of the evolution and distribution of life on Earth. Life as we know it depends on the availability of liquid water. Exposure of organisms to defined and complex extreme environmental conditions, in particular those that limit water availability, allows the investigation of survival mechanisms as well as an estimation of the possibility of the distribution of selected organisms to, and their survivability on, other celestial bodies. Space missions in low Earth orbit (LEO) give experiments access to complex environmental conditions not available on Earth, but studies on the molecular and cellular mechanisms of adaptation to these hostile conditions and on the limits of life cannot be performed exclusively in space experiments. Experimental space is limited and allows only the investigation of selected endpoints. An additional intensive ground-based program is required, with easily accessible facilities capable of simulating space and planetary environments, in particular with a focus on temperature, pressure, atmospheric composition and short-wavelength solar ultraviolet (UV) radiation. DLR Cologne operates a number of Planetary and Space Simulation facilities (PSI) in which microorganisms from extreme terrestrial environments, or known for their high adaptability, are exposed for mechanistic studies. Space or planetary parameters are simulated individually or in combination in temperature-controlled vacuum facilities equipped with a variety of defined and calibrated irradiation sources.
The PSI facilities support basic research and have been used repeatedly for pre-flight test programs of several astrobiological space missions. Parallel experiments on the ground provided essential complementary data supporting the scientific interpretation of the data received from the space missions.
Improved Parameter-Estimation With MRI-Constrained PET Kinetic Modeling: A Simulation Study
NASA Astrophysics Data System (ADS)
Erlandsson, Kjell; Liljeroth, Maria; Atkinson, David; Arridge, Simon; Ourselin, Sebastien; Hutton, Brian F.
2016-10-01
Kinetic analysis can be applied both to dynamic PET and dynamic contrast enhanced (DCE) MRI data. We have investigated the potential of MRI-constrained PET kinetic modeling using simulated [18F]2-FDG data for skeletal muscle. The volume of distribution, Ve, for the extra-vascular extra-cellular space (EES) is the link between the two models: it can be estimated by DCE-MRI, and then used to reduce the number of parameters to estimate in the PET model. We used a 3-tissue-compartment model with 5 rate constants (3TC5k), in order to distinguish between the EES and the intra-cellular space (ICS). Time-activity curves were generated by simulation using the 3TC5k model for 3 different Ve values under basal and insulin-stimulated conditions. Noise was added and the data were fitted with the 2TC3k model and with the 3TC5k model with and without the Ve constraint. One hundred noise realisations were generated at 4 different noise levels. The results showed reductions in bias and variance with the Ve constraint in the 3TC5k model. We calculated the parameter k3", representing the combined effect of glucose transport across the cellular membrane and phosphorylation, as an extra outcome measure. For k3", the average coefficient of variation was reduced from 52% to 9.7%, while for k3 in the standard 2TC3k model it was 3.4%. The accuracy of the parameters estimated with our new modeling approach depends on the accuracy of the assumed Ve value. In conclusion, we have shown that, by utilising information that could be obtained from DCE-MRI in the kinetic analysis of [18F]2-FDG-PET data, it is in principle possible to obtain better parameter estimates with a more complex model, which may provide additional information as compared to the standard model.
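The standard irreversible 2-tissue-compartment FDG model (2TC3k) used here as the baseline can be sketched as a pair of coupled ODEs integrated against a plasma input function. The rate constants and the toy plasma curve below are illustrative values, not fitted values from the study.

```python
import numpy as np

# Sketch of the irreversible 2-tissue-compartment FDG model (2TC3k):
#   dC1/dt = K1*Cp - (k2 + k3)*C1   (free/exchangeable compartment)
#   dC2/dt = k3*C1                  (trapped, phosphorylated compartment)
# Rate constants and the plasma input are illustrative, not from the study.
def simulate_2tc3k(K1, k2, k3, t, cp):
    c1 = np.zeros_like(t)
    c2 = np.zeros_like(t)
    dt = t[1] - t[0]
    for i in range(1, len(t)):            # simple forward-Euler integration
        dc1 = K1 * cp[i - 1] - (k2 + k3) * c1[i - 1]
        dc2 = k3 * c1[i - 1]
        c1[i] = c1[i - 1] + dt * dc1
        c2[i] = c2[i - 1] + dt * dc2
    return c1 + c2                        # total tissue time-activity curve

t = np.linspace(0.0, 60.0, 3601)          # minutes
cp = 10 * t * np.exp(-t / 2.0)            # toy plasma input function
tac = simulate_2tc3k(K1=0.1, k2=0.2, k3=0.05, t=t, cp=cp)
print(round(float(tac[-1]), 3))
```

Fitting would then minimise the misfit between such a simulated curve and measured PET data over (K1, k2, k3); the MRI-derived Ve constraint reduces the dimension of that search.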
Changes in muscles accompanying non-weight-bearing and weightlessness
NASA Technical Reports Server (NTRS)
Tischler, M. E.; Henriksen, E. J.; Jaspers, S. R.; Jacob, S.; Kirby, C.
1989-01-01
Results of hindlimb suspension and space flight experiments with rats examine the effects of weightlessness simulation, weightlessness, and delay in postflight recovery of animals. Parameters examined were body mass, protein balance, amino acid metabolism, glucose and glycogen metabolism, and hormone levels. Tables show metabolic responses to unweighting of the soleus muscle.
A geostatistical extreme-value framework for fast simulation of natural hazard events
Stephenson, David B.
2016-01-01
We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768
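As a minimal illustration of the generalized Pareto marginals this framework uses, the following samples exceedances by the inverse-CDF transform and checks the sample mean against the closed-form GPD mean. The shape and scale values are illustrative only.

```python
import numpy as np

# Inverse-CDF sampling from a generalized Pareto distribution (GPD),
# the marginal model used in the framework. The shape (xi) and scale
# (sigma) values below are illustrative, not fitted hazard parameters.
def sample_gpd(xi, sigma, n, rng):
    u = rng.random(n)
    if xi == 0.0:
        return -sigma * np.log(1.0 - u)          # exponential limit case
    return sigma / xi * ((1.0 - u) ** (-xi) - 1.0)

rng = np.random.default_rng(0)
xi, sigma = 0.1, 5.0
x = sample_gpd(xi, sigma, 200_000, rng)

# For xi < 1 the GPD mean is sigma / (1 - xi).
print(round(float(x.mean()), 2), round(sigma / (1 - xi), 2))
```

In the full framework the GPD parameters vary smoothly in space (via the additive model forms) and the uniform draws would be replaced by spatially dependent ones from the t-process copula.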
Automated Knowledge Discovery from Simulators
NASA Technical Reports Server (NTRS)
Burl, Michael C.; DeCoste, D.; Enke, B. L.; Mazzoni, D.; Merline, W. J.; Scharenbroich, L.
2006-01-01
In this paper, we explore one aspect of knowledge discovery from simulators, the landscape characterization problem, where the aim is to identify regions in the input/parameter/model space that lead to a particular output behavior. Large-scale numerical simulators are in widespread use by scientists and engineers across a range of government agencies, academia, and industry; in many cases, simulators provide the only means to examine processes that are infeasible or impossible to study otherwise. However, the cost of simulation studies can be quite high, both in terms of the time and computational resources required to conduct the trials and the manpower needed to sift through the resulting output. Thus, there is strong motivation to develop automated methods that enable more efficient knowledge extraction.
Parameter learning for performance adaptation
NASA Technical Reports Server (NTRS)
Peek, Mark D.; Antsaklis, Panos J.
1990-01-01
A parameter learning method is introduced and used to broaden the region of operability of the adaptive control system of a flexible space antenna. The learning system guides the selection of control parameters in a process leading to optimal system performance. A grid search procedure is used to estimate an initial set of parameter values. The optimization search procedure uses a variation of the Hooke and Jeeves multidimensional search algorithm. The method is applicable to any system where performance depends on a number of adjustable parameters. A mathematical model is not necessary, as the learning system can be used whenever performance can be measured via simulation or experiment. The results of two experiments, transient regulation and command following, are presented.
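The Hooke and Jeeves search mentioned above can be sketched as follows. This is a simplified variant (coordinate-wise exploratory moves with step shrinking, omitting the pattern-move acceleration of the full algorithm), applied to an illustrative quadratic objective rather than the antenna performance measure.

```python
import numpy as np

# Simplified Hooke-Jeeves-style derivative-free search: probe each
# coordinate by +/- step, keep improvements, shrink the step when no
# move helps. Objective and settings are illustrative, not from the paper.
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        x_new, f_new = x.copy(), fx
        for i in range(len(x)):            # exploratory moves per axis
            for delta in (step, -step):
                trial = x_new.copy()
                trial[i] += delta
                ft = f(trial)
                if ft < f_new:
                    x_new, f_new = trial, ft
                    break
        if f_new < fx:                     # accept the improved base point
            x, fx = x_new, f_new
        else:                              # no improvement: refine the step
            step *= shrink
            if step < tol:
                break
    return x, fx

x, fx = hooke_jeeves(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
print(np.round(x, 3))   # near [1, -2]
```

No gradient or model is needed, which matches the abstract's point that performance only has to be measurable by simulation or experiment.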
Switching LPV Control for High Performance Tactical Aircraft
NASA Technical Reports Server (NTRS)
Lu, Bei; Wu, Fen; Kim, SungWan
2004-01-01
This paper examines a switching Linear Parameter-Varying (LPV) control approach to determine whether it is practical for flight control design over a wide angle-of-attack region. The approach is based on multiple parameter-dependent Lyapunov functions. The full parameter space is partitioned into overlapping subspaces, and a family of LPV controllers is designed, each suitable for a specific parameter subspace. Hysteresis switching logic is used to accomplish the transition among different parameter subspaces. The proposed switching LPV control scheme is applied to an F-16 aircraft model with different actuator dynamics in the low and high angle-of-attack regions. The nonlinear simulation results show that the aircraft performs well when switching among different angle-of-attack regions.
Adaptive fuzzy controller for thermal comfort inside the air-conditioned automobile chamber
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, L.; Yu, B.; Chen, Z.
1999-07-01
In order to meet passengers' demand for thermal comfort, an adaptive fuzzy logic control design methodology is applied to the automobile air-conditioner system. In accordance with the theory of air flow and heat transfer, the air temperature field inside the air-conditioned automobile chamber is simulated by a set of simplified semi-empirical formulas. Then, instead of the PMV (Predicted Mean Vote) criterion, an RIV (Real Individual Vote) criterion is adopted as the basis of the control of passengers' thermal comfort. The proposed controller is applied to air temperature regulation at the individual passenger position. The control procedure is based on partitioning the state space of the system into cell groups and fuzzily quantizing the state space into these cells. When the system model has some parameter perturbation, the controller can also adjust its control parameters to compensate for the perturbation and maintain good performance. The learning procedure shows its effectiveness in both computer simulation and experiments. The final results demonstrate the good performance of this adaptive fuzzy controller.
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narasimha S.
2012-01-01
Simulation studies to optimize sensing of CO2 and O2 from space are described. Uncertainties in line-by-line calculations that were unaccounted for in previous studies are identified. Multivariate methods are employed for the selection of measurement wavelengths. The Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission recommended by the NRC Decadal Survey has a stringent accuracy requirement of 0.5% or better in XCO2 retrievals. NASA LaRC and its partners are investigating the use of the 1.57 μm band of CO2 and the 1.26-1.27 μm band of oxygen for XCO2 measurements. As part of these efforts, we are carrying out simulation studies using a lidar modeling framework being developed at NASA LaRC to predict the performance of our proposed ASCENDS mission implementation [1]. Our study is aimed at predicting the sources and magnitudes of errors anticipated in XCO2 retrievals for further error minimization through the selection of optimum excitation parameters and the development of better retrieval methods.
Regulation of NF-κB oscillation by spatial parameters in true intracellular space (TiCS)
NASA Astrophysics Data System (ADS)
Ohshima, Daisuke; Sagara, Hiroshi; Ichikawa, Kazuhisa
2013-10-01
Transcription factor NF-κB is activated by cytokine stimulation, viral infection, or a hypoxic environment, leading to its translocation to the nucleus. Nuclear NF-κB is then exported from the nucleus back to the cytoplasm, and through repeated import and export, NF-κB shows damped oscillation with a period of 1.5-2.0 h. The oscillation pattern of NF-κB is thought to determine the gene expression profile. We previously published a computational simulation of the oscillation of nuclear NF-κB in a 3D spherical cell, and showed the importance of spatial parameters such as the diffusion coefficient and the locus of translation in determining the oscillation pattern. Although the value of the diffusion coefficient is inherent to a protein species, its effective value can be modified by organelle crowding in the intracellular space. Here we tested this possibility by computer simulation. The results indicate that the effective value of the diffusion coefficient is significantly changed by organelle crowding, and that this alters the oscillation pattern of nuclear NF-κB.
Advances in Discrete-Event Simulation for MSL Command Validation
NASA Technical Reports Server (NTRS)
Patrikalakis, Alexander; O'Reilly, Taifun
2013-01-01
In the last five years, the discrete event simulator, SEQuence GENerator (SEQGEN), developed at the Jet Propulsion Laboratory to plan deep-space missions, has greatly increased uplink operations capacity to deal with increasingly complicated missions. In this paper, we describe how the Mars Science Laboratory (MSL) project makes full use of an interpreted environment to simulate change in more than fifty thousand flight software parameters and conditional command sequences to predict the result of executing a conditional branch in a command sequence, and enable the ability to warn users whenever one or more simulated spacecraft states change in an unexpected manner. Using these new SEQGEN features, operators plan more activities in one sol than ever before.
Some issues in the simulation of two-phase flows: The relative velocity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gräbel, J.; Hensel, S.; Ueberholz, P.
In this paper we compare numerical approximations for solving the Riemann problem for a hyperbolic two-phase flow model in two-dimensional space. The model is based on mixture parameters of state in which the relative velocity between the two phases is taken into account. This relative velocity appears as a main discontinuous flow variable throughout the complete wave structure and cannot be recovered correctly by some numerical techniques when simulating the associated Riemann problem. Simulations are validated by comparing the results of the numerical calculation qualitatively with the OpenFOAM software. Simulations also indicate that OpenFOAM is unable to resolve the relative velocity associated with the Riemann problem.
A Real-time 3D Visualization of Global MHD Simulation for Space Weather Forecasting
NASA Astrophysics Data System (ADS)
Murata, K.; Matsuoka, D.; Kubo, T.; Shimazu, H.; Tanaka, T.; Fujita, S.; Watari, S.; Miyachi, H.; Yamamoto, K.; Kimura, E.; Ishikura, S.
2006-12-01
Recently, many satellites for communication networks and scientific observation have been launched into the vicinity of the Earth (geo-space). The electromagnetic (EM) environment around these spacecraft is constantly influenced by the solar wind blowing from the Sun and by induced electromagnetic fields. These occasionally cause various troubles or damage to the spacecraft, such as charging and interference. It is therefore important to forecast the geo-space EM environment, just as ground weather is forecast. Owing to recent remarkable progress in supercomputer technologies, numerical simulations have become powerful research methods in solar-terrestrial physics. To meet the needs of space weather forecasting, NICT (National Institute of Information and Communications Technology) has developed a real-time global MHD simulation system of solar wind-magnetosphere-ionosphere coupling, which runs on an SX-6 supercomputer. Real-time solar wind parameters from the ACE spacecraft, at one-minute intervals, are adopted as boundary conditions for the simulation. Simulation results (2-D plots) are updated every minute on a NICT website. However, 3-D visualization of simulation results is indispensable for forecasting space weather more accurately. In the present study, we developed a real-time 3-D website for the global MHD simulations. The 3-D visualizations of simulation results are updated every 20 minutes in the following three formats: (1) streamlines of magnetic field lines, (2) isosurfaces of temperature in the magnetosphere, and (3) isolines of conductivity with an orthogonal plane of potential in the ionosphere. We also developed a 3-D viewer application, built on AVS/Express, that runs in the Internet Explorer browser as an ActiveX control. Numerical data are saved in HDF5-format data files every minute.
Users can easily search, retrieve and plot past simulation results (3D visualization data and numerical data) by using the STARS (Solar-terrestrial data Analysis and Reference System). The STARS is a data analysis system for satellite and ground-based observation data for solar-terrestrial physics.
Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.
2017-01-01
A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
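The random-walk parameter model described in this abstract can be illustrated with a scalar Kalman filter tracking a slowly drifting gain. This is a toy sketch of that one idea, not the authors' dual-filter implementation, and all numerical values are invented.

```python
import numpy as np

# Toy sketch: a scalar Kalman filter tracking a time-varying gain a_k
# in y_k = a_k * u_k + v_k, with a_k modeled as a random walk (the same
# parameter model the abstract uses). All values here are invented.
rng = np.random.default_rng(1)
n = 500
a_true = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) / n)   # drifting gain
u = rng.normal(size=n)                                       # known input
y = a_true * u + 0.1 * rng.normal(size=n)                    # noisy output

a_hat, P = 0.0, 1.0          # parameter estimate and its variance
Q, R = 1e-4, 0.01            # random-walk (process) and measurement noise
est = np.empty(n)
for k in range(n):
    P += Q                                   # predict: a_k = a_{k-1} + w_k
    K = P * u[k] / (u[k] ** 2 * P + R)       # Kalman gain for y = a*u + v
    a_hat += K * (y[k] - a_hat * u[k])       # measurement update
    P *= (1.0 - K * u[k])                    # covariance update
    est[k] = a_hat

# After an initial transient the estimate should track the drifting gain.
print(round(float(np.abs(est[100:] - a_true[100:]).mean()), 3))
```

The dual filter in the paper runs two such estimators concurrently (states and parameters), but the prediction-update cycle with a random-walk process model is the same.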
NASA Astrophysics Data System (ADS)
den, Mitsue; Amo, Hiroyoshi; Sugihara, Kohta; Takei, Toshifumi; Ogawa, Tomoya; Tanaka, Takashi; Watari, Shinichi
We describe a prediction system for the 1-AU arrival times of interplanetary shock waves associated with coronal mass ejections (CMEs). The system is based on modeling of the shock propagation using a three-dimensional adaptive mesh refinement (AMR) code. Once a CME is observed by LASCO/SOHO, the ambient solar wind is first obtained by a numerical simulation that reproduces the solar wind parameters observed at that time by the ACE spacecraft. We then input the expansion speed and occurrence position of that CME as initial conditions for a CME model, and a 3-D simulation of the CME and the shock propagation is performed until the shock wave passes 1 AU. Parameter input, execution of the simulation, and output of the results are all available via the Web, so that users who are not familiar with computers or simulations, or who are not researchers, can use this system to predict the shock passage time. The simulated CME and shock evolution are visualized concurrently with the simulation, and snapshots appear on the Web automatically, so that users can follow the propagation. This system is expected to be useful for space weather forecasters. We describe the system and the simulation model in detail.
Disentangling Redshift-Space Distortions and Nonlinear Bias using the 2D Power Spectrum
Jennings, Elise; Wechsler, Risa H.
2015-08-07
We present the nonlinear 2D galaxy power spectrum, P(k, µ), in redshift space, measured from the Dark Sky simulations, using galaxy catalogs constructed with both halo occupation distribution and subhalo abundance matching methods, chosen to represent an intermediate-redshift sample of luminous red galaxies. We find that the information content in individual µ (cosine of the angle to the line of sight) bins is substantially richer than in multipole moments, and show that this can be used to isolate the impact of nonlinear growth and redshift-space distortion (RSD) effects. Using the µ < 0.2 simulation data, which we show is not impacted by RSD effects, we can successfully measure the nonlinear bias to an accuracy of ~5% at k < 0.6 h Mpc^-1. This use of individual µ bins to extract the nonlinear bias successfully removes a large parameter degeneracy when constraining the linear growth rate of structure. We carry out a joint parameter estimation, using the low-µ simulation data to constrain the nonlinear bias and µ > 0.2 to constrain the growth rate, and show that f can be constrained to ~26(22)% to a kmax < 0.4(0.6) h Mpc^-1 from clustering alone using a simple dispersion model, for a range of galaxy models. Our analysis of individual µ bins also reveals interesting physical effects which arise simply from different methods of populating halos with galaxies. We also find a prominent turnaround scale, at which RSD damping effects are greater than the nonlinear growth, which differs not only for each µ bin but also for each galaxy model. These features may provide unique signatures which could be used to shed light on the galaxy-dark matter connection. Furthermore, the idea of separating nonlinear growth and RSD effects by making use of the full information in the 2D galaxy power spectrum yields significant improvements in constraining cosmological parameters and may be a promising probe of galaxy formation models.
NASA Astrophysics Data System (ADS)
Stegen, Ronald; Gassmann, Matthias
2017-04-01
The use of a broad variety of agrochemicals is essential for modern industrialized agriculture. During the last decades, awareness of the side effects of their use has grown, and with it the requirement to reproduce, understand and predict the behaviour of these agrochemicals in the environment, in order to optimize their use and minimize their side effects. Modern modelling has made great progress in understanding and predicting the fate of these chemicals with numerical methods. While the behaviour of the applied chemicals is often investigated and modelled, most studies only simulate the parent chemicals, assuming complete annihilation of the substance. However, due to a diversity of chemical, physical and biological processes, the substances are instead transformed into new chemicals, which are themselves transformed until, at the end of the chain, the substance is completely mineralized. During this process, the fate of each transformation product is determined by its own environmental characteristics, and the pathways and results of transformation can differ largely by substance and by environmental influences, which can vary between compartments of the same site. Simulating transformation products introduces additional model uncertainties. Thus, the calibration effort increases compared to simulations of the transport and degradation of the primary substance alone. The simulation of the necessary physical processes requires a lot of calculation time. Because of that, few physically-based models offer the possibility to simulate transformation products at all, and mostly at the field scale. The few models available for the catchment scale are not optimized for this task, i.e. they are only able to simulate a single parent compound and up to two transformation products. Thus, for simulations over large physico-chemical parameter spaces, the enormous calculation time of the underlying hydrological model diminishes the overall performance.
In this study, the structure of the model ZIN-AGRITRA is re-designed for the transport and transformation of an unlimited number of agrochemicals in the soil-water-plant system at catchment scale. Besides a good hydrological standard, the focus is on a flexible representation of transformation processes and on optimization for the use of large numbers of different substances. Due to the new design, a reduction of the calculation time per tested substance is achieved, allowing faster testing of parameter spaces. Additionally, the new concept allows for the consideration of different transformation processes and products in different environmental compartments. A first test of the calculation time improvements and flexible transformation pathways was performed in a Mediterranean meso-scale catchment, using the insecticide chlorpyrifos and two of its transformation products, which emerge from different transformation processes, as test substances.
Examining a Thermodynamic Order Parameter of Protein Folding.
Chong, Song-Ho; Ham, Sihyun
2018-05-08
Dimensionality reduction with a suitable choice of order parameters or reaction coordinates is commonly used for analyzing high-dimensional time-series data generated by atomistic biomolecular simulations. So far, geometric order parameters, such as the root mean square deviation, the fraction of native amino acid contacts, and collective coordinates that best characterize rare or large conformational transitions, have prevailed in protein folding studies. Here, we show that the solvent-averaged effective energy, which is a thermodynamic quantity yet unambiguously defined for individual protein conformations, serves as a good order parameter of protein folding. This is illustrated through application to a folding-unfolding simulation trajectory of the villin headpiece subdomain. We rationalize the suitability of the effective energy as an order parameter by the funneledness of the underlying protein free energy landscape. We also demonstrate that an improved conformational space discretization is achieved by incorporating the effective energy. The most distinctive feature of this thermodynamic order parameter is that it points to near-native folded structures even when knowledge of the native structure is lacking, and the use of the effective energy will also find applications in combination with methods of protein structure prediction.
Free-Space Measurements of Dielectrics and Three-Dimensional Periodic Metamaterials
NASA Astrophysics Data System (ADS)
Kintner, Clifford E.
This thesis presents free-space measurements of a periodic metamaterial structure. The metamaterial unit cell consists of two dielectric sheets intersecting at 90 degrees. The dielectric is a polyetherimide-based material 0.001" thick. Each sheet has a copper capacitively-loaded loop (CLL) structure on the front and a cut-wire structure on the back. Foam material is used to support the unit cells. The unit cell repeats 40 times in the x-direction, 58 times in the y-direction and 5 times in the z-direction. The sample measures 12" x 12" x 1" in total. We use a free-space broadband system comprising a pair of dielectric-lens horn antennas with bandwidth from 5.8 GHz to 110 GHz, connected to an HP PNA-series network analyzer. The dielectric lenses focus the incident beam to a footprint measuring 1 wavelength by 1 wavelength. The sample holder is positioned at the focal point between the two antennas. In this work, the coefficients of transmission and reflection (the S-parameters S21 and S11) are measured at frequencies from 12.4 GHz up to 30 GHz. Simulations are used to validate the measurements, using the Ansys HFSS commercial software package on the Arkansas High Performance Computing Center cluster. The simulation results successfully validate the S-parameter measurements, in particular the amplitudes. An algorithm based on the Nicolson-Ross-Weir (NRW) method is implemented to extract the permittivity and permeability values of the metamaterial under test. The results show epsilon-negative, mu-negative and double-negative parameters within the measured frequency range.
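A minimal NRW-style extraction for a homogeneous slab in free space can be sketched as a round trip: generate S-parameters for a slab of known εr and μr, then recover them from S11 and S21. This assumes normal incidence and a slab thin enough that the principal logarithm branch applies; the material values, thickness, and frequency below are illustrative, not the thesis measurements.

```python
import numpy as np

c0 = 299_792_458.0  # speed of light, m/s

def slab_sparams(eps_r, mu_r, d, f):
    """Normal-incidence S-parameters of a homogeneous slab in free space."""
    k0 = 2 * np.pi * f / c0
    n = np.sqrt(eps_r * mu_r)             # refractive index
    z = np.sqrt(mu_r / eps_r)             # relative wave impedance
    g = (z - 1) / (z + 1)                 # single-interface reflection
    t = np.exp(-1j * n * k0 * d)          # one-way propagation factor
    s11 = g * (1 - t**2) / (1 - g**2 * t**2)
    s21 = t * (1 - g**2) / (1 - g**2 * t**2)
    return s11, s21

def nrw_extract(s11, s21, d, f):
    """NRW-style extraction (principal branch: slab thinner than a wavelength)."""
    k0 = 2 * np.pi * f / c0
    v1, v2 = s21 + s11, s21 - s11
    x = (1 - v1 * v2) / (v1 - v2)
    gamma = x + np.sqrt(x**2 - 1)
    if abs(gamma) > 1:                    # choose the physical root |Gamma| <= 1
        gamma = x - np.sqrt(x**2 - 1)
    t = (v1 - gamma) / (1 - v1 * gamma)
    n = 1j * np.log(t) / (k0 * d)         # refractive index from propagation
    z = (1 + gamma) / (1 - gamma)         # impedance from reflection
    return n / z, n * z                   # eps_r, mu_r

f, d = 15e9, 2e-3                         # illustrative frequency and thickness
s11, s21 = slab_sparams(4.0 - 0.1j, 1.0, d, f)
eps, mu = nrw_extract(s11, s21, d, f)
print(np.round(eps, 3), np.round(mu, 3))
```

For thicker samples the logarithm's branch ambiguity must be resolved (e.g. by group-delay or multi-frequency phase unwrapping), which is the main practical complication in NRW extraction.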
NASA Astrophysics Data System (ADS)
Sheikholeslami, R.; Hosseini, N.; Razavi, S.
2016-12-01
Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models many times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive union of sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
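Plain one-stage LHS, the baseline that PLHS extends, can be sketched in a few lines: each dimension is divided into n equal strata, and each stratum receives exactly one point, in shuffled order. This is a sketch of ordinary LHS, not the PLHS algorithm itself.

```python
import numpy as np

# Plain one-stage Latin hypercube sample on [0, 1)^d: every 1-D
# projection hits each of the n equal-width strata exactly once.
# (This is the baseline LHS that PLHS extends, not PLHS itself.)
def latin_hypercube(n, d, rng):
    sample = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)                 # shuffled stratum order
        sample[:, j] = (perm + rng.random(n)) / n  # one point per stratum
    return sample

rng = np.random.default_rng(42)
pts = latin_hypercube(10, 3, rng)

# Verify the stratification property: in each column, the stratum
# indices floor(x * n) are exactly {0, 1, ..., n-1}.
for j in range(3):
    print(sorted(np.floor(pts[:, j] * 10).astype(int)) == list(range(10)))
```

PLHS's contribution is to emit such points in slices so that every prefix of the sequence (not just the final set) retains this stratification, which ordinary LHS cannot guarantee.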
Computer-Aided System Engineering and Analysis (CASE/A) Programmer's Manual, Version 5.0
NASA Technical Reports Server (NTRS)
Knox, J. C.
1996-01-01
The Computer Aided System Engineering and Analysis (CASE/A) Version 5.0 Programmer's Manual provides the programmer and user with information regarding the internal structure of the CASE/A 5.0 software system. CASE/A 5.0 is a trade study tool that provides modeling/simulation capabilities for analyzing environmental control and life support systems and active thermal control systems. CASE/A has been successfully used in studies such as the evaluation of carbon dioxide removal in the space station. CASE/A modeling provides a graphical and command-driven interface for the user. This interface allows the user to construct a model by placing equipment components in a graphical layout of the system hardware, then connecting the components via flow streams and defining their operating parameters. Once the equipment is placed, the simulation time and other control parameters can be set to run the simulation based on the model constructed. After completion of the simulation, graphical plots or text files can be obtained for evaluation of the simulation results over time. Additionally, users can control the simulation and extract information at various times in the simulation (e.g., control equipment operating parameters over the simulation time or extract plot data) by using "User Operations (OPS) Code." This OPS code is written in FORTRAN with a canned set of utility subroutines for performing common tasks. CASE/A Version 5.0 software runs under the VAX VMS(Trademark) environment. It utilizes the Tektronix 4014(Trademark) graphics display system and the VT100(Trademark) text manipulation/display system.
Development of a Crosslink Channel Simulator for Simulation of Formation Flying Satellite Systems
NASA Technical Reports Server (NTRS)
Hart, Roger; Hunt, Chris; Burns, Rich D.
2003-01-01
Multi-vehicle missions are an integral part of NASA's and other space agencies' current and future business. These multi-vehicle missions generally involve collectively utilizing the array of instrumentation dispersed throughout the system of space vehicles, and communicating via crosslinks to achieve mission goals such as formation flying, autonomous operation, and collective data gathering. NASA's Goddard Space Flight Center (GSFC) is developing the Formation Flying Test Bed (FFTB) to provide hardware-in-the-loop simulation of these crosslink-based systems. The goal of the FFTB is to reduce mission risk, assist in mission planning and analysis, and provide a technology development platform on which algorithms can be developed for mission functions such as precision formation flying, synchronization, and inter-vehicle data synthesis. The FFTB will provide a medium in which the various crosslink transponders used in multi-vehicle missions can be plugged in for development and test. An integral part of the FFTB is the Crosslink Channel Simulator (CCS), which is placed into the communications channel between the crosslinks under test and is used to simulate on-orbit effects on the communications channel due to relative vehicle motion or antenna misalignment. The CCS is based on the Starlight software-programmable platform developed at General Dynamics Decision Systems, which gives the CCS the ability to be modified on the fly to adapt to new crosslink formats or mission parameters.
NASA Astrophysics Data System (ADS)
Li, Xiao-Dong; Park, Changbom; Sabiu, Cristiano G.; Park, Hyunbae; Cheng, Cheng; Kim, Juhan; Hong, Sungwook E.
2017-08-01
We develop a methodology to use the redshift dependence of the galaxy 2-point correlation function (2pCF) across the line of sight, ξ(r⊥), as a probe of cosmological parameters. The positions of galaxies in comoving Cartesian space vary under different cosmological parameter choices, inducing a redshift-dependent scaling in the galaxy distribution. This geometrical distortion can be observed as a redshift-dependent rescaling in the measured ξ(r⊥). We test this methodology using a sample of 1.75 billion mock galaxies at redshifts 0, 0.5, 1, 1.5, and 2, drawn from the Horizon Run 4 N-body simulation. The shape of ξ(r⊥) can exhibit a significant redshift evolution when the galaxy sample is analyzed under a cosmology differing from the true, simulated one. Other contributions, including the gravitational growth of structure, galaxy bias, and redshift-space distortions, do not produce large redshift evolution in the shape. We show that one can make use of this geometrical distortion to constrain the values of cosmological parameters governing the expansion history of the universe. This method could be applicable to future large-scale structure surveys, especially photometric surveys such as DES and LSST, to derive tight cosmological constraints. This work continues our previous efforts to constrain cosmological parameters using redshift-invariant physical quantities.
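The geometric rescaling at the heart of this method can be illustrated with a short sketch. Flat ΛCDM is assumed, and the Ωm values below are illustrative choices, not those used in the paper; a transverse comoving separation inferred under a wrong cosmology is off by the ratio of comoving distances, and the redshift dependence of that ratio is the observable distortion:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def comoving_distance(z, omega_m, h=0.7, n=4097):
    """Line-of-sight comoving distance [Mpc] in flat LCDM,
    D_C = c * int_0^z dz'/H(z'), via the composite trapezoid rule."""
    zz = np.linspace(0.0, z, n)
    hz = 100.0 * h * np.sqrt(omega_m * (1.0 + zz) ** 3 + (1.0 - omega_m))
    f = C_KMS / hz
    dz = zz[1] - zz[0]
    return dz * (f.sum() - 0.5 * (f[0] + f[-1]))

# A transverse separation computed assuming omega_m = 0.26 maps onto the
# "true" (here, omega_m = 0.31) cosmology rescaled by the distance ratio;
# its drift with redshift is the geometric distortion described above.
for z in (0.5, 1.0, 2.0):
    ratio = comoving_distance(z, 0.31) / comoving_distance(z, 0.26)
    print(f"z = {z}: r_perp scaling = {ratio:.4f}")
```

The ratio departs further from unity at higher redshift, which is why measuring ξ(r⊥) in several redshift bins discriminates between expansion histories.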
21SSD: a public data base of simulated 21-cm signals from the epoch of reionization
NASA Astrophysics Data System (ADS)
Semelin, B.; Eames, E.; Bolgar, F.; Caillat, M.
2017-12-01
The 21-cm signal from the epoch of reionization (EoR) is expected to be detected in the next few years, either with existing instruments or by the upcoming SKA and HERA projects. In this context, there is a pressing need for publicly available high-quality templates covering a wide range of possible signals. These are needed both for end-to-end simulations of the upcoming instruments and to develop signal analysis methods. We present such a set of templates, publicly available for download at 21ssd.obspm.fr. The data base contains 21-cm brightness temperature lightcones at high and low resolution, and several derived statistical quantities for 45 models spanning our chosen 3D parameter space. These data are the result of fully coupled radiative hydrodynamic high-resolution (1024³) simulations performed with the LICORICE code. Both X-ray and Lyman-line transfer are performed to account for heating and Wouthuysen-Field coupling fluctuations. We also present a first exploitation of the data using the power spectrum and the pixel distribution function (PDF) computed from lightcone data. We analyse how these two quantities behave when varying the model parameters while taking into account the thermal noise expected for a typical SKA survey. Finally, we show that the noiseless power spectrum and PDF have different - and somewhat complementary - abilities to distinguish between different models. This preliminary result will have to be extended to the case including thermal noise. This type of result opens the door to formulating an optimal sampling of the parameter space, dependent on the chosen diagnostics.
Design Through Simulation of a Molecular Sieve Column for Treatment of MON-3
NASA Technical Reports Server (NTRS)
Swartz, A. Ben; Wilson, D. B.
1999-01-01
The presence of water in propellant-grade MON-3 is a concern in the aerospace industry. The NASA Johnson Space Center (JSC) White Sands Test Facility (WSTF) Propulsion Department has evaluated many types of molecular sieves for control of iron, the corrosion product of water in Mixed Oxides of Nitrogen (MON-3). In 1995, WSTF initiated laboratory and pilot-scale testing of molecular sieve type 3A for removal of water and iron. These tests showed sufficient promise that a series of continuous recycle tests were conducted at WSTF. Periodic samples of the circulating MON-3 solution were analyzed for water (wt %) and iron (ppm, wt). This test column was modeled as a series of transfer units; i.e., each unit represented the height equivalent of a theoretical plate. Such a model assumes equilibrium between the adsorbent material and the effluent stream from the unit. Operational and design parameters were derived based on the simulation results. These parameters were used to predict the design characteristics of a proposed molecular sieve column for removal of water and iron from MON-3 at the NASA Kennedy Space Center (KSC). In addition, these parameters were used to simulate a small, single-pass column currently used for treating MON-3 at KSC. The results of this work indicated that molecular sieve type 3A in 1/16 in. diameter pellets, in a column 2.5 ft. in diameter and 18 ft. in height, operated at 25 gpm, is adequate for the required removal of water and iron from MON-3.
NASA Technical Reports Server (NTRS)
Wang, J.; Biasca, R.; Liewer, P. C.
1996-01-01
Although the existence of the critical ionization velocity (CIV) is known from laboratory experiments, no agreement has been reached as to whether CIV exists in the natural space environment. In this paper we move towards more realistic models of CIV and present the first fully three-dimensional, electromagnetic particle-in-cell Monte-Carlo collision (PIC-MCC) simulations of typical space-based CIV experiments. In our model, the released neutral gas is taken to be a spherical cloud traveling across a magnetized ambient plasma. Simulations are performed for neutral clouds with various sizes and densities. The effects of the cloud parameters on ionization yield, wave energy growth, electron heating, momentum coupling, and the three-dimensional structure of the newly ionized plasma are discussed. The simulations suggest that the quantitative character of momentum transfer among the ion beam, neutral cloud, and plasma waves is the key indicator of whether CIV can occur in space. The missing factors in space-based CIV experiments may be the conditions necessary for a continuous enhancement of the beam ion momentum. For a typical shaped-charge release experiment, favorable CIV conditions may exist only in a very narrow, intermediate spatial region some distance from the release point, due to the effects of the cloud density and size. When CIV does occur, the newly ionized plasma from the cloud forms a very complex structure due to the combined forces from the geomagnetic field, the motion-induced emf, and the polarization. Hence the detection of CIV also critically depends on the sensor location.
Effect of Real and Simulated Microgravity on Muscle Function
NASA Technical Reports Server (NTRS)
1997-01-01
In this session, Session JA3, the discussion focuses on the following topics: Changes in Calf Muscle Performance, Energy Metabolism, and Muscle Volume Caused by Long Term Stay on Space Station MIR; Vibrografic Signs of Autonomous Muscle Tone Studied in Long Term Space Missions; Reduction of Muscle Strength After Long Duration Space Flights is Associated Primarily with Changes in Neuromuscular Function; The Effects of a 115-Day Spaceflight on Neuromuscular Function in Crewman; Effects of 17-Day Spaceflight on Human Triceps Surae Electrically-Evoked Contractions; Effects of Muscle Unloading on EMG Spectral Parameters; and Myofiber Wound-Mediated FGF Release and Muscle Atrophy During Bedrest.
Retrieving the aerosol lidar ratio profile by combining ground- and space-based elastic lidars.
Feiyue, Mao; Wei, Gong; Yingying, Ma
2012-02-15
The aerosol lidar ratio is a key parameter for the retrieval of aerosol optical properties from elastic lidar; it varies widely for aerosols with different chemical and physical properties. We propose a method for retrieving the aerosol lidar ratio profile by combining simultaneous ground- and space-based elastic lidars. The method was tested on a simulated case and a real case at 532 nm wavelength. The results demonstrate that our method is robust and can obtain accurate lidar ratio and extinction coefficient profiles. Our method can be useful for determining local and global lidar ratios and for validating space-based lidar datasets.
NASA Astrophysics Data System (ADS)
Walker, W.; Ardebili, H.
2014-12-01
Lithium-ion batteries (LIBs) are replacing the nickel-hydrogen batteries used on the International Space Station (ISS). Knowing that LIB efficiency and survivability are greatly influenced by temperature, this study focuses on the thermo-electrochemical analysis of LIBs in space orbit. Current finite element modeling software allows advanced simulation of the thermo-electrochemical processes; however, the heat transfer simulation capabilities of such software do not accommodate the extreme complexities of orbital-space environments like those experienced by the ISS. In this study, we have coupled existing thermo-electrochemical models representing heat generation in LIBs during discharge cycles with specialized orbital-thermal software, Thermal Desktop (TD). Our model's parameters were obtained from a previous thermo-electrochemical model of a 185 amp-hour (Ah) LIB with 1-3 C discharge rates for both forced and natural convection environments at 300 K. Our TD model successfully simulates the temperature vs. depth-of-discharge (DOD) profiles and temperature ranges for all discharge and convection variations with minimal deviation, using FORTRAN logic that represents each variable as a function of DOD. Multiple parametrics were considered in a second and third set of cases, whose results provide vital data for advancing accurate thermal modeling of LIBs.
Tang, Shujie; Meng, Xueying
2011-01-01
The restoration of disc space height of the fused segment is essential in anterior lumbar interbody fusion, yet in many cases the disc space height decreases postoperatively, which may aggravate adjacent segmental degeneration. However, no available literature has focused on this issue. A normal healthy finite element model of L3-5 and four anterior lumbar interbody fusion models with different disc space heights of the fused segment were developed. An 800 N compressive load plus 10 Nm moments simulating flexion, extension, lateral bending and axial rotation were imposed on the L3 superior endplate. The intradiscal pressure, the intersegmental rotation, and the Tresca stress and contact force of the facet joints in L3-4 were investigated. Anterior lumbar interbody fusion with severely decreased disc space height presented the highest values of the four parameters, and the normal healthy model presented the lowest values, except that under extension the contact force of the facet joints in the normal healthy model was higher than in the normal anterior lumbar interbody fusion model. As the disc space height decreases, the values of the parameters in each anterior lumbar interbody fusion model increase gradually. Anterior lumbar interbody fusion with decreased disc space height aggravates adjacent segmental degeneration more severely.
Design for approaching Cicada-wing reflectance in low- and high-index biomimetic nanostructures.
Huang, Yi-Fan; Jen, Yi-Jun; Chen, Li-Chyong; Chen, Kuei-Hsien; Chattopadhyay, Surojit
2015-01-27
Natural nanostructures in low refractive index Cicada wings demonstrate ≤ 1% reflectance over the visible spectrum. We provide design parameters for Cicada-wing-inspired nanotip arrays as efficient light harvesters over a 300-1000 nm spectrum and up to 60° angle of incidence in both low-index, such as silica and indium tin oxide, and high-index, such as silicon and germanium, photovoltaic materials. Biomimicry of the Cicada wing design, demonstrating gradient index, onto these material surfaces, either by real electron cyclotron resonance microwave plasma processing or by modeling, was carried out to achieve a target reflectance of ∼ 1%. Design parameters of spacing/wavelength and length/spacing fitted into a finite difference time domain model could simulate the experimental reflectance values observed in real silicon and germanium or in model silica and indium tin oxide nanotip arrays. A theoretical mapping of the length/spacing and spacing/wavelength space over varied refractive index materials predicts that lengths of ∼ 1.5 μm and spacings of ∼ 200 nm in high-index and lengths of ∼ 200-600 nm and spacings of ∼ 100-400 nm in low-index materials would exhibit ≤ 1% target reflectance and ∼ 99% optical absorption over the entire UV-vis region and angle of incidence up to 60°.
Wang, Hongyuan; Zhang, Wei; Dong, Aotuo
2012-11-10
A modeling and validation method of photometric characteristics of the space target was presented in order to track and identify different satellites effectively. The background radiation characteristics models of the target were built based on blackbody radiation theory. The geometry characteristics of the target were illustrated by the surface equations based on its body coordinate system. The material characteristics of the target surface were described by a bidirectional reflectance distribution function model, which considers the character of surface Gauss statistics and microscale self-shadow and is obtained by measurement and modeling in advance. The contributing surfaces of the target to observation system were determined by coordinate transformation according to the relative position of the space-based target, the background radiation sources, and the observation platform. Then a mathematical model on photometric characteristics of the space target was built by summing reflection components of all the surfaces. Photometric characteristics simulation of the space-based target was achieved according to its given geometrical dimensions, physical parameters, and orbital parameters. Experimental validation was made based on the scale model of the satellite. The calculated results fit well with the measured results, which indicates the modeling method of photometric characteristics of the space target is correct.
Nonequilibrium umbrella sampling in spaces of many order parameters
NASA Astrophysics Data System (ADS)
Dickson, Alex; Warmflash, Aryeh; Dinner, Aaron R.
2009-02-01
We recently introduced an umbrella sampling method for obtaining nonequilibrium steady-state probability distributions projected onto an arbitrary number of coordinates that characterize a system (order parameters) [A. Warmflash, P. Bhimalapuram, and A. R. Dinner, J. Chem. Phys. 127, 154112 (2007)]. Here, we show how our algorithm can be combined with the image update procedure from the finite-temperature string method for reversible processes [E. Vanden-Eijnden and M. Venturoli, "Revisiting the finite temperature string method for calculation of reaction tubes and free energies," J. Chem. Phys. (in press)] to enable restricted sampling of a nonequilibrium steady state in the vicinity of a path in a many-dimensional space of order parameters. For the study of transitions between stable states, the adapted algorithm results in improved scaling with the number of order parameters and the ability to progressively refine the regions of enforced sampling. We demonstrate the algorithm by applying it to a two-dimensional model of driven Brownian motion and a coarse-grained (Ising) model for nucleation under shear. It is found that the choice of order parameters can significantly affect the convergence of the simulation; local magnetization variables other than those used previously for sampling transition paths in Ising systems are needed to ensure that the reactive flux is primarily contained within a tube in the space of order parameters. The relation of this method to other algorithms that sample the statistics of path ensembles is discussed.
Exploration Supply Chain Simulation
NASA Technical Reports Server (NTRS)
2008-01-01
The Exploration Supply Chain Simulation project was chartered by the NASA Exploration Systems Mission Directorate to develop a software tool, with proper data, to quantitatively analyze supply chains for future program planning. This tool is a discrete-event simulation that uses the basic supply chain concepts of planning, sourcing, making, delivering, and returning. This supply chain perspective is combined with other discrete or continuous simulation factors. Discrete resource events (such as launch or delivery reviews) are represented as organizational functional units. Continuous resources (such as civil service or contractor program functions) are defined as enabling functional units. Concepts of fixed and variable costs are included in the model to allow the discrete events to interact with cost calculations. The definition file is intrinsic to the model, but a blank start can be initiated at any time. The current definition file describes the Orion Ares I crew launch vehicle. Parameters stretch from Kennedy Space Center across and into other program entities (Michoud Assembly Facility, Alliant Techsystems, Stennis Space Center, Johnson Space Center, etc.), though these will only gain detail as the file continues to evolve. The Orion Ares I file definition in the tool continues to evolve, and analysis from this tool is expected in 2008. This is the first application of such business-driven modeling to a NASA/government-aerospace contractor endeavor.
Electron cooling of a bunched ion beam in a storage ring
NASA Astrophysics Data System (ADS)
Zhao, He; Mao, Lijun; Yang, Jiancheng; Xia, Jiawen; Yang, Xiaodong; Li, Jie; Tang, Meitang; Shen, Guodong; Ma, Xiaoming; Wu, Bo; Wang, Geng; Ruan, Shuang; Wang, Kedong; Dong, Ziqiang
2018-02-01
A combination of electron cooling and an rf system is an effective method to compress the beam bunch length in storage rings. A simulation code based on multiparticle tracking was developed to calculate the bunched ion beam cooling process, in which electron cooling, intrabeam scattering (IBS), the ion beam space-charge field, and transverse and synchrotron motion are considered. Meanwhile, bunched ion beam cooling experiments have been carried out in the main cooling storage ring (CSRm) of the Heavy Ion Research Facility in Lanzhou, to investigate the minimum bunch length obtained by the cooling method and to study the dependence of the minimum bunch length on beam and machine parameters. The experiments show results comparable to those from simulation. Based on these simulations and experiments, we established an analytical model to describe the limitation of the bunch length of the cooled ion beam. It is observed that the IBS effect is dominant for low-intensity beams, while the space-charge effect is much more important for high-intensity beams. Moreover, the particles will not stay bunched for beams of much higher intensity. The experimental results in CSRm show good agreement with the analytical model in the IBS-dominated regime. The simulation work offers results comparable to those from the analytical model in both the IBS-dominated and space-charge-dominated regimes.
Numerical relativity waveform surrogate model for generically precessing binary black hole mergers
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Scheel, Mark A.; Galley, Chad R.; Ott, Christian D.; Boyle, Michael; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla
2017-07-01
A generic, noneccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by seven intrinsic parameters: the two black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all seven dimensions of the intrinsic noneccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all ℓ≤4 modes, begins about 20 orbits before merger, and can be evaluated in ~50 ms. We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies requiring highly accurate waveforms that were not previously possible, such as parameter inference and tests of general relativity with GW observations.
A numerical identifiability test for state-space models--application to optimal experimental design.
Hidalgo, M E; Ayesa, E
2001-01-01
This paper describes a mathematical tool for identifiability analysis that is easily applicable to high-order non-linear systems modelled in state space and implementable in simulators with a time-discrete approach. The procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the construction of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design of ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate μH and the concentration of heterotrophic biomass XBH.
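The information-matrix evaluation described above can be sketched numerically. Everything below is a hypothetical stand-in: a toy Monod-type batch-growth model replaces the ASM No. 1 case, and central finite differences replace the paper's recursive sensitivity equations; the qualitative use of the matrix (conditioning as an identifiability diagnostic) is the point:

```python
import numpy as np

def fisher_information(simulate, theta, sigma, rel_eps=1e-6):
    """Numerical Fisher information matrix for a time-discrete model.
    `simulate(theta)` returns an (n_steps, n_outputs) array; `sigma` is
    the measurement-noise standard deviation.  Output sensitivities are
    accumulated over the whole simulated calibration experiment."""
    theta = np.asarray(theta, dtype=float)
    p = theta.size
    base_shape = simulate(theta).shape
    sens = np.empty((p,) + base_shape)
    for i in range(p):
        h = rel_eps * max(1.0, abs(theta[i]))
        dp = np.zeros(p)
        dp[i] = h
        sens[i] = (simulate(theta + dp) - simulate(theta - dp)) / (2.0 * h)
    J = sens.reshape(p, -1)           # p x (n_steps * n_outputs) sensitivities
    return (J @ J.T) / sigma**2       # information summed over all samples

# Toy Monod-type batch-growth model (hypothetical stand-in for ASM No. 1):
def growth(theta, n=50, dt=0.1):
    mu_max, ks = theta
    s, x, out = 5.0, 0.1, []
    for _ in range(n):
        r = mu_max * s / (ks + s) * x        # growth rate
        x, s = x + r * dt, max(s - r * dt, 1e-9)
        out.append([x])                      # observed biomass
    return np.array(out)

F = fisher_information(growth, [0.8, 0.5], sigma=0.05)
# A well-conditioned F indicates the two parameters are jointly identifiable
# from this experiment; a near-singular F indicates they are not.
print(np.linalg.cond(F))
```

The inverse of F lower-bounds the parameter covariance (Cramér-Rao), which is what links this matrix to the expected estimation errors used for optimal experimental design.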
Constraints on the pre-impact orbits of Solar system giant impactors
NASA Astrophysics Data System (ADS)
Jackson, Alan P.; Gabriel, Travis S. J.; Asphaug, Erik I.
2018-03-01
We provide a fast method for computing constraints on impactor pre-impact orbits, applying this to the late giant impacts in the Solar system. These constraints can be used to make quick, broad comparisons of different collision scenarios, identifying some immediately as low-probability events, and narrowing the parameter space in which to target follow-up studies with expensive N-body simulations. We benchmark our parameter space predictions, finding good agreement with existing N-body studies for the Moon. We suggest that high-velocity impact scenarios in the inner Solar system, including all currently proposed single impact scenarios for the formation of Mercury, should be disfavoured. This leaves a multiple hit-and-run scenario as the most probable currently proposed for the formation of Mercury.
LANES - LOCAL AREA NETWORK EXTENSIBLE SIMULATOR
NASA Technical Reports Server (NTRS)
Gibson, J.
1994-01-01
The Local Area Network Extensible Simulator (LANES) provides a method for simulating the performance of high-speed local area network (LAN) technology. LANES was developed as a design and analysis tool for networking on board the Space Station. The load, network, link and physical layers of a layered network architecture are all modeled. LANES models two different lower-layer protocols, the Fiber Distributed Data Interface (FDDI) and Star*Bus. The load and network layers are included in the model as a means of introducing upper-layer processing delays associated with message transmission; they do not model any particular protocols. FDDI is an American National Standard and an International Organization for Standardization (ISO) draft standard for a 100 megabit-per-second fiber-optic token ring. Specifications for the LANES model of FDDI are taken from the Draft Proposed American National Standard FDDI Token Ring Media Access Control (MAC), document number X3T9.5/83-16 Rev. 10, February 28, 1986. This is a mature document describing the FDDI media-access-control protocol. Star*Bus, also known as the Fiber Optic Demonstration System, is a protocol for a 100 megabit-per-second fiber-optic star-topology LAN. This protocol, along with a hardware prototype, was developed by Sperry Corporation under contract to NASA Goddard Space Flight Center as a candidate LAN protocol for the Space Station. LANES can be used to analyze the performance of a networking system based on either FDDI or Star*Bus under a variety of loading conditions. Delays due to upper-layer processing can easily be nullified, allowing analysis of FDDI or Star*Bus as stand-alone protocols. LANES is a parameter-driven simulation; it provides considerable flexibility in specifying both protocol and run-time parameters. Code has been optimized for fast execution, and detailed tracing facilities have been included. LANES was written in FORTRAN 77 for implementation on a DEC VAX under VMS 4.6.
It consists of two programs, a simulation program and a user-interface program. The simulation program requires the SLAM II simulation library from Pritsker and Associates, West Lafayette, IN; the user interface is implemented using the Ingres database manager from Relational Technology, Inc. Information about running the simulation program without the user-interface program is contained in the documentation. The memory requirement is 129,024 bytes. LANES was developed in 1988.
Numerical Simulation of the Flow over a Segment-Conical Body on the Basis of Reynolds Equations
NASA Astrophysics Data System (ADS)
Egorov, I. V.; Novikov, A. V.; Palchekovskaya, N. V.
2018-01-01
Numerical simulation was used to study the 3D supersonic flow over a segment-conical body similar in shape to the ExoMars space vehicle. The nonmonotone behavior of the normal force acting on the body placed in a supersonic gas flow was analyzed as a function of the angle of attack. The simulation was based on the numerical solution of the unsteady Reynolds-averaged Navier-Stokes equations with a two-parameter differential turbulence model. The solution of the problem was obtained using the in-house solver HSFlow with an efficient parallel algorithm intended for multiprocessor supercomputers.
NASA Technical Reports Server (NTRS)
Geisler, J. E.; Fowlis, W. W.
1980-01-01
The effect of a power law gravity field on baroclinic instability is examined, with a focus on the case of inverse fifth power gravity, since this is the power law produced when terrestrial gravity is simulated in spherical geometry by a dielectric force. Growth rates are obtained of unstable normal modes as a function of parameters of the problem by solving a second order differential equation numerically. It is concluded that over the range of parameter space explored, there is no significant change in the character of theoretical regime diagrams if the vertically averaged gravity is used as parameter.
An Eigensystem Realization Algorithm (ERA) for modal parameter identification and model reduction
NASA Technical Reports Server (NTRS)
Juang, J. N.; Pappa, R. S.
1985-01-01
A method, called the Eigensystem Realization Algorithm (ERA), is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular value decomposition technique to derive the basic formulation of minimum order realization which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system modes and noise modes. For illustration of the algorithm, examples are shown using simulation data and experimental data for a rectangular grid structure.
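The ERA steps described above (a Hankel matrix of impulse-response Markov parameters, singular value decomposition, truncation to a minimum-order realization, and modal parameters from the realized state matrix) can be sketched for a single-input, single-output case; the function and variable names are illustrative, not those of the paper:

```python
import numpy as np

def era(markov, order, rows=10, cols=10, dt=1.0):
    """Minimal SISO Eigensystem Realization Algorithm: identify a
    discrete-time realization (A, B, C) of the chosen order from the
    impulse-response (Markov) parameters y(1), y(2), ...
    Needs len(markov) >= rows + cols."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]   # minimum-order truncation
    sq = np.sqrt(s)
    A = (U / sq).T @ H1 @ (Vt.T / sq)                # S^-1/2 U^T H(1) V S^-1/2
    B = (sq[:, None] * Vt)[:, :1]                    # S^1/2 V^T E1
    C = (U * sq)[:1, :]                              # E1^T U S^1/2
    lam = np.linalg.eigvals(A)
    freqs = np.abs(np.log(lam)) / (2.0 * np.pi * dt)  # natural frequencies [Hz]
    return A, B, C, freqs
```

For noisy test data, the singular values of H0 separate system modes from noise modes: a sharp drop after `order` values plays the role of the paper's accuracy indicators in this simplified sketch.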
Quantitative Diagnosis of Continuous-Valued, Steady-State Systems
NASA Technical Reports Server (NTRS)
Rouquette, N.
1995-01-01
Quantitative diagnosis involves numerically estimating the values of unobservable parameters that best explain the observed parameter values. We consider quantitative diagnosis for continuous, lumped-parameter, steady-state physical systems because such models are easy to construct and the diagnosis problem is considerably simpler than that for corresponding dynamic models. To further tackle the difficulty of numerically inverting a simulation model to compute a diagnosis, we propose to decompose a physical system model in terms of feedback loops. This decomposition reduces the dimension of the problem and consequently decreases the diagnosis search space. We illustrate this approach on a model of a thermal control system studied in earlier research.
Hazard assessment of long-period ground motions for the Nankai Trough earthquakes
NASA Astrophysics Data System (ADS)
Maeda, T.; Morikawa, N.; Aoi, S.; Fujiwara, H.
2013-12-01
We evaluate the seismic hazard from long-period ground motions associated with Nankai Trough earthquakes (M8~9) in southwest Japan. Large interplate earthquakes around the Nankai Trough have caused serious damage through strong ground motion and tsunami; the most recent events were in 1944 and 1946. Such large interplate earthquakes can also damage high-rise and large-scale structures through long-period ground motions (e.g., the 1985 Michoacan earthquake in Mexico and the 2003 Tokachi-oki earthquake in Japan). Long-period ground motions are amplified particularly in basins. Because major cities along the Nankai Trough have developed on alluvial plains, it is important to evaluate long-period ground motions, as well as strong motions and tsunami, for the anticipated Nankai Trough earthquakes. The long-period ground motions are evaluated by the finite difference method (FDM) using 'characterized source models' and a 3-D underground structure model. A 'characterized source model' is a source model that includes the source parameters necessary for reproducing strong ground motions. The parameters are determined following a 'recipe' for predicting strong ground motion (Earthquake Research Committee (ERC), 2009). We construct various source models (~100 scenarios) covering various settings of source parameters such as source region, asperity configuration, and hypocenter location. Each source region is based on 'the long-term evaluation of earthquakes in the Nankai Trough' published by the ERC. The asperity configuration and hypocenter location control the rupture directivity effects; these parameters are important because our preliminary simulations are strongly affected by rupture directivity. We apply the Ground Motion Simulator (GMS), which simulates seismic wave propagation with a 3-D FDM scheme using discontinuous grids (Aoi and Fujiwara, 1999), to our study.
The grid spacing for the shallow region is 200 m horizontally and 100 m vertically; the grid spacing for the deep region is three times coarser. The total number of grid points is about three billion. The 3-D underground structure model used in the FD simulation is the Japan integrated velocity structure model (ERC, 2012). Our simulation is valid for periods longer than two seconds, given the lowest S-wave velocity and the grid spacing. However, because the characterized source model may not adequately represent short-period components, the reliable period range of this simulation should be interpreted with caution; we therefore consider periods longer than five seconds, instead of two, for further analysis. We evaluate the long-period ground motions using velocity response spectra in the period range between five and 20 seconds. The preliminary simulations show a large variation of response spectra at a site, implying that the ground motion is very sensitive to the scenario; this variation must be studied to understand the seismic hazard. Our further study will obtain hazard curves for the Nankai Trough earthquakes (M8~9) by applying probabilistic seismic hazard analysis to the simulation results.
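The velocity response spectrum used in such evaluations can be illustrated with a minimal sketch: for each period, a damped single-degree-of-freedom oscillator is driven by a ground-acceleration history and its peak relative velocity is recorded. The Newmark average-acceleration integrator, the 5% damping ratio, and the synthetic sinusoidal input below are illustrative assumptions, not the GMS implementation:

```python
import numpy as np

def sdof_peak_velocity(ag, dt, period, zeta=0.05):
    """Peak relative velocity of a damped SDOF oscillator (unit mass)
    under ground acceleration ag, via Newmark average acceleration."""
    w = 2.0 * np.pi / period
    c, k = 2.0 * zeta * w, w * w
    u, v, a = 0.0, 0.0, -ag[0]
    vmax = 0.0
    k_eff = k + 2.0 * c / dt + 4.0 / dt**2
    for p in -ag[1:]:                       # effective load is -a_ground
        rhs = p + (4.0 / dt**2 + 2.0 * c / dt) * u + (4.0 / dt + c) * v + a
        u_new = rhs / k_eff
        v_new = (2.0 / dt) * (u_new - u) - v
        a_new = p - c * v_new - k * u_new
        u, v, a = u_new, v_new, a_new
        vmax = max(vmax, abs(v))
    return vmax

# Synthetic 1 Hz ground motion: the 1 s oscillator resonates,
# while a 5 s oscillator responds far less
dt = 0.01
t = np.arange(0.0, 20.0, dt)
ag = np.sin(2.0 * np.pi * t)
spectrum = {T: sdof_peak_velocity(ag, dt, T) for T in [1.0, 5.0]}
```

Evaluating this over a dense grid of periods between five and 20 seconds gives the velocity response spectrum used in the hazard analysis.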
Sampling ARG of multiple populations under complex configurations of subdivision and admixture.
Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi
2016-04-01
Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs that dramatically improve on the time and space requirements of the classical single-population algorithm. Using the underlying random-graphs model, we also derive closed forms for the expected values of the ARG characteristics, i.e., height of the graph, number of recombinations, number of mutations and population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for complex scenario simulations, not through trial and error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge, this is the first time closed-form expressions have been computed for the ARG properties. We show through simulations that the expected values closely match the empirical values. Finally, we demonstrate through extensive experiments that SimRA produces the ARG in compact form without compromising accuracy. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for download at: https://github.com/ComputationalGenomics/SimRA CONTACT: parida@us.ibm.com Supplementary data are available at Bioinformatics online.
Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine
2016-01-01
A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483
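The Morris screening method used above can be sketched in a few lines: random one-at-a-time trajectories yield "elementary effects" per factor, and the mean absolute effect (mu*) ranks factor influence while the standard deviation flags non-linearity and interactions. The three-factor toy "yield" function below is a stand-in for illustration, not APSIM-Wheat:

```python
import numpy as np

rng = np.random.default_rng(0)

def morris_elementary_effects(f, k, r=20, delta=0.5):
    """Morris one-at-a-time screening on the unit hypercube:
    r trajectories, each perturbing every one of the k factors once."""
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # keep x + delta inside [0, 1]
        y = f(x)
        for i in rng.permutation(k):                # random factor order
            x_new = x.copy()
            x_new[i] += delta
            y_new = f(x_new)
            effects[i].append((y_new - y) / delta)  # elementary effect of factor i
            x, y = x_new, y_new
    mu_star = np.array([np.mean(np.abs(e)) for e in effects])
    sigma = np.array([np.std(e) for e in effects])
    return mu_star, sigma

# Toy linear "crop model": strong, moderate, and weak factors
f = lambda x: 3.0 * x[0] + 1.0 * x[1] + 0.1 * x[2]
mu_star, sigma = morris_elementary_effects(f, k=3)
```

Because the toy model is linear, mu* recovers the coefficients exactly and sigma is near zero; in a real crop model, large sigma marks parameters whose effect depends on the environment or on other traits.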
NASA Technical Reports Server (NTRS)
McCaul, Eugene W., Jr.; Cohen, Charles; Kirkpatrick, Cody
2004-01-01
Prior parameter space studies of simulated deep convection are extended to embrace variations in the ambient temperature at the Lifted Condensation Level (LCL). Within the context of the parameter space study design, changes in LCL temperature are roughly equivalent to changes in the ambient precipitable water (PW). Two series of simulations are conducted, one with a warm LCL associated with approximately 60 mm of precipitable water, and another with LCL temperatures 8 °C cooler, so that PW is reduced to roughly 30 mm. The sets of simulations include tests of the impact of changes in the buoyancy and shear profile shapes and of changes in mixed and moist layer depths, all of which have been shown to be important in prior work. Simulations discussed here feature values of bulk convective available potential energy (CAPE) of 800, 2000, or 3200 joules per kilogram, and a single semicircular hodograph having a radius of 12 meters per second, but with variable vertical shear. The simulations reveal a consistent trend toward stronger peak updraft speeds for the cooler-LCL (reduced-PW) cases, if all other environmental parameters are held constant. Roughly comparable increases in updraft speeds are noted for all combinations of LCL and level of free convection heights. These increases in updraft strength are evidently the result of both the reduction of condensate loading aloft and the lower altitudes at which latent heat release by freezing and deposition commences in the cooler, low-PW environments. Because the latent heat of fusion adds relatively more energy to the updrafts at low CAPE, those storms show more strengthening at low PW than do the larger-CAPE storms. As expected, maximum storm precipitation rates tend to diminish as PW is decreased, but only slightly, and by amounts not proportionate to the decrease in PW. The low-PW cases thus actually feature larger environment-relative precipitation efficiency than do the high-PW cases.
In addition, more hail reaches the surface in the low-PW cases because of reduced melting in the cooler environments.
[Computer simulation of a clinical magnetic resonance tomography scanner for training purposes].
Hackländer, T; Mertens, H; Cramer, B M
2004-08-01
The idea for this project was born of the necessity to offer medical students an easy approach to the theoretical basics of magnetic resonance imaging. The aim was to simulate the features and functions of such a scanner on a commercially available computer by means of a computer program. The simulation was programmed in pure Java under the GNU General Public License and is freely available for commercially available computers running the Windows, Macintosh or Linux operating systems. The graphical user interface is oriented to a real scanner. In an external program, parameter images for the proton density and the relaxation times T1 and T2 are calculated on the basis of clinical examinations. From these, the image calculation is carried out in the simulation program pixel by pixel, on the basis of a pulse sequence chosen and modified by the user. The images can be stored and printed. In addition, it is possible to display and modify k-space images. Seven classes of pulse sequences are implemented, and up to 14 relevant sequence parameters, such as repetition time and echo time, can be altered. Aliasing and motion artifacts can be simulated. As the image calculation only takes a few seconds, interactive working is possible. The simulation has been used in university education for more than one year, successfully illustrating the dependence of the MR images on the measurement parameters. This should facilitate students' approach to understanding MR imaging in the future.
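The pixel-by-pixel image calculation such a trainer performs can be sketched with the classic spin-echo signal equation, S = PD · (1 − exp(−TR/T1)) · exp(−TE/T2), evaluated from the parameter images. The two "tissues" below use rough illustrative relaxation values (in ms), which are assumptions, not the program's actual data:

```python
import numpy as np

def spin_echo_image(pd, t1, t2, tr, te):
    """Classic spin-echo signal equation, applied element-wise to the
    proton-density (PD), T1 and T2 parameter images."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Two hypothetical tissue pixels: white-matter-like and CSF-like values (ms)
pd = np.array([0.7, 1.0])
t1 = np.array([800.0, 4000.0])
t2 = np.array([80.0, 2000.0])

# Short TR / short TE gives T1 weighting; long TR / long TE gives T2 weighting
t1_weighted = spin_echo_image(pd, t1, t2, tr=500.0, te=15.0)
t2_weighted = spin_echo_image(pd, t1, t2, tr=4000.0, te=120.0)
```

Changing TR and TE interactively flips the tissue contrast (the white-matter-like pixel is brighter on the T1-weighted settings, the CSF-like pixel on the T2-weighted ones), which is exactly the dependence on measurement parameters the simulator is meant to illustrate.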
Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.
2013-12-01
Ensemble forecasting of coronal mass ejections (CMEs) is valuable because it provides an estimate of the spread, or uncertainty, in CME arrival-time predictions arising from uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+Cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house, research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real time. The observed CME arrival was within the range of ensemble arrival-time predictions for 5 of the 12 ensemble runs containing hits. The average arrival-time prediction was computed for each of the twelve ensembles predicting hits; using the actual arrival times, an average absolute error of 8.20 hours was found over the twelve ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival-time predictions include the initial distribution of CME input parameters, particularly its mean and spread.
When the observed arrival is not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and from other limitations. Additionally, the ensemble modeling setup was used to complete a parametric case study of the sensitivity of the CME arrival-time prediction to the free parameters of the ambient solar wind model and of the CME.
Massive data compression for parameter-dependent covariance matrices
NASA Astrophysics Data System (ADS)
Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise
2017-12-01
We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets required to estimate the covariance matrix needed for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters; in this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. The compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band-power or shear-correlation estimates) is very large, ∼10^4, owing to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov chain Monte Carlo analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or by a factor of ∼10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3 and making an otherwise intractable analysis feasible.
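The idea behind MOPED-style compression, projecting the data onto weight vectors built from the inverse covariance times the derivative of the mean, can be sketched for a one-parameter linear model, where the single compressed number is exactly as informative as the full data vector. The toy model and its dimensions below are assumptions for illustration, not the paper's survey setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear toy model: data mean depends on one parameter, y = theta * m + noise
n = 50
m = rng.normal(size=n)                  # derivative of the mean w.r.t. theta
C = np.diag(rng.uniform(0.5, 2.0, n))   # known (here diagonal) noise covariance
Cinv = np.linalg.inv(C)

# MOPED weight vector for this parameter: b proportional to C^{-1} dmu/dtheta
b = Cinv @ m
b /= np.sqrt(m @ Cinv @ m)

theta_true = 2.5
y = theta_true * m + rng.normal(size=n) * np.sqrt(np.diag(C))

# For a linear model the single compressed number b.y carries the same
# Fisher information as the full n-dimensional data vector
theta_hat_full = (m @ Cinv @ y) / (m @ Cinv @ m)   # GLS estimate from all n numbers
theta_hat_comp = (b @ y) / (b @ m)                 # estimate from the one number b.y
```

With several parameters, one such weight vector is built per parameter (Gram-Schmidt orthogonalized), so a ∼10^4-dimensional data vector collapses to a handful of numbers whose covariance is far cheaper to estimate from simulations.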
Hyperspectral imaging simulation of object under sea-sky background
NASA Astrophysics Data System (ADS)
Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui
2016-10-01
Remote-sensing image simulation plays an important role in spaceborne/airborne payload demonstration and algorithm development, and hyperspectral imaging is valuable in marine monitoring and search and rescue. To meet the demand for spectral imaging of objects in complex sea scenes, a physics-based method for simulating the spectral image of an object in a sea scene is proposed. By developing an imaging simulation model that accounts for the object, background, atmosphere conditions and sensor, it is possible to examine how changes in wind speed, atmosphere conditions and other environmental factors affect spectral image quality in complex sea scenes. First, the sea scattering model is established based on the Phillips sea spectrum model, rough-surface scattering theory and the volume-scattering characteristics of water. Measured bidirectional reflectance distribution function (BRDF) data of objects are fitted to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, the sky brightness, the atmospheric transmittance from sea to sensor and the atmospheric backscattered radiance, and a Monte Carlo ray-tracing method is used to calculate the composite scattering of the sea-surface object and the spectral image. Finally, the object spectrum is obtained by space transformation, radiation degradation and the addition of noise. The model connects the spectral image with the environmental, object and sensor parameters, providing a tool for payload demonstration and algorithm development.
Wieser, Stefan; Axmann, Markus; Schütz, Gerhard J.
2008-01-01
We propose here an approach for the analysis of single-molecule trajectories which is based on a comprehensive comparison of an experimental data set with multiple Monte Carlo simulations of the diffusion process. It allows quantitative data analysis, particularly whenever analytical treatment of a model is infeasible. Simulations are performed on a discrete parameter space and compared with the experimental results by a nonparametric statistical test. The method provides a matrix of p-values that assess the probability of having observed the experimental data at each setting of the model parameters. We show the testing approach for three typical situations observed in the cellular plasma membrane: (i) free Brownian motion of the tracer; (ii) hop diffusion of the tracer in a periodic meshwork of squares; and (iii) transient binding of the tracer to slowly diffusing structures. By plotting the p-value as a function of the model parameters, one can easily identify the most consistent parameter settings, but also recover mutual dependencies and ambiguities which are difficult to determine by standard fitting routines. Finally, we used the test to reanalyze previous data obtained on the diffusion of the glycosylphosphatidylinositol-anchored protein CD59 in the plasma membrane of the human T24 cell line. PMID:18805933
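The testing approach for the free-diffusion case can be sketched as follows: simulate squared displacements on a discrete grid of diffusion coefficients and compare each simulated set against the experimental one with a nonparametric two-sample test, yielding one p-value per grid point. Here a Kolmogorov-Smirnov test stands in for the statistic, and the "experimental" data are themselves simulated, both being illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

def squared_displacements(diff_coeff, dt, n):
    """Squared displacements of 2-D Brownian steps over one time lag:
    each coordinate step is N(0, 2 * D * dt)."""
    steps = rng.normal(scale=np.sqrt(2.0 * diff_coeff * dt), size=(n, 2))
    return (steps ** 2).sum(axis=1)

dt = 0.01
experiment = squared_displacements(diff_coeff=0.5, dt=dt, n=400)  # stand-in data

# Scan the discrete parameter space; each grid point gets a p-value for
# "the experimental data were generated at this parameter setting"
d_grid = [0.1, 0.3, 0.5, 1.0, 2.0]
p_values = {d: ks_2samp(experiment, squared_displacements(d, dt, 4000)).pvalue
            for d in d_grid}
best = max(p_values, key=p_values.get)
```

The p-value is largest near the true diffusion coefficient (0.5 here) and collapses away from it; with more than one model parameter, the same loop produces the p-value matrix described in the abstract.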
Orbital evolution of space debris due to aerodynamic forces
NASA Astrophysics Data System (ADS)
Crowther, R.
1993-08-01
The concepts used in the AUDIT (Assessment Using Debris Impact Theory) debris modelling suite are introduced. A sensitivity analysis is carried out to determine the dominant parameters in the modelling process. A test case simulating the explosion of a satellite suggests that, at the parent altitude, there is a greater probability of collision with the more massive fragments.
Chimera states in Gaussian coupled map lattices
NASA Astrophysics Data System (ADS)
Li, Xiao-Wen; Bi, Ran; Sun, Yue-Xiang; Zhang, Shuo; Song, Qian-Qian
2018-04-01
We study chimera states in one-dimensional and two-dimensional Gaussian coupled map lattices through simulations and experiments. As in the case of globally coupled oscillators, individual lattices can be regarded as being controlled by a common mean field. A space-dependent order parameter is derived from a self-consistency condition in order to represent the collective state.
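A coupled map lattice of this general kind is straightforward to simulate. The sketch below uses logistic local maps on a ring with a coupling kernel that decays as a Gaussian of distance, and measures a simple space-dependent quantity (local variance) that distinguishes coherent regions (low) from incoherent ones (high). The local map, parameter values and the variance diagnostic are illustrative assumptions, not the paper's exact model or order parameter:

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_kernel(n, sigma):
    """Ring coupling weights decaying as a Gaussian of circular distance."""
    d = np.minimum(np.arange(n), n - np.arange(n)).astype(float)
    w = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    w[0] = 0.0                           # no self-coupling
    return w / w.sum()

def step(x, w_fft, eps, a=3.9):
    f = a * x * (1.0 - x)                # logistic local map
    # nonlocal mean field = circular convolution of f with the kernel
    mean_field = np.real(np.fft.ifft(np.fft.fft(f) * w_fft))
    return (1.0 - eps) * f + eps * mean_field

n, sigma, eps = 256, 20.0, 0.3
w_fft = np.fft.fft(gaussian_kernel(n, sigma))
x = rng.uniform(size=n)
for _ in range(500):
    x = step(x, w_fft, eps)

# space-dependent diagnostic: variance of the state in a sliding window
win = 10
local_var = np.array([x[np.arange(i - win, i + win + 1) % n].var()
                      for i in range(n)])
```

In parameter regions supporting chimera states, such a local diagnostic is low over the coherent part of the lattice and high over the incoherent part at the same instant.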
Design and experimental measurement of a high-performance metamaterial filter
NASA Astrophysics Data System (ADS)
Xu, Ya-wen; Xu, Jing-cheng
2018-03-01
A metamaterial filter is a promising optoelectronic device. In this paper, a metal/dielectric/metal (M/D/M) metamaterial filter is simulated and measured. Simulated results indicate that the perfect impedance-matching condition between the metamaterial filter and free space produces the transmission band. Measured results show that the proposed metamaterial filter achieves high transmission for both TM and TE polarizations, and the high transmission rate is maintained even when the incidence angle reaches 45°. Further measurements show that the transmission band can be broadened, and its central frequency adjusted, by optimizing the structural parameters. The physical mechanism behind the central-frequency shift is explained by establishing an equivalent resonant-circuit model.
Free-decay time-domain modal identification for large space structures
NASA Technical Reports Server (NTRS)
Kim, Hyoung M.; Vanhorn, David A.; Doiron, Harold H.
1992-01-01
Concept definition studies for the Modal Identification Experiment (MIE), a proposed space flight experiment for the Space Station Freedom (SSF), have demonstrated advantages and compatibility of free-decay time-domain modal identification techniques with the on-orbit operational constraints of large space structures. Since practical experience with modal identification using actual free-decay responses of large space structures is very limited, several numerical and test data reduction studies were conducted. Major issues and solutions were addressed, including closely-spaced modes, wide frequency range of interest, data acquisition errors, sampling delay, excitation limitations, nonlinearities, and unknown disturbances during free-decay data acquisition. The data processing strategies developed in these studies were applied to numerical simulations of the MIE, test data from a deployable truss, and launch vehicle flight data. Results of these studies indicate free-decay time-domain modal identification methods can provide accurate modal parameters necessary to characterize the structural dynamics of large space structures.
Aerospace Applications Conference, Steamboat Springs, CO, Feb. 1-8, 1986, Digest
NASA Astrophysics Data System (ADS)
The present conference considers topics concerning the projected NASA Space Station's systems, digital signal and data processing applications, and space science and microwave applications. Attention is given to Space Station video and audio subsystems design, clock error, jitter, phase error and differential time-of-arrival in satellite communications, automation and robotics in space applications, target insertion into synthetic background scenes, and a novel scheme for the computation of the discrete Fourier transform on a systolic processor. Also discussed are a novel signal parameter measurement system employing digital signal processing, EEPROMs for spacecraft applications, a unique concurrent processor architecture for high speed simulation of dynamic systems, a dual polarization flat plate antenna, Fresnel diffraction, and ultralinear TWTs for high efficiency satellite communications.
Approaching control for tethered space robot based on disturbance observer using super twisting law
NASA Astrophysics Data System (ADS)
Hu, Yongxin; Huang, Panfeng; Meng, Zhongjie; Wang, Dongke; Lu, Yingbo
2018-05-01
Approaching control is a key mission for the tethered space robot (TSR) in the task of removing space debris, but uncertainties of the TSR, such as changes in model parameters, strongly affect the approaching mission. Considering the space tether and the attitude of the gripper, the dynamic model of the TSR is derived using the Lagrange method. A disturbance observer based on the super-twisting (STW) control method is then designed to estimate the uncertainty. Using the disturbance observer, a controller is designed, and its performance is compared with that of a dynamic-inverse controller; the proposed controller performs better. Numerical simulation validates the feasibility of the proposed controller for position and attitude tracking of the TSR.
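The super-twisting law at the heart of such observers and controllers can be sketched on a scalar toy plant: it drives a sliding variable to zero in finite time despite a bounded, Lipschitz disturbance, using only the sign and square root of the sliding variable. The gains, the plant and the disturbance below are illustrative assumptions, not the TSR dynamics:

```python
import numpy as np

def super_twisting(s0, k1=4.0, k2=3.0, dt=1e-3, steps=20000):
    """Super-twisting law on the toy plant s' = u + d(t):
        u = -k1 * |s|**0.5 * sign(s) + v,   v' = -k2 * sign(s).
    With k2 above the bound on |d'(t)|, s reaches a small neighbourhood
    of zero in finite time while v implicitly estimates -d(t)."""
    s, v = s0, 0.0
    history = np.empty(steps)
    for i in range(steps):
        d = 0.5 * np.sin(2.0 * i * dt)          # unknown bounded disturbance
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
        v -= k2 * np.sign(s) * dt               # integral (twisting) term
        s += (u + d) * dt                       # Euler step of the plant
        history[i] = s
    return history

traj = super_twisting(s0=1.0)
```

Because the discontinuity sits inside the integrator, the control signal u itself is continuous, which is why super-twisting designs chatter far less than first-order sliding mode and suit disturbance observation.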
Simulation of MEMS for the Next Generation Space Telescope
NASA Technical Reports Server (NTRS)
Mott, Brent; Kuhn, Jonathan; Broduer, Steve (Technical Monitor)
2001-01-01
The NASA Goddard Space Flight Center (GSFC) is developing optical micro-electromechanical system (MEMS) components for potential application in Next Generation Space Telescope (NGST) science instruments. In this work, we present an overview of the electro-mechanical simulation of three MEMS components for NGST, which include a reflective micro-mirror array and transmissive microshutter array for aperture control for a near infrared (NIR) multi-object spectrometer and a large aperture MEMS Fabry-Perot tunable filter for a NIR wide field camera. In all cases the device must operate at cryogenic temperatures with low power consumption and low, complementary metal oxide semiconductor (CMOS) compatible, voltages. The goal of our simulation efforts is to adequately predict both the performance and the reliability of the devices during ground handling, launch, and operation to prevent failures late in the development process and during flight. This goal requires detailed modeling and validation of complex electro-thermal-mechanical interactions and very large non-linear deformations, often involving surface contact. Various parameters such as spatial dimensions and device response are often difficult to measure reliably at these small scales. In addition, these devices are fabricated from a wide variety of materials including surface micro-machined aluminum, reactive ion etched (RIE) silicon nitride, and deep reactive ion etched (DRIE) bulk single crystal silicon. The above broad set of conditions combine to be a formidable challenge for space flight qualification analysis. These simulations represent NASA/GSFC's first attempts at implementing a comprehensive strategy to address complex MEMS structures.
NASA Technical Reports Server (NTRS)
Stone, H. W.; Powell, R. W.
1977-01-01
A six-degree-of-freedom simulation analysis was conducted to examine the effects of longitudinal static aerodynamic stability and control uncertainties on the performance of the space shuttle orbiter automatic (no manual inputs) entry guidance and control systems. To establish the acceptable boundaries, the static aerodynamic characteristics were varied either by applying a multiplier to the aerodynamic parameter or by adding an increment. With either of two previously identified control system modifications included, the acceptable longitudinal aerodynamic boundaries were determined.
Detection of co-seismic earthquake gravity field signals using GRACE-like mission simulations
NASA Astrophysics Data System (ADS)
Sharifi, Mohammad Ali; Shahamat, Abolfazl
2017-05-01
Since the launch of the GRACE satellite mission in 2002, the Earth's gravity field and its temporal variations have been measured with closer inspection. Although these variations are mainly due to mass transfer in land water storage, they can also arise from mass movements associated with natural phenomena including earthquakes, volcanic eruptions, melting of the polar ice caps and glacial isostatic adjustment. This paper therefore examines which earthquake parameters GRACE-like satellite missions are most sensitive to. For this purpose, the parameters of the recent Maule earthquake and of the 1964 Alaska earthquake were chosen, and several of their parameters were varied. The GRACE-like sensitivity is assessed by simulating the earthquakes, along with the gravity changes they cause, using dislocation theory for a half-space Earth. The evaluation covers the faulting parameters of fault length, width, depth and average slip. The results show that GRACE-like missions tend to be more sensitive to fault width than to fault length, and more sensitive to dip variations than to the other parameters. This article can be useful to designers of upcoming mission scenarios and to seismologists in their quest to study fault parameters.
A satellite simulator for TRMM PR applied to climate model simulations
NASA Astrophysics Data System (ADS)
Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.
2017-12-01
Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR in order to evaluate simulations with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) performed within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for the application of satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison with the sub-grid scale convective precipitation variability deduced from TRMM PR observations is carried out.
Stocco, Andrea; Yamasaki, Brianna L; Prat, Chantel S
2018-04-01
This article describes the data analyzed in the paper "Individual differences in the Simon effect are underpinned by differences in the competitive dynamics in the basal ganglia: An experimental verification and a computational model" (Stocco et al., 2017) [1]. The data includes behavioral results from participants performing three cognitive tasks (Probabilistic Stimulus Selection (Frank et al., 2004) [2], Simon task (Craft and Simon, 1970) [3], and Automated Operation Span (Unsworth et al., 2005) [4]), as well as simulated traces generated by a computational neurocognitive model that accounts for individual variations in human performance across the tasks. The experimental data encompasses individual data files (in both preprocessed and native output format) as well as group-level summary files. The simulation data includes the entire model code, the results of a full-grid search of the model's parameter space, and the code used to partition the model space and parallelize the simulations. Finally, the repository includes the R scripts used to carry out the statistical analyses reported in the original paper.
Signal decomposition for surrogate modeling of a constrained ultrasonic design space
NASA Astrophysics Data System (ADS)
Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.
2018-04-01
The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model using a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
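A chirplet decomposition represents each waveform as a sum of Gaussian-windowed linear chirps, so a high-dimensional B-scan trace collapses to a handful of atom parameters. The sketch below is a minimal single-atom, brute-force version of that idea; the parameterization, function names, and coarse grid search are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def chirplet(t, tc, sigma, f0, c, phi=0.0):
    """Gaussian chirplet atom: a Gaussian envelope times a linearly swept cosine.

    tc    -- center time
    sigma -- envelope width
    f0    -- frequency at t = tc
    c     -- chirp (linear frequency sweep) rate
    phi   -- phase offset
    """
    tau = t - tc
    envelope = np.exp(-0.5 * (tau / sigma) ** 2)
    # instantaneous frequency f0 + c * tau -> phase 2*pi*(f0 + 0.5*c*tau)*tau
    return envelope * np.cos(2 * np.pi * (f0 + 0.5 * c * tau) * tau + phi)

def best_atom(signal, t, grid):
    """Matching-pursuit-style fit of a single atom over a coarse parameter grid."""
    best, best_err = None, np.inf
    for params in grid:
        atom = chirplet(t, *params)
        a = np.dot(signal, atom) / np.dot(atom, atom)   # least-squares amplitude
        err = np.sum((signal - a * atom) ** 2)
        if err < best_err:
            best, best_err = (a, params), err
    return best
```

Repeating the fit on the residual yields a multi-atom decomposition, and interpolation can then happen in the recovered (tc, sigma, f0, c) parameter space rather than between raw B-scans.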
Inverse design of bulk morphologies in block copolymers using particle swarm optimization
NASA Astrophysics Data System (ADS)
Khadilkar, Mihir; Delaney, Kris; Fredrickson, Glenn
Multiblock polymers are a versatile platform for creating a large range of nanostructured materials with novel morphologies and properties. However, achieving desired structures or property combinations is difficult due to a vast design space comprised of parameters including monomer species, block sequence, block molecular weights and dispersity, copolymer architecture, and binary interaction parameters. Navigating through such vast design spaces to achieve an optimal formulation for a target structure or property set requires an efficient global optimization tool wrapped around a forward simulation technique such as self-consistent field theory (SCFT). We report on such an inverse design strategy utilizing particle swarm optimization (PSO) as the global optimizer and SCFT as the forward prediction engine. To avoid metastable states in forward prediction, we utilize pseudo-spectral variable cell SCFT initiated from a library of defect free seeds of known block copolymer morphologies. We demonstrate that our approach allows for robust identification of block copolymers and copolymer alloys that self-assemble into a targeted structure, optimizing parameters such as block fractions, blend fractions, and Flory chi parameters.
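A minimal box-constrained PSO of the kind that could wrap a forward prediction engine looks as follows. The gains and names are illustrative, and the toy quadratic objective in the usage note merely stands in for an expensive SCFT-based structure metric:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm minimizer over box-constrained parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest = x.copy()                                  # personal bests
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # enforce physical bounds
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

In an inverse-design setting, `objective` would run a seeded SCFT calculation and return a distance between the predicted and target morphology; here any callable over the bounded parameters works.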
Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D; Volz, Kerstin
2017-06-01
We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space. Copyright © 2017 Elsevier B.V. All rights reserved.
Structural dynamic analysis of the Space Shuttle Main Engine
NASA Technical Reports Server (NTRS)
Scott, L. P.; Jamison, G. T.; Mccutcheon, W. A.; Price, J. M.
1981-01-01
This structural dynamic analysis supports development of the SSME by evaluating components subjected to critical dynamic loads, identifying significant parameters, and evaluating solution methods. Engine operating parameters at both rated and full power levels are considered. Detailed structural dynamic analyses of operationally critical and life-limited components support the assessment of engine design modifications and environmental changes. Engine system test results are used to verify analytic model simulations. The SSME main chamber injector assembly comprises 600 injector elements called LOX posts. The overall LOX post analysis procedure is shown.
Well-Tempered Metadynamics: A Smoothly Converging and Tunable Free-Energy Method
NASA Astrophysics Data System (ADS)
Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele
2008-01-01
We present a method for determining the free-energy dependence on a selected number of collective variables using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of an alanine dipeptide free-energy landscape.
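A 1D sketch of the method's central rule, assuming an overdamped Langevin particle and a grid-stored bias (all parameter values and names are illustrative): each deposited Gaussian's height is scaled by exp(-V_bias/dT), so the bias converges smoothly, and dT interpolates between standard metadynamics (dT large) and unbiased canonical sampling (dT small):

```python
import numpy as np

def wt_metadynamics(force, kT, dT, steps=100000, dt=1e-3, w0=0.05,
                    sigma=0.1, stride=100, grid=None, seed=1):
    """Well-tempered metadynamics for a 1D overdamped Langevin particle.

    force -- physical force -dU/dx
    kT    -- thermal energy
    dT    -- well-tempering energy scale (kB * Delta T)
    Returns the grid and a free-energy estimate F(s) ~ -(1 + kT/dT) * V_bias(s).
    """
    rng = np.random.default_rng(seed)
    if grid is None:
        grid = np.linspace(-2.0, 2.0, 201)
    x = -1.0
    vbias = np.zeros_like(grid)
    for step in range(steps):
        i = np.argmin(np.abs(grid - x))
        fbias = -np.gradient(vbias, grid)[i]          # force from deposited bias
        x += (force(x) + fbias) * dt + rng.normal(0.0, np.sqrt(2 * kT * dt))
        x = np.clip(x, grid[0], grid[-1])
        if step % stride == 0:
            # tempered deposition: height shrinks where bias is already large
            height = w0 * np.exp(-vbias[np.argmin(np.abs(grid - x))] / dT)
            vbias += height * np.exp(-0.5 * ((grid - x) / sigma) ** 2)
    return grid, -(1.0 + kT / dT) * vbias
```

Run on a double well U = 0.5 (x^2 - 1)^2, the recovered free-energy estimate reproduces the barrier at x = 0 separating the two minima.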
Experimental Research Regarding The Motion Capacity Of A Robotic Arm
NASA Astrophysics Data System (ADS)
Dumitru, Violeta Cristina
2015-09-01
This paper describes the experiments developed to obtain the dynamic parameters (force, displacement) of a modular mechanism with multiple vertebrae. This mechanism performs inspection and intervention functions in confined spaces, and its mechanical structure allows the functional parameters to achieve precise movements toward an imposed target. The dynamics of the mechanism are analyzed with the simulation instrument DimamicaRobot.tst under the TestPoint programming environment, together with the elasticity of the tension cables. The mechanism is then modified so that the spatial movement of the robotic arm is optimal.
Taylor, Stephen R; Simon, Joseph; Sampson, Laura
2017-05-05
We introduce a technique for gravitational-wave analysis, where Gaussian process regression is used to emulate the strain spectrum of a stochastic background by training on population-synthesis simulations. This leads to direct Bayesian inference on astrophysical parameters. For pulsar timing arrays specifically, we interpolate over the parameter space of supermassive black-hole binary environments, including three-body stellar scattering, and evolving orbital eccentricity. We illustrate our approach on mock data, and assess the prospects for inference with data similar to the NANOGrav 9-yr data release.
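The emulation step rests on standard Gaussian process regression. A minimal sketch with a squared-exponential kernel follows; the hyperparameters are fixed rather than trained, unlike a production emulator, and the scalar target stands in for a strain-spectrum amplitude:

```python
import numpy as np

def rbf(X1, X2, ell=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between two point sets."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return var * np.exp(-0.5 * d2 / ell**2)

def gp_predict(Xtrain, ytrain, Xtest, ell=1.0, var=1.0, noise=1e-6):
    """GP posterior mean and pointwise standard deviation at Xtest."""
    K = rbf(Xtrain, Xtrain, ell, var) + noise * np.eye(len(Xtrain))
    Ks = rbf(Xtest, Xtrain, ell, var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytrain))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    cov = rbf(Xtest, Xtest, ell, var) - v.T @ v
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

Trained on a modest set of population-synthesis runs, such a regressor interpolates smoothly across the astrophysical parameter space and reports its own uncertainty, which is what makes direct Bayesian inference tractable.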
Intelligent Space Tube Optimization for speeding ground water remedial design.
Kalwij, Ineke M; Peralta, Richard C
2008-01-01
An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with NSS. Stage 1 speeds evaluating assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include space tube radius and number of strategies used to train NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable for many heuristic optimization settings in which the numerical simulator is computationally intensive, and one would like to reduce that burden.
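One ISTO-flavored ingredient - sampling within a trust region, fitting a cheap surrogate to the expensive simulator, and moving toward the surrogate's optimum - can be sketched as follows. This uses a separable quadratic surrogate instead of the neural surrogate simulators and AGA of the paper, and every name and constant is illustrative:

```python
import numpy as np

def surrogate_step(objective, center, radius, n_samples=30, seed=0):
    """One trust-region surrogate step: sample candidates around the current
    best, fit a cheap separable quadratic to the expensive objective, and
    move to the surrogate's minimizer, clipped back into the trust region."""
    rng = np.random.default_rng(seed)
    dim = len(center)
    X = center + rng.uniform(-radius, radius, (n_samples, dim))
    y = np.array([objective(x) for x in X])
    # surrogate y ~ c + b.x + sum_i a_i x_i^2, fitted by least squares
    design = np.hstack([np.ones((n_samples, 1)), X, X**2])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    b, a = coef[1:1 + dim], coef[1 + dim:]
    # minimizer of the separable quadratic, guarded against non-convex fits
    step = np.where(a > 1e-9, -b / (2.0 * np.maximum(a, 1e-9)), center)
    return np.clip(step, center - radius, center + radius)
```

Iterating such steps while shifting the trust region is what lengthens the "space tube" toward the optimum; in ISTO the surrogate fit and inner optimization are far richer, but the sample-fit-move loop is the same.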
Understanding space charge and controlling beam loss in high intensity synchrotrons
NASA Astrophysics Data System (ADS)
Cousineau, Sarah M.
Future high intensity synchrotrons will require unprecedented control of beam loss in order to comply with radiation safety regulations and to allow for safe, hands-on maintenance of machine hardware. A major cause of beam loss in high intensity synchrotrons is the space charge force of the beam, which can lead to beam halo and emittance dilution. This dissertation presents a comprehensive study of space charge effects in high intensity synchrotron beams. Experimental measurements taken at the Proton Storage Ring (PSR) in Los Alamos National Laboratory and detailed simulations of the experiments are used to identify and characterize resonances that affect these beams. The collective motion of the beam is extensively studied and is shown to be more relevant than the single particle dynamics in describing the resonance response. The emittance evolution of the PSR beam and methods for reducing the space-charge-induced emittance growth are addressed. In a separate study, the emittance evolution of an intense space charge beam is experimentally measured at the Cooler Injector Synchrotron (CIS) at Indiana University. This dissertation also investigates the sophisticated two-stage collimation system of the future Spallation Neutron Source (SNS) high intensity accumulator ring. A realistic Monte-Carlo collimation simulation is developed and used to optimize the SNS ring collimation system parameters. The finalized parameters and predicted beam loss distribution around the ring are presented. The collimators will additionally be used in conjunction with a set of fast kickers to remove the beam from the gap region before the rise of the extraction magnets. The gap cleaning process is optimized and the cleaning efficiency versus momentum spread of the beam is examined.
A transformed path integral approach for solution of the Fokker-Planck equation
NASA Astrophysics Data System (ADS)
Subramaniam, Gnana M.; Vedula, Prakash
2017-10-01
A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using the Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.
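For contrast with the transformed approach, the conventional fixed-grid path integral baseline evolves the PDF by repeatedly applying a short-time Gaussian propagator. A 1D sketch, assuming an Euler-Maruyama propagator and illustrative parameters:

```python
import numpy as np

def pi_evolve(p0, grid, drift, D, dt, steps):
    """Evolve a 1D PDF with the standard short-time Gaussian propagator:
    p(x, t+dt) = integral of K(x | x') p(x', t) dx', where
    K(x | x') = Normal(x; x' + drift(x') * dt, 2 * D * dt)."""
    dx = grid[1] - grid[0]
    mean = grid + drift(grid) * dt           # propagator mean per source point
    var = 2.0 * D * dt
    # K[i, j] = density for a transition from grid[j] to grid[i]
    K = np.exp(-0.5 * (grid[:, None] - mean[None, :]) ** 2 / var)
    K /= np.sqrt(2.0 * np.pi * var)
    p = p0.copy()
    for _ in range(steps):
        p = K @ p * dx                       # one short-time step (quadrature)
        p /= p.sum() * dx                    # renormalize grid-truncation loss
    return p
```

For an Ornstein-Uhlenbeck process (drift -x, diffusion D) the stationary density is Gaussian with variance D, which gives a direct check on the propagation. On a fixed grid like this, strong drift or concentrated PDFs force very fine grids; the TPI reformulation addresses exactly that limitation by adapting the grid through the transformation.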
Dupas, Laura; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre; Boulant, Nicolas
2015-06-01
The spokes method combined with parallel transmission is a promising technique to mitigate the B1(+) inhomogeneity at ultra-high field in 2D imaging. To date however, the spokes placement optimization combined with the magnitude least squares pulse design has never been done in direct conjunction with the explicit Specific Absorption Rate (SAR) and hardware constraints. In this work, the joint optimization of 2-spoke trajectories and RF subpulse weights is performed under these constraints explicitly and in the small tip angle regime. The problem is first considerably simplified by making the observation that only the vector between the 2 spokes is relevant in the magnitude least squares cost-function, thereby reducing the size of the parameter space and allowing a more exhaustive search. The algorithm starts from a set of initial k-space candidates and performs in parallel for all of them optimizations of the RF subpulse weights and the k-space locations simultaneously, under explicit SAR and power constraints, using an active-set algorithm. The dimensionality of the spoke placement parameter space being low, the RF pulse performance is computed for every location in k-space to study the robustness of the proposed approach with respect to initialization, by looking at the probability to converge towards a possible global minimum. Moreover, the optimization of the spoke placement is repeated with an increased pulse bandwidth in order to investigate the impact of the constraints on the result. Bloch simulations and in vivo T2(∗)-weighted images acquired at 7 T validate the approach. The algorithm returns simulated normalized root mean square errors systematically smaller than 5% in 10 s. Copyright © 2015 Elsevier Inc. All rights reserved.
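The magnitude least squares subproblem, minimizing || |Ax| - m || over complex weights x, is commonly attacked by variable exchange: fix the phase of the current profile, solve the resulting ordinary least-squares problem, and repeat. A minimal unconstrained sketch (the paper's design additionally enforces the explicit SAR and power constraints with an active-set algorithm, which is not reproduced here):

```python
import numpy as np

def magnitude_least_squares(A, target, iters=100):
    """Variable-exchange iteration for min_x || |A x| - target ||_2.

    Each pass fixes the phase phi of the current profile A x and solves the
    linear problem min_x || A x - target * exp(i phi) ||_2, which can only
    decrease the magnitude-domain residual (monotone convergence).
    """
    x = np.linalg.lstsq(A, target.astype(complex), rcond=None)[0]
    for _ in range(iters):
        phase = np.exp(1j * np.angle(A @ x))
        x = np.linalg.lstsq(A, target * phase, rcond=None)[0]
    return x
```

In the pulse-design setting, the rows of A would encode the B1+ maps and spoke phase ramps at each voxel, and the target m is the desired flip-angle magnitude profile.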
NASA Astrophysics Data System (ADS)
Santos, Léonard; Thirel, Guillaume; Perrin, Charles
2018-04-01
In many conceptual rainfall-runoff models, the water balance differential equations are not explicitly formulated. These differential equations are solved sequentially by splitting the equations into terms that can be solved analytically, a technique called operator splitting. As a result, only the solutions of the split equations are used to present the different models. This article provides a methodology to make the governing water balance equations of a bucket-type rainfall-runoff model explicit and to solve them continuously. This is done by setting up a comprehensive state-space representation of the model. By representing it in this way, the operator splitting, which makes the structural analysis of the model more complex, could be removed. In this state-space representation, the lag functions (unit hydrographs), which are frequent in rainfall-runoff models and make the resolution of the representation difficult, are first replaced by a so-called Nash cascade and then solved with a robust numerical integration technique. To illustrate this methodology, the GR4J model is taken as an example. The substitution of the unit hydrographs with a Nash cascade, even if it modifies the model behaviour when solved using operator splitting, does not modify it when the state-space representation is solved using an implicit integration technique. Indeed, the flow time series simulated by the new representation of the model are very similar to those simulated by the classic model. The use of a robust numerical technique that approximates a continuous-time model also improves the lag parameter consistency across time steps and provides a more time-consistent model with time-independent parameters.
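A Nash cascade is simply a chain of identical linear reservoirs, and solving it with an implicit scheme is straightforward. A minimal sketch, assuming an implicit Euler step and illustrative variable names (the paper uses a more robust implicit integrator within the full state-space model):

```python
import numpy as np

def nash_cascade(inflow, k, n, dt=1.0):
    """Route an inflow series through n identical linear reservoirs
    (dS_i/dt = inflow_i - k * S_i, with inflow_i = k * S_{i-1}) using an
    implicit Euler step, which is unconditionally stable and exactly
    conserves mass in the discrete water balance."""
    S = np.zeros(n)
    out = np.empty(len(inflow))
    a = 1.0 / (1.0 + k * dt)         # implicit Euler factor per reservoir
    for t, u in enumerate(inflow):
        prev = u                      # input rate to the first reservoir
        for i in range(n):
            S[i] = a * (S[i] + prev * dt)
            prev = k * S[i]           # outflow feeds the next reservoir
        out[t] = prev
    return out
```

An impulse of water routed through the cascade emerges delayed and smoothed, playing the role of the unit hydrograph, and because the scheme is a discretization of the continuous ODEs, the lag parameter keeps its meaning across time steps.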
Cosmic velocity-gravity relation in redshift space
NASA Astrophysics Data System (ADS)
Colombi, Stéphane; Chodorowski, Michał J.; Teyssier, Romain
2007-02-01
We propose a simple way to estimate the parameter β ≃ Ω^0.6/b from 3D galaxy surveys, where Ω is the non-relativistic matter-density parameter of the Universe and b is the bias between the galaxy distribution and the total matter distribution. Our method consists in measuring the relation between the cosmological velocity and gravity fields, and thus requires peculiar velocity measurements. The relation is measured directly in redshift space, so there is no need to reconstruct the density field in real space. In linear theory, the radial components of the gravity and velocity fields in redshift space are expected to be tightly correlated, with a slope that, in the distant observer approximation, is a simple function of β. We test this relation extensively using controlled numerical experiments based on a cosmological N-body simulation. To perform the measurements, we propose a new and rather simple adaptive interpolation scheme to estimate the velocity and the gravity field on a grid. One of the most striking results is that non-linear effects, including `fingers of God', affect mainly the tails of the joint probability distribution function (PDF) of the velocity and gravity field: the 1-1.5 σ region around the maximum of the PDF is dominated by the linear theory regime, both in real and redshift space. This is understood explicitly by using the spherical collapse model as a proxy of non-linear dynamics. Applications of the method to real galaxy catalogues are discussed, including a preliminary investigation on homogeneous (volume-limited) `galaxy' samples extracted from the simulation with simple prescriptions based on halo and substructure identification, to quantify the effects of the bias between the galaxy distribution and the total matter distribution, as well as the effects of shot noise.
A real-time digital computer program for the simulation of automatic spacecraft reentries
NASA Technical Reports Server (NTRS)
Kaylor, J. T.; Powell, L. F.; Powell, R. W.
1977-01-01
The automatic reentry flight dynamics simulator, a nonlinear, six-degree-of-freedom simulation, digital computer program, has been developed. The program includes a rotating, oblate earth model for accurate navigation calculations and contains adjustable gains on the aerodynamic stability and control parameters. This program uses a real-time simulation system and is designed to examine entries of vehicles which have constant mass properties whose attitudes are controlled by both aerodynamic surfaces and reaction control thrusters, and which have automatic guidance and control systems. The program has been used to study the space shuttle orbiter entry. This report includes descriptions of the equations of motion used, the control and guidance schemes that were implemented, the program flow and operation, and the hardware involved.
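For flavor, the kind of trajectory integration such a program performs can be reduced to a planar point-mass entry over a non-rotating spherical Earth with an exponential atmosphere; this is drastically simpler than the program's six-degree-of-freedom, rotating oblate-Earth model, and every constant below is illustrative:

```python
import numpy as np

def ballistic_entry(v0=7500.0, gamma0_deg=-2.0, h0=120e3, beta=400.0, dt=0.1):
    """Planar, lift-free point-mass entry; returns peak deceleration in g.

    beta -- ballistic coefficient m / (Cd * A) [kg/m^2]
    """
    RE, g0, rho0, hs = 6.371e6, 9.81, 1.225, 7200.0   # Earth, scale height
    gamma = np.radians(gamma0_deg)                     # flight-path angle
    v, h, peak = v0, h0, 0.0
    while h > 0.0 and v > 50.0:
        rho = rho0 * np.exp(-h / hs)                   # exponential atmosphere
        g = g0 * (RE / (RE + h)) ** 2
        drag = rho * v**2 / (2.0 * beta)               # deceleration [m/s^2]
        v += (-drag - g * np.sin(gamma)) * dt
        gamma += (v / (RE + h) - g / v) * np.cos(gamma) * dt
        h += v * np.sin(gamma) * dt
        peak = max(peak, drag / g0)
    return peak
```

Even this stripped-down model reproduces the characteristic deceleration pulse of a shallow ballistic entry; the real program adds attitude dynamics, aerodynamic surfaces, reaction control thrusters, and closed-loop guidance on top of such equations of motion.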
An improved cooperative adaptive cruise control (CACC) algorithm considering invalid communication
NASA Astrophysics Data System (ADS)
Wang, Pangwei; Wang, Yunpeng; Yu, Guizhen; Tang, Tieqiao
2014-05-01
For the Cooperative Adaptive Cruise Control (CACC) algorithm, existing research mainly focuses on how inter-vehicle communication can be used to develop the CACC controller and on the influence of communication delays and actuator lags on string stability. However, whether string stability can be guaranteed when inter-vehicle communication is partially invalid has hardly been considered. This paper presents an improved CACC algorithm based on sliding mode control theory and analyzes the range of CACC controller parameters that maintains string stability. A dynamic model of vehicle spacing deviation in a platoon is then established, and the string stability conditions under the improved CACC are analyzed. Unlike traditional CACC algorithms, the proposed algorithm can ensure the functionality of the CACC system even if inter-vehicle communication is partially invalid. Finally, a platoon of five vehicles is simulated with the improved CACC algorithm in MATLAB/Simulink, and the results demonstrate that the improved algorithm can maintain the string stability of a CACC platoon by adjusting the controller parameters and enlarging the spacing to prevent accidents. With guaranteed string stability, the proposed CACC algorithm can prevent oscillation of vehicle spacing and reduce chain-collision accidents under real-world circumstances.
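The degraded-communication idea can be illustrated with a constant-time-headway platoon sketch. The PD-plus-feedforward law, gains, and fallback rule below are illustrative assumptions, not the paper's sliding-mode controller: with communication, each follower feeds forward its predecessor's acceleration (CACC); without it, the feedforward is dropped and the spacing is enlarged:

```python
import numpy as np

def simulate_platoon(n=5, steps=600, dt=0.1, comm_ok=True):
    """Simulate a constant-time-headway platoon; returns (max spacing error, final speeds)."""
    kp, kd = 0.45, 1.2
    kff = 0.6 if comm_ok else 0.0        # predecessor-acceleration feedforward
    h = 1.2 if comm_ok else 2.4          # time headway [s], enlarged without comm
    d0, v0 = 5.0, 20.0                   # standstill distance [m], initial speed
    pos = -(d0 + h * v0) * np.arange(n)  # start at the desired spacing
    vel = np.full(n, v0)
    acc = np.zeros(n)
    max_err = 0.0
    for t in range(steps):
        new_acc = np.zeros(n)
        new_acc[0] = 2.0 if t * dt < 3.0 else 0.0    # leader accelerates for 3 s
        for i in range(1, n):
            err = (pos[i - 1] - pos[i]) - (d0 + h * vel[i])   # spacing deviation
            new_acc[i] = kp * err + kd * (vel[i - 1] - vel[i]) + kff * acc[i - 1]
            max_err = max(max_err, abs(err))
        acc = new_acc
        vel += acc * dt
        pos += vel * dt
    return max_err, vel
```

Running both modes shows the qualitative point: the platoon remains stable either way, with communication loss traded against a larger operating gap.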
NASA Astrophysics Data System (ADS)
Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna
2016-05-01
State-space models (SSMs) are increasingly used in ecology to model time series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
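The linear Gaussian SSM at the heart of such a simulation study, together with its exact Kalman-filter likelihood, fits in a few lines (scalar case; parameter names are illustrative):

```python
import numpy as np

def simulate_ssm(T, phi, q, r, seed=0):
    """Simulate x_t = phi * x_{t-1} + N(0, q) (process) and
    y_t = x_t + N(0, r) (observation)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal(0.0, np.sqrt(q))
    y = x + rng.normal(0.0, np.sqrt(r), T)
    return x, y

def kalman_loglik(y, phi, q, r):
    """Exact Gaussian log-likelihood of y under the SSM via the Kalman filter."""
    m, P, ll = 0.0, 1.0, 0.0
    for yt in y:
        m_pred, P_pred = phi * m, phi**2 * P + q       # predict
        S = P_pred + r                                 # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (yt - m_pred) ** 2 / S)
        K = P_pred / S                                 # Kalman gain
        m = m_pred + K * (yt - m_pred)                 # update
        P = (1.0 - K) * P_pred
    return ll
```

Profiling this likelihood over q and r on simulated series makes the identifiability problem visible: when r is much larger than q, the likelihood surface flattens and the process and measurement variances become hard to separate, which is the regime the study warns about.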