NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for the systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depend on the ability of the available lower-fidelity model or suite of models to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable physical fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code, FUN2D, evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
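For context, the first-order consistency that model management enforces can be stated compactly. The multiplicative (beta) correction below is one standard way to build a corrected low-fidelity model that matches the high-fidelity value and gradient at the current iterate; the notation is illustrative and not quoted from the paper.

```latex
% Corrected low-fidelity model via a first-order beta correction at iterate x_k:
\tilde{f}_{\mathrm{lo}}(x) = \beta(x)\, f_{\mathrm{lo}}(x), \qquad
\beta(x) = \beta_k + \nabla\beta(x_k)^{\top}(x - x_k), \qquad
\beta_k = \frac{f_{\mathrm{hi}}(x_k)}{f_{\mathrm{lo}}(x_k)}, \qquad
\nabla\beta(x_k) = \frac{\nabla f_{\mathrm{hi}}(x_k) - \beta_k\,\nabla f_{\mathrm{lo}}(x_k)}{f_{\mathrm{lo}}(x_k)}
% These choices give \tilde{f}_{\mathrm{lo}}(x_k) = f_{\mathrm{hi}}(x_k) and
% \nabla\tilde{f}_{\mathrm{lo}}(x_k) = \nabla f_{\mathrm{hi}}(x_k): first-order consistency.
```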
Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling.
Perdikaris, P; Raissi, M; Damianou, A; Lawrence, N D; Karniadakis, G E
2017-02-01
Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.
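A minimal sketch of the nonlinear autoregressive idea, in a simplified form: the high-fidelity response is regressed on the input augmented with the low-fidelity prediction, so the cross-correlation between fidelities can be nonlinear and space-dependent. The toy models, kernel choices, and use of scikit-learn are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Invented fidelity pair with a nonlinear cross-correlation
f_lo = lambda x: np.sin(8 * np.pi * x)
f_hi = lambda x: (x - np.sqrt(2.0)) * f_lo(x) ** 2

x_lo = np.linspace(0, 1, 50)[:, None]   # many cheap low-fidelity samples
x_hi = np.linspace(0, 1, 8)[:, None]    # few expensive high-fidelity samples

# Level 1: GP on the low-fidelity data alone
gp_lo = GaussianProcessRegressor(ConstantKernel() * RBF())
gp_lo.fit(x_lo, f_lo(x_lo).ravel())

# Level 2: regress f_hi on (x, f_lo(x)); the extra coordinate lets the GP
# learn a nonlinear, space-dependent map between fidelities instead of a
# fixed linear scaling
aug_hi = np.hstack([x_hi, gp_lo.predict(x_hi).reshape(-1, 1)])
gp_hi = GaussianProcessRegressor(ConstantKernel() * RBF())
gp_hi.fit(aug_hi, f_hi(x_hi).ravel())

# Prediction pushes test points through level 1, then level 2
x_test = np.linspace(0, 1, 200)[:, None]
aug_test = np.hstack([x_test, gp_lo.predict(x_test).reshape(-1, 1)])
y_mean, y_std = gp_hi.predict(aug_test, return_std=True)
```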
High-Fidelity Simulations of Electromagnetic Propagation and RF Communication Systems
2017-05-01
... in addition to high-fidelity RF propagation modeling, lower-fidelity models, which are less computationally burdensome, are available via a C++ API ... expensive to perform, requiring roughly one hour of computer time with 36 available cores and ray tracing performed by a single high-end GPU ... ERDC TR-17-2, Military Engineering Applied Research: High-Fidelity Simulations of Electromagnetic Propagation and RF Communication
Benefits of computer screen-based simulation in learning cardiac arrest procedures.
Bonnetain, Elodie; Boucheix, Jean-Michel; Hamet, Maël; Freysz, Marc
2010-07-01
What is the best way to train medical students early so that they acquire basic skills in cardiopulmonary resuscitation as effectively as possible? Studies have shown the benefits of high-fidelity patient simulators, but have also demonstrated their limits. New computer screen-based multimedia simulators have fewer constraints than high-fidelity patient simulators. In this area, as yet, there has been no research on the effectiveness of transfer of learning from a computer screen-based simulator to more realistic situations such as those encountered with high-fidelity patient simulators. We tested the benefits of learning cardiac arrest procedures using a multimedia computer screen-based simulator in 28 Year 2 medical students. Just before the end of the traditional resuscitation course, we compared two groups. An experiment group (EG) was first asked to learn to perform the appropriate procedures in a cardiac arrest scenario (CA1) in the computer screen-based learning environment and was then tested on a high-fidelity patient simulator in another cardiac arrest simulation (CA2). While the EG was learning to perform CA1 procedures in the computer screen-based learning environment, a control group (CG) actively continued to learn cardiac arrest procedures using practical exercises in a traditional class environment. Both groups were given the same amount of practice, exercises and trials. The CG was then also tested on the high-fidelity patient simulator for CA2, after which it was asked to perform CA1 using the computer screen-based simulator. Performances with both simulators were scored on a precise 23-point scale. On the test on a high-fidelity patient simulator, the EG trained with a multimedia computer screen-based simulator performed significantly better than the CG trained with traditional exercises and practice (16.21 versus 11.13 of 23 possible points, respectively; p<0.001). Computer screen-based simulation appears to be effective in preparing learners to use high-fidelity patient simulators, which present simulations that are closer to real-life situations.
Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers
NASA Technical Reports Server (NTRS)
Guruswamy, Guru; VanDalsem, William (Technical Monitor)
1994-01-01
Aeroelasticity, which involves strong coupling of fluids, structures, and controls, is an important element in designing an aircraft. Computational aeroelasticity using low-fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low-fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. HSCT can experience vortex-induced aeroelastic oscillations, whereas AST can experience transonic-buffet-associated structural oscillations. Both aircraft may experience a dip in the flutter speed in the transonic regime. For accurate aeroelastic computations in these complex fluid/structure interaction situations, high-fidelity equations such as the Navier-Stokes for fluids and the finite elements for structures are needed. Computations using these high-fidelity equations require large computational resources both in memory and speed. Conventional supercomputers have reached their limitations both in memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper will address the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers. The paper will address special techniques needed to take advantage of the architecture of new parallel computers. Results will be illustrated from computations made on the iPSC/860 and IBM SP2 computers by using the ENSAERO code, which directly couples the Euler/Navier-Stokes flow equations with high-resolution finite-element structural equations.
NASA Astrophysics Data System (ADS)
Takemiya, Tetsushi
In modern aerospace engineering, the physics-based computational design method is becoming more important, as it is more efficient than experiments and because it is more suitable for designing new types of aircraft (e.g., unmanned aerial vehicles or supersonic business jets) than the conventional design method, which relies heavily on historical data. To enhance the reliability of the physics-based computational design method, researchers have made tremendous efforts to improve the fidelity of models. However, high-fidelity models require longer computational time, so the advantage of efficiency is partially lost. This problem has been overcome with the development of variable fidelity optimization (VFO). In VFO, different fidelity models are simultaneously employed in order to improve the speed and the accuracy of convergence in an optimization process. Among the various types of VFO methods, one of the most promising is the approximation management framework (AMF). In the AMF, objective and constraint functions of a low-fidelity model are scaled at a design point so that the scaled functions, which are referred to as "surrogate functions," match those of a high-fidelity model. Since the scaling functions and the low-fidelity model constitute the surrogate functions, evaluating the surrogate functions is faster than evaluating the high-fidelity model. Therefore, in the optimization process, in which gradient-based optimization is implemented and thus many function calls are required, the surrogate functions are used instead of the high-fidelity model to obtain a new design point. The best feature of the AMF is that it may converge to a local optimum of the high-fidelity model in much less computational time than optimization with the high-fidelity model alone. However, through literature surveys and implementations of the AMF, the author found that (1) the AMF is very vulnerable when the computational analysis models have numerical noise, which is very common in high-fidelity models, and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem, the automatic differentiation (AD) technique, which reads the code of an analysis model and automatically generates new derivative code based on mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through AD requires a massive memory size. The author solved this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied AD to general CFD software. In order to solve the second problem, the governing equation of the trust region ratio, which is very strict against the violation of constraints, is modified so that it can accept the violation of constraints within some tolerance.
By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. With these modifications, the AMF is referred to as "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite differentiation (FD) method, and then the Robust AMF is implemented along with the sequential quadratic programming (SQP) optimization method with only high-fidelity models. The proposed AD method computes derivatives more accurately and faster than the FD method, and the Robust AMF successfully optimizes the shapes of the airfoil and the wing in a much shorter time than SQP with only high-fidelity models. These results clearly show the effectiveness of the Robust AMF. Finally, the feasibility of reducing computational time for calculating derivatives and the necessity of an AMF with an optimum design point always in the feasible region are discussed as future work.
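The trust-region machinery being modified can be illustrated generically. The sketch below shows a textbook trust-region ratio test relaxed to tolerate bounded constraint violation, in the spirit of the Robust AMF modification; every name, constant, and the crude sampling-based subproblem solver is a placeholder rather than the dissertation's algorithm.

```python
import numpy as np

def trust_region_step(f_hi, c_hi, surrogate, x, radius,
                      eta=0.1, viol_tol=1e-3, shrink=0.5, grow=2.0):
    """One generic trust-region update with a relaxed feasibility test.

    f_hi, c_hi : high-fidelity objective and constraint functions
                 (c <= 0 feasible); surrogate approximates f_hi.
    Illustrative only."""
    # Candidate step: minimize the surrogate inside the trust region
    # (crude random sampling here; an AMF would run a gradient-based solver)
    candidates = x + radius * (2.0 * np.random.rand(128, x.size) - 1.0)
    x_new = candidates[np.argmin([surrogate(c) for c in candidates])]

    actual = f_hi(x) - f_hi(x_new)                 # actual reduction
    predicted = surrogate(x) - surrogate(x_new)    # predicted reduction
    rho = actual / predicted if predicted != 0.0 else 0.0

    # Relaxed test: tolerate bounded constraint violation instead of
    # rejecting (and eventually terminating on) any violating step
    violation = max(0.0, float(np.max(c_hi(x_new))))
    if rho > eta and violation <= viol_tol:
        return x_new, grow * radius      # accept step, expand region
    return x, shrink * radius            # reject step, contract region
```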
Lattice Boltzmann for Airframe Noise Predictions
NASA Technical Reports Server (NTRS)
Barad, Michael; Kocheemoolayil, Joseph; Kiris, Cetin
2017-01-01
The goal is to increase predictive use of high-fidelity Computational Aero-Acoustics (CAA) capabilities for NASA's next-generation aviation concepts. CFD has been utilized substantially in analysis and design for steady-state problems (RANS), but computational resources are extremely challenged by high-fidelity unsteady problems (e.g., unsteady loads, buffet boundary, jet and installation noise, fan noise, active flow control, and airframe noise). Needed are novel techniques for reducing the computational resources consumed by current high-fidelity CAA, routine acoustic analysis of aircraft components at full-scale Reynolds number from first principles, and an order-of-magnitude reduction in wall time to solution.
Multi-fidelity stochastic collocation method for computation of statistical moments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu
We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems in the presence of models with different fidelities. The method extends a previously developed multi-fidelity approximation method. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound of the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.
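The paper's method is collocation-based, but the economics it exploits, many cheap low-fidelity samples corrected by a few paired high-fidelity runs, can be illustrated with a simple two-fidelity control-variate estimate of a mean (a related but distinct technique; the toy models are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
f_hi = lambda z: np.exp(-z ** 2) * np.cos(4 * z)   # "expensive" toy model
f_lo = lambda z: (1 - z ** 2) * np.cos(4 * z)      # cheap approximation

z_many = rng.standard_normal(100_000)   # inputs run only through f_lo
z_few = rng.standard_normal(100)        # inputs run through BOTH models

# E[f_hi] estimated as the cheap mean plus a paired-difference correction
mean_lo = f_lo(z_many).mean()
correction = (f_hi(z_few) - f_lo(z_few)).mean()
mean_hi_est = mean_lo + correction
```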
Surrogate based wind farm layout optimization using manifold mapping
NASA Astrophysics Data System (ADS)
Kaja Kamaludeen, Shaafi M.; van Zuijlen, Alexander; Bijl, Hester
2016-09-01
High computational cost associated with high-fidelity wake models such as RANS or LES is the primary bottleneck to performing direct high-fidelity wind farm layout optimization (WFLO) using accurate CFD-based wake models. Therefore, a surrogate-based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using an SBO method referred to as manifold mapping (MM). As a verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate-based methodology, and the performance was compared with that of direct optimization using the high-fidelity model. Significant reduction in computational cost was achieved using MM: a maximum computational cost reduction of 65%, while arriving at the same optimum as direct high-fidelity optimization. The similarity between the responses of the models and the number and position of the mapping points strongly influence the computational efficiency of the proposed method. As a proof of concept, a realistic WFLO of a small 7-turbine wind farm is performed using the proposed surrogate-based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with very few fine-model simulations.
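A greatly simplified, scalar flavor of manifold mapping is sketched below: the coarse model's response is re-mapped onto the fine model at the current iterate, and the mapped surrogate is optimized. One could imagine x as a turbine spacing and the two functions as differently tuned analytical wake models. Real manifold mapping maps vector-valued responses; this one-dimensional version and all names in it are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mm_optimize(f_hi, f_lo, x0, iters=10, lam=1.0):
    """Scalar, greatly simplified manifold-mapping loop (illustrative).

    At each iterate the coarse response is shifted and scaled so it agrees
    with the fine response locally; the mapped surrogate is optimized in a
    local interval, and the scaling is refreshed from the new pair of
    fine/coarse evaluations."""
    x = x0
    for _ in range(iters):
        fh, fl = f_hi(x), f_lo(x)
        surrogate = lambda t, fh=fh, fl=fl: fh + lam * (f_lo(t) - fl)
        x_new = minimize_scalar(surrogate, bounds=(x - 1.0, x + 1.0),
                                method="bounded").x
        if abs(f_lo(x_new) - fl) > 1e-12:       # refresh response scaling
            lam = (f_hi(x_new) - fh) / (f_lo(x_new) - fl)
        x = x_new
    return x
```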
Tokunaga, Yuuki; Kuwashiro, Shin; Yamamoto, Takashi; Koashi, Masato; Imoto, Nobuyuki
2008-05-30
We experimentally demonstrate a simple scheme for generating a four-photon entangled cluster state with fidelity over 0.860 ± 0.015. We show that the fidelity is high enough to guarantee that the produced state is distinguished from Greenberger-Horne-Zeilinger, W, and Dicke types of genuine four-qubit entanglement. We also demonstrate basic operations of one-way quantum computing using the produced state and show that the output state fidelities surpass classical bounds, which indicates that the entanglement in the produced state essentially contributes to the quantum operation.
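State fidelity here is F = <psi|rho|psi> against the ideal cluster state. A small numerical illustration, assuming an ideal 4-qubit linear cluster state subjected to global depolarizing noise (an assumed noise model, not the experiment's):

```python
import numpy as np

# Ideal 4-qubit linear cluster state: |+> on each qubit, then CZ on neighbors
plus = np.ones(2) / np.sqrt(2)
psi = plus
for _ in range(3):
    psi = np.kron(psi, plus)

def cz(state, a, b, n=4):
    """Apply a controlled-Z between qubits a and b (0-indexed)."""
    out = state.copy()
    for idx in range(2 ** n):
        if (idx >> (n - 1 - a)) & 1 and (idx >> (n - 1 - b)) & 1:
            out[idx] = -out[idx]
    return out

for a, b in [(0, 1), (1, 2), (2, 3)]:
    psi = cz(psi, a, b)

# Fidelity F = <psi|rho|psi> of a globally depolarized copy (assumed noise)
p = 0.85
rho = p * np.outer(psi, psi) + (1 - p) * np.eye(16) / 16
F = psi @ rho @ psi            # = p + (1 - p)/16, about 0.859
```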
Mejía, Vilma; Gonzalez, Carlos; Delfino, Alejandro E; Altermatt, Fernando R; Corvetto, Marcia A
The primary purpose of this study was to compare the effect of high-fidelity simulation versus computer-based case-solving self-study on the acquisition of malignant hyperthermia management skills by first-year anesthesiology residents. After institutional ethics committee approval, 31 first-year anesthesiology residents were enrolled in this prospective, randomized, single-blinded study. Participants were randomized to either a high-fidelity simulation scenario or a computer-based case study about malignant hyperthermia. After the intervention, all subjects' performance was assessed through a high-fidelity simulation scenario using a previously validated assessment rubric. Additionally, knowledge tests and a satisfaction survey were applied. Finally, a semi-structured interview was conducted to assess self-perception of the reasoning process and decision-making. Twenty-eight first-year residents successfully finished the study. Residents' management skill scores were globally higher with high-fidelity simulation than with the case study, although differences were significant in only 4 of the 8 performance rubric elements: recognizing signs and symptoms (p = 0.025), prioritization of initial management actions (p = 0.003), recognizing complications (p = 0.025), and communication (p = 0.025). Average scores on the pre- and post-test knowledge questionnaires improved from 74% to 85% in the high-fidelity simulation group and decreased from 78% to 75% in the case study group (p = 0.032). Regarding the qualitative analysis, there was no difference in the factors influencing the students' process of reasoning and decision-making between the two teaching strategies. Simulation-based training with a malignant hyperthermia high-fidelity scenario was superior to computer-based case study, improving knowledge and skills in malignant hyperthermia crisis management, with a very good satisfaction level among anesthesia residents. Copyright © 2018 Sociedade Brasileira de Anestesiologia. Publicado por Elsevier Editora Ltda. All rights reserved.
NASA Astrophysics Data System (ADS)
Bryson, Dean Edward
A model's level of fidelity may be defined as its accuracy in faithfully reproducing a quantity or behavior of interest of a real system. Increasing the fidelity of a model often goes hand in hand with increasing its cost in terms of time, money, or computing resources. The traditional aircraft design process relies upon low-fidelity models for expedience and resource savings. However, the reduced accuracy and reliability of low-fidelity tools often lead to the discovery of design defects or inadequacies late in the design process. These deficiencies result either in costly changes or in the acceptance of a configuration that does not meet expectations. The unknown opportunity cost is the discovery of superior vehicles that leverage phenomena unknown to the designer and not illuminated by low-fidelity tools. Multifidelity methods attempt to blend the increased accuracy and reliability of high-fidelity models with the reduced cost of low-fidelity models. In building surrogate models, where mathematical expressions are used to cheaply approximate the behavior of costly data, low-fidelity models may be sampled extensively to resolve the underlying trend, while high-fidelity data are reserved to correct inaccuracies at key locations. Similarly, in design optimization a low-fidelity model may be queried many times in the search for new, better designs, with a high-fidelity model being exercised only once per iteration to evaluate the candidate design. In this dissertation, a new multifidelity, gradient-based optimization algorithm is proposed. It differs from the standard trust region approach in several ways, stemming from the new method's maintenance of an approximation of the inverse Hessian, that is, the underlying curvature of the design problem. Whereas the typical trust region approach performs a full sub-optimization using the low-fidelity model at every iteration, the new technique finds a suitable descent direction and focuses the search along it, reducing the number of low-fidelity evaluations required. This narrowing of the search domain also alleviates the burden on the surrogate model corrections between the low- and high-fidelity data. Rather than requiring the surrogate to be accurate in a hyper-volume bounded by the trust region, the model needs only to be accurate along the forward-looking search direction. Maintaining the approximate inverse Hessian also allows the multifidelity algorithm to revert to high-fidelity optimization at any time. In contrast, the standard approach has no memory of the previously computed high-fidelity data. The primary disadvantage of the proposed algorithm is that it may require modifications to the optimization software, whereas standard optimizers may be used as black-box drivers in the typical trust region method. A multifidelity, multidisciplinary simulation of aeroelastic vehicle performance is developed to demonstrate the optimization method. The numerical physics models include body-fitted Euler computational fluid dynamics; linear, panel aerodynamics; linear, finite-element computational structural mechanics; and reduced, modal structural bases. A central element of the multifidelity, multidisciplinary framework is a shared parametric, attributed geometric representation that ensures the analysis inputs are consistent between disciplines and fidelities. The attributed geometry also enables the transfer of data between disciplines.
The new optimization algorithm, a standard trust region approach, and a single-fidelity quasi-Newton method are compared for a series of analytic test functions, using both polynomial chaos expansions and kriging to correct discrepancies between fidelity levels of data. In the aggregate, the new method requires fewer high-fidelity evaluations than the trust region approach in 51% of cases, and the same number of evaluations in 18%. The new approach also requires fewer low-fidelity evaluations, by up to an order of magnitude, in almost all cases. The efficacy of both multifidelity methods compared to single-fidelity optimization depends significantly on the behavior of the high-fidelity model and the quality of the low-fidelity approximation, though savings are realized in a large number of cases. The multifidelity algorithm is also compared to the single-fidelity quasi-Newton method for complex aeroelastic simulations. The vehicle design problem includes variables for planform shape, structural sizing, and cruise condition with constraints on trim and structural stresses. Considering the objective function reduction versus computational expenditure, the multifidelity process performs better in three of four cases in early iterations. However, the enforcement of a contracting trust region slows the multifidelity progress. Even so, leveraging the approximate inverse Hessian, the optimization can be seamlessly continued using high-fidelity data alone. Ultimately, the proposed new algorithm produced better designs in all four cases. Investigating the return on investment in terms of design improvement per computational hour confirms that the multifidelity advantage is greatest in early iterations, and managing the transition to high-fidelity optimization is critical.
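The curvature memory central to the method is the quasi-Newton inverse-Hessian update. The standard BFGS form is sketched below; how the dissertation schedules these updates across surrogate and high-fidelity steps is not reproduced.

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """Standard BFGS update of an inverse-Hessian approximation H.

    s = x_new - x_old, y = grad_new - grad_old. Because H only needs
    (s, y) pairs, the same curvature memory can keep accumulating whether
    a step was driven by the surrogate or by the high-fidelity model,
    which is what permits reverting to pure high-fidelity optimization."""
    rho = 1.0 / (y @ s)          # assumes the curvature condition y.s > 0
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```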
A multi-fidelity framework for physics based rotor blade simulation and optimization
NASA Astrophysics Data System (ADS)
Collins, Kyle Brian
New helicopter rotor designs are desired that offer increased efficiency, reduced vibration, and reduced noise. Rotor designers in industry need methods that allow them to use the most accurate simulation tools available to search for these optimal designs. Computer-based rotor analysis and optimization have been advanced by the development of industry-standard codes known as "comprehensive" rotorcraft analysis tools. These tools typically use table look-up aerodynamics and simplified inflow models and perform aeroelastic analysis using Computational Structural Dynamics (CSD). Due to the simplified aerodynamics, most design studies are performed by varying structure-related design variables like sectional mass and stiffness. The optimization of shape-related variables in forward flight using these tools is complicated, and results are viewed with skepticism because rotor blade loads are not accurately predicted. The most accurate methods of rotor simulation utilize Computational Fluid Dynamics (CFD) but have historically been considered too computationally intensive to be used in computer-based optimization, where numerous simulations are required. An approach is needed where high-fidelity CFD rotor analysis can be utilized in a shape-variable optimization problem with multiple objectives. Any approach should be capable of working in forward flight in addition to hover. An alternative is proposed, founded on the idea that efficient hybrid CFD methods of rotor analysis are ready to be used in preliminary design. In addition, the proposed approach recognizes the usefulness of lower-fidelity physics-based analysis and surrogate modeling. Together, they are used with high-fidelity analysis in an intelligent process of surrogate model building of parameters in the high-fidelity domain. Closing the loop between high- and low-fidelity analysis is a key aspect of the proposed approach. This is done by using information from higher-fidelity analysis to improve predictions made with lower-fidelity models. This thesis documents the development of automated low- and high-fidelity physics-based rotor simulation frameworks. The low-fidelity framework uses a comprehensive code with simplified aerodynamics. The high-fidelity model uses a parallel-processor-capable CFD/CSD methodology. Both low- and high-fidelity frameworks include an aeroacoustic simulation for prediction of noise. A synergistic process is developed that uses both the low- and high-fidelity frameworks together to build approximate models of important high-fidelity metrics as functions of certain design variables. To test the process, a 4-bladed hingeless rotor model is used as a baseline. The design variables investigated include tip geometry and spanwise twist distribution. Approximation models are built for metrics related to rotor efficiency and vibration using the results from 60+ high-fidelity (CFD/CSD) experiments and 400+ low-fidelity experiments. Optimization using the approximation models found the Pareto frontier anchor points, i.e., the design having maximum rotor efficiency and the design having minimum vibration. Various Pareto generation methods are used to find designs on the frontier between these two anchor designs. When tested in the high-fidelity framework, the Pareto anchor designs are shown to be very good designs when compared with other designs from the high-fidelity database. This provides evidence that the proposed process has merit.
Ultimately, this process can be utilized by industry rotor designers with their existing tools to bring high fidelity analysis into the preliminary design stage of rotors. In conclusion, the methods developed and documented in this thesis have made several novel contributions. First, an automated high fidelity CFD based forward flight simulation framework has been built for use in preliminary design optimization. The framework was built around an integrated, parallel processor capable CFD/CSD/AA process. Second, a novel method of building approximate models of high fidelity parameters has been developed. The method uses a combination of low and high fidelity results and combines Design of Experiments, statistical effects analysis, and aspects of approximation model management. And third, the determination of rotor blade shape variables through optimization using CFD based analysis in forward flight has been performed. This was done using the high fidelity CFD/CSD/AA framework and method mentioned above. While the low and high fidelity predictions methods used in the work still have inaccuracies that can affect the absolute levels of the results, a framework has been successfully developed and demonstrated that allows for an efficient process to improve rotor blade designs in terms of a selected choice of objective function(s). Using engineering judgment, this methodology could be applied today to investigate opportunities to improve existing designs. With improvements in the low and high fidelity prediction components that will certainly occur, this framework could become a powerful tool for future rotorcraft design work. (Abstract shortened by UMI.)
Economical Unsteady High-Fidelity Aerodynamics for Structural Optimization with a Flutter Constraint
NASA Technical Reports Server (NTRS)
Bartels, Robert E.; Stanford, Bret K.
2017-01-01
Structural optimization with a flutter constraint for a vehicle designed to fly in the transonic regime is a particularly difficult task. In this speed range, the flutter boundary is very sensitive to aerodynamic nonlinearities, typically requiring high-fidelity Navier-Stokes simulations. However, the repeated application of unsteady computational fluid dynamics to guide an aeroelastic optimization process is very computationally expensive. This expense has motivated the development of methods that incorporate aspects of the aerodynamic nonlinearity, classical tools of flutter analysis, and more recent methods of optimization. While it is possible to use doublet lattice method aerodynamics, this paper focuses on the use of an unsteady high-fidelity aerodynamic reduced order model combined with successive transformations that allows for an economical way of utilizing high-fidelity aerodynamics in the optimization process. This approach is applied to the Common Research Model wing structural design. As might be expected, the high-fidelity aerodynamics produces a heavier wing than that optimized with doublet lattice aerodynamics. It is found that the optimized lower skin of the wing using high-fidelity aerodynamics differs significantly from that using doublet lattice aerodynamics.
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
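The greedy sampling idea can be sketched independently of the reduced-basis machinery: repeatedly add the candidate input where the current posterior is most uncertain. The variance criterion and scikit-learn GP below are generic stand-ins for the paper's reduced-basis posterior.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def greedy_training_inputs(f_hi, candidates, n_select, seed_idx=0):
    """Greedy selection of high-fidelity training inputs: at each round,
    refit a GP to the points chosen so far and add the candidate where
    the posterior standard deviation is largest. Illustrative sketch."""
    chosen = [seed_idx]
    for _ in range(n_select - 1):
        X = candidates[chosen]
        y = np.array([f_hi(x) for x in X])
        gp = GaussianProcessRegressor().fit(X, y)
        _, std = gp.predict(candidates, return_std=True)
        std[chosen] = -np.inf          # never re-pick a selected point
        chosen.append(int(np.argmax(std)))
    return chosen

# Example: pick 6 of 100 candidate points for a toy 1-D "high-fidelity" model
# idx = greedy_training_inputs(lambda x: float(np.sin(4 * x[0])),
#                              np.linspace(0, 1, 100)[:, None], 6)
```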
DDDAMS-based Urban Surveillance and Crowd Control via UAVs and UGVs
2015-12-04
for crowd dynamics modeling by incorporating multi-resolution data, where a grid-based method is used to model crowd motion with UAVs’ low -resolution...information and more computational intensive (and time-consuming). Given that the deployment of fidelity selection results in simulation faces computational... low fidelity information FOV y (A) DR x (A) DR y (A) Not detected high fidelity information Table 1: Parameters for UAV and UGV for their detection
NASA Astrophysics Data System (ADS)
Horton, Scott
This research study investigated the effects of high fidelity graphics on both learning and presence, or the "sense of being there," inside a Virtual Learning Environment (VLE). Four versions of a VLE on the subject of the element mercury were created, each with a different combination of high and low fidelity polygon models and high and low fidelity shaders. A total of 76 college age (18+ years of age) participants were randomly assigned to one of the four conditions. The participants interacted with the VLE and then completed several posttest measures on learning, presence, and attitudes towards the VLE experience. Demographic information was also collected, including age, computer gameplay experience, number of virtual environments interacted with, gender and time spent in this virtual environment. The data was analyzed as a 2 x 2 between subjects ANOVA. The main effects of shader fidelity and polygon fidelity were both non-significant for both learning and all presence subscales inside the VLE. In addition, there was no significant interaction between shader fidelity and model fidelity. However, there were two significant results on the supplementary variables. First, gender was found to have a significant main effect on all the presence subscales. Females reported higher average levels of presence than their male counterparts. Second, gameplay hours, or the number of hours a participant played computer games per week, also had a significant main effect on participant score on the learning measure. The participants who reported playing 15+ hours of computer games per week, the highest amount of time in the variable, had the highest score as a group on the mercury learning measure while those participants that played 1-5 hours per week had the lowest scores.
NASA Astrophysics Data System (ADS)
Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem
2017-11-01
Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
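A compressed sketch of the pipeline's shape, an ANN surrogate trained on simulator outputs and searched by a simple genetic algorithm with a penalized dissolved-oxygen constraint, is given below. The training data, network size, penalty weight, and GA operators are all toy stand-ins; the study's surrogate is trained on CE-QUAL-W2 runs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Invented training data: a 24-hour release schedule maps to (power, min DO).
# In the study these labels would come from CE-QUAL-W2; here they are toys.
X = rng.uniform(0, 1, (500, 24))                 # 24 hourly gate settings
power = X.sum(axis=1) + rng.normal(0, 0.1, 500)
min_do = 8 - 3 * X.mean(axis=1) + rng.normal(0, 0.1, 500)

surr_power = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000).fit(X, power)
surr_do = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000).fit(X, min_do)

def fitness(pop, do_limit=5.0):
    """Surrogate power with a penalty for violating the DO constraint."""
    p = surr_power.predict(pop)
    violation = np.maximum(0.0, do_limit - surr_do.predict(pop))
    return p - 100.0 * violation

# Tiny genetic algorithm over schedules
pop = rng.uniform(0, 1, (64, 24))
for _ in range(50):
    scores = fitness(pop)
    parents = pop[np.argsort(scores)[-32:]]                  # selection
    kids = (parents[rng.integers(0, 32, size=64)] +
            parents[rng.integers(0, 32, size=64)]) / 2       # blend crossover
    pop = np.clip(kids + rng.normal(0, 0.02, kids.shape), 0, 1)  # mutation
best_schedule = pop[np.argmax(fitness(pop))]
```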
Creation of a Rapid High-Fidelity Aerodynamics Module for a Multidisciplinary Design Environment
NASA Technical Reports Server (NTRS)
Srinivasan, Muktha; Whittecar, William; Edwards, Stephen; Mavris, Dimitri N.
2012-01-01
In the traditional aerospace vehicle design process, each successive design phase is accompanied by an increment in the modeling fidelity of the disciplinary analyses being performed. This trend follows a corresponding shrinking of the design space as more and more design decisions are locked in. The correlated increase in knowledge about the design and decrease in design freedom occurs partly because increases in modeling fidelity are usually accompanied by significant increases in the computational expense of performing the analyses. When running high-fidelity analyses, it is not usually feasible to explore a large number of variations, and so design space exploration is reserved for conceptual design, and higher-fidelity analyses are run only once a specific point design has been selected to carry forward. The designs produced by this traditional process have been recognized as being limited by the uncertainty that is present early on due to the use of lower-fidelity analyses. For example, uncertainty in aerodynamics predictions produces uncertainty in trajectory optimization, which can impact overall vehicle sizing. This effect can become more significant when trajectories are being shaped by active constraints. For example, if an optimal trajectory is running up against a normal load factor constraint, inaccuracies in the aerodynamic coefficient predictions can cause a feasible trajectory to be considered infeasible, or vice versa. For this reason, a trade must always be performed between the desired fidelity and the resources available. Apart from this trade between fidelity and computational expense, it is very desirable to use higher-fidelity analyses earlier in the design process. A large body of work has been performed to this end, led by efforts in the area of surrogate modeling. In surrogate modeling, an up-front investment is made by running a high-fidelity code over a Design of Experiments (DOE); once completed, the DOE data is used to create a surrogate model, which captures the relationships between input variables and responses in regression equations. Depending on the dimensionality of the problem and the fidelity of the code for which a surrogate model is being created, the initial DOE can itself be computationally prohibitive to run. Cokriging, a modeling approach from the field of geostatistics, provides a desirable compromise between computational expense and fidelity. To do this, cokriging leverages a large body of data generated by a low-fidelity analysis, combines it with a smaller set of data from a higher-fidelity analysis, and creates a kriging surrogate model with prediction fidelity approaching that of the higher-fidelity analysis. When integrated into a multidisciplinary environment, a disciplinary analysis module employing cokriging can raise the analysis fidelity without drastically impacting the expense of design iterations. This is demonstrated through the creation of an aerodynamics analysis module in NASA's OpenMDAO framework. Aerodynamic analyses including Missile DATCOM, APAS, and USM3D are leveraged to create high-fidelity aerodynamics decks for parametric vehicle geometries, which are created in NASA's Vehicle Sketch Pad (VSP). Several trade studies are performed to examine the achieved level of model fidelity, and the overall impact on vehicle design is quantified.
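Full cokriging carries cross-covariances between fidelities; a commonly used simplified relative is the linear autoregressive model f_hi(x) ~ rho * f_lo(x) + delta(x), with the discrepancy delta modeled as a GP. The sketch below uses that simplification with invented toy models, so it illustrates the flavor of cokriging rather than the module described here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Invented fidelity pair (1-D input kept as a column vector)
f_lo = lambda x: 0.5 * np.sin(8 * np.pi * x[:, 0]) + 10 * (x[:, 0] - 0.5)
f_hi = lambda x: np.sin(8 * np.pi * x[:, 0]) + 10 * (x[:, 0] - 0.5) ** 2

x_lo = np.linspace(0, 1, 60)[:, None]   # dense low-fidelity DOE
x_hi = np.linspace(0, 1, 10)[:, None]   # sparse high-fidelity DOE

gp_lo = GaussianProcessRegressor().fit(x_lo, f_lo(x_lo))

# Least-squares scaling rho, then a GP on the remaining discrepancy
lo_at_hi = gp_lo.predict(x_hi)
rho = lo_at_hi @ f_hi(x_hi) / (lo_at_hi @ lo_at_hi)
gp_delta = GaussianProcessRegressor().fit(x_hi, f_hi(x_hi) - rho * lo_at_hi)

def predict_hi(x):
    """Two-fidelity prediction: scaled low-fidelity trend plus discrepancy."""
    return rho * gp_lo.predict(x) + gp_delta.predict(x)
```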
McCormack, Jane; Baker, Elise; Masso, Sarah; Crowe, Kathryn; McLeod, Sharynne; Wren, Yvonne; Roulstone, Sue
2017-06-01
Implementation fidelity refers to the degree to which an intervention or programme adheres to its original design. This paper examines implementation fidelity in the Sound Start Study, a clustered randomised controlled trial of computer-assisted support for children with speech sound disorders (SSD). Sixty-three children with SSD in 19 early childhood centres received computer-assisted support (Phoneme Factory Sound Sorter [PFSS] - Australian version). Educators facilitated the delivery of PFSS targeting phonological error patterns identified by a speech-language pathologist. Implementation data were gathered via (1) the computer software, which recorded when and how much intervention was completed over 9 weeks; (2) educators' records of practice sessions; and (3) scoring of fidelity (intervention procedure, competence and quality of delivery) from videos of intervention sessions. Less than one-third of children received the prescribed number of days of intervention, while approximately one-half participated in the prescribed number of intervention plays. Computer data differed from educators' data for total number of days and plays in which children participated; the degree of match was lower as data became more specific. Fidelity to intervention procedures, competency and quality of delivery was high. Implementation fidelity may impact intervention outcomes and so needs to be measured in intervention research; however, the way in which it is measured may impact on data.
Experimental magic state distillation for fault-tolerant quantum computing.
Souza, Alexandre M; Zhang, Jingfu; Ryan, Colm A; Laflamme, Raymond
2011-01-25
Any physical quantum device for quantum information processing (QIP) is subject to errors in implementation. In order to be reliable and efficient, quantum computers will need error-correcting or error-avoiding methods. Fault-tolerance achieved through quantum error correction will be an integral part of quantum computers. Of the many methods that have been discovered to implement it, a highly successful approach has been to use transversal gates and specific initial states. A critical element for its implementation is the availability of high-fidelity initial states, such as |0〉 and the 'magic state'. Here, we report an experiment, performed in a nuclear magnetic resonance (NMR) quantum processor, showing sufficient quantum control to improve the fidelity of imperfect initial magic states by distilling five of them into one with higher fidelity.
Gallo, Carlos; Pantin, Hilda; Villamar, Juan; Prado, Guillermo; Tapia, Maria; Ogihara, Mitsunori; Cruden, Gracelyn; Brown, C Hendricks
2015-09-01
Careful fidelity monitoring and feedback are critical to implementing effective interventions. A wide range of procedures exist to assess fidelity; most are derived from observational assessments (Schoenwald and Garland, Psychol Assess 25:146-156, 2013). However, these fidelity measures are resource intensive for research teams in efficacy/effectiveness trials, and are often unattainable or unmanageable for the host organization to rate when the program is implemented on a large scale. We present a first step towards automated processing of linguistic patterns in fidelity monitoring of a behavioral intervention using an innovative mixed methods approach to fidelity assessment that uses rule-based, computational linguistics to overcome major resource burdens. Data come from an effectiveness trial of the Familias Unidas intervention, an evidence-based, family-centered preventive intervention found to be efficacious in reducing conduct problems, substance use and HIV sexual risk behaviors among Hispanic youth. This computational approach focuses on "joining," which measures the quality of the working alliance of the facilitator with the family. Quantitative assessments of reliability are provided. Kappa scores between a human rater and a machine rater for the new method for measuring joining reached 0.83. Early findings suggest that this approach can reduce the high cost of fidelity measurement and the time delay between fidelity assessment and feedback to facilitators; it also has the potential for improving the quality of intervention fidelity ratings.
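The reliability statistic quoted above is Cohen's kappa, which corrects raw agreement for chance. A minimal illustration with invented labels (the study's units of analysis and coding scheme are richer):

```python
from sklearn.metrics import cohen_kappa_score

# Invented per-utterance "joining" codes from a human rater and the
# rule-based machine rater (labels are illustrative only)
human   = ["join", "join", "other", "join", "other", "join", "other", "join"]
machine = ["join", "join", "other", "join", "join",  "join", "other", "join"]

kappa = cohen_kappa_score(human, machine)  # 1.0 = perfect, 0.0 = chance level
print(f"Cohen's kappa: {kappa:.2f}")
```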
Computer image generation: Reconfigurability as a strategy in high fidelity space applications
NASA Technical Reports Server (NTRS)
Bartholomew, Michael J.
1989-01-01
The demand for realistic, high-fidelity computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is the establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high-fidelity computer image generation systems is discussed. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at the Johnson Space Center Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear; the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
2002-01-01
A high-fidelity simulation of a commercial turbofan engine has been created as part of the Numerical Propulsion System Simulation Project. The high-fidelity computer simulation utilizes computer models that were developed at NASA Glenn Research Center in cooperation with turbofan engine manufacturers. The average-passage (APNASA) Navier-Stokes based viscous flow computer code is used to simulate the 3D flow in the compressors and turbines of the advanced commercial turbofan engine. The 3D National Combustion Code (NCC) is used to simulate the flow and chemistry in the advanced aircraft combustor. The APNASA turbomachinery code and the NCC combustor code exchange boundary conditions at the interface planes at the combustor inlet and exit. This computer simulation technique can evaluate engine performance at steady operating conditions. The 3D flow models provide detailed knowledge of the airflow within the fan and compressor, the high and low pressure turbines, and the flow and chemistry within the combustor. The models simulate the performance of the engine at operating conditions that include sea level takeoff and the altitude cruise condition.
Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations, approaches that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multi Level Monte Carlo (rMLMC) method, based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss the advantages of each approach.
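The telescoping identity shared by these estimators is easy to show in miniature: estimate the coarse mean with many samples and the fine-minus-coarse correction with a few coupled pairs. The "solver" below is a toy stand-in with a resolution-dependent bias.

```python
import numpy as np

rng = np.random.default_rng(2)

def solver(xi, n_cells):
    """Toy stand-in for a transport solve at a given grid resolution:
    a crude quadrature whose bias shrinks as n_cells grows."""
    t = np.linspace(0.0, 1.0, n_cells)
    return np.exp(-xi * t).mean()

# Two-level telescoping estimator of E[Q]:
#   E[Q_fine] = E[Q_coarse] + E[Q_fine - Q_coarse]
xi_coarse = rng.exponential(1.0, 20_000)   # many cheap coarse samples
xi_pair = rng.exponential(1.0, 200)        # few coupled fine/coarse pairs

level0 = np.mean([solver(x, 8) for x in xi_coarse])
# Each correction sample uses the SAME random input at both resolutions,
# so the difference has small variance and needs few samples
level1 = np.mean([solver(x, 256) - solver(x, 8) for x in xi_pair])
mlmc_estimate = level0 + level1
```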
Practical experimental certification of computational quantum gates using a twirling procedure.
Moussa, Osama; da Silva, Marcus P; Ryan, Colm A; Laflamme, Raymond
2012-08-17
Because of the technical difficulty of building large quantum computers, it is important to be able to estimate how faithful a given implementation is to an ideal quantum computer. The common approach of completely characterizing the computation process via quantum process tomography requires an exponential amount of resources, and thus is not practical even for relatively small devices. We solve this problem by demonstrating that twirling experiments previously used to characterize the average fidelity of quantum memories efficiently can be easily adapted to estimate the average fidelity of the experimental implementation of important quantum computation processes, such as unitaries in the Clifford group, in a practical and efficient manner with applicability in current quantum devices. Using this procedure, we demonstrate state-of-the-art coherent control of an ensemble of magnetic moments of nuclear spins in a single crystal solid by implementing the encoding operation for a 3-qubit code with only a 1% degradation in average fidelity discounting preparation and measurement errors. We also highlight one of the advances that was instrumental in achieving such high fidelity control.
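For reference, the quantity such twirling protocols estimate, the average fidelity over random input states, is tied to the entanglement (process) fidelity of the noisy implementation by a standard identity (background, not a result of this paper):

```latex
% Average gate fidelity vs. entanglement (process) fidelity
% (Horodecki et al. / Nielsen), with d = 2^n the Hilbert-space dimension:
\bar{F}_{\mathrm{avg}} = \frac{d\, F_{\mathrm{pro}} + 1}{d + 1}
```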
Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.
First-Order Frameworks for Managing Models in Engineering Optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of applicability of the first-order frameworks.
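A minimal sketch of a first-order model management loop of this kind: the low-fidelity model receives an additive correction so that its value and gradient match the high-fidelity model at the current iterate, and the corrected model is minimized within a trust region. The analytic objectives and the simple accept/shrink logic below are illustrative assumptions, not the AMMO frameworks themselves.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical stand-ins for expensive/cheap analyses and their gradients.
    f_hi = lambda x: (x[0] - 1)**2 + 10 * (x[1] - x[0]**2)**2
    g_hi = lambda x: np.array([2*(x[0]-1) - 40*x[0]*(x[1]-x[0]**2),
                               20*(x[1]-x[0]**2)])
    f_lo = lambda x: (x[0] - 1)**2 + 8 * (x[1] - x[0]**2)**2   # cheaper, biased
    g_lo = lambda x: np.array([2*(x[0]-1) - 32*x[0]*(x[1]-x[0]**2),
                               16*(x[1]-x[0]**2)])

    x, radius = np.array([-1.0, 1.0]), 0.5
    for it in range(30):
        fh, gh = f_hi(x), g_hi(x)       # one high-fidelity evaluation per iter
        if np.linalg.norm(gh) < 1e-6:
            break
        bias, gdiff = fh - f_lo(x), gh - g_lo(x)
        # Additive correction enforcing zeroth- and first-order consistency at x.
        model = lambda s: f_lo(x + s) + bias + gdiff @ s
        s = minimize(model, np.zeros(2),
                     bounds=[(-radius, radius)] * 2).x   # trust-region subproblem
        rho = (fh - f_hi(x + s)) / max(fh - model(s), 1e-16)  # actual/predicted
        if rho > 0.1:                   # accept step, maybe expand the region
            x = x + s
            radius = min(2 * radius, 2.0) if rho > 0.75 else radius
        else:
            radius *= 0.5               # reject the step and shrink the region
    print("x* ~", x, "f_hi(x*) ~", f_hi(x))

Because the corrected model agrees with the high-fidelity objective to first order at each iterate, the loop inherits standard trust-region convergence behavior while spending only one high-fidelity function/gradient evaluation per iteration.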
High-Fidelity Multidisciplinary Design Optimization of Aircraft Configurations
NASA Technical Reports Server (NTRS)
Martins, Joaquim R. R. A.; Kenway, Gaetan K. W.; Burdette, David; Jonsson, Eirikur; Kennedy, Graeme J.
2017-01-01
To evaluate new airframe technologies we need design tools based on high-fidelity models that consider multidisciplinary interactions early in the design process. The overarching goal of this NRA is to develop tools that enable high-fidelity multidisciplinary design optimization of aircraft configurations, and to apply these tools to the design of high aspect ratio flexible wings. We develop a geometry engine that is capable of quickly generating conventional and unconventional aircraft configurations including the internal structure. This geometry engine features adjoint derivative computation for efficient gradient-based optimization. We also added overset capability to a computational fluid dynamics solver, complete with an adjoint implementation and semiautomatic mesh generation. We also developed an approach to constraining buffet and started the development of an approach for constraining flutter. On the applications side, we developed a new common high-fidelity model for aeroelastic studies of high aspect ratio wings. We performed optimal design trade-offs between fuel burn and aircraft weight for metal, conventional composite, and carbon nanotube composite wings. We also assessed a continuous morphing trailing edge technology applied to high aspect ratio wings. This research resulted in the publication of 26 manuscripts so far, and the developed methodologies were used in two other NRAs.
Room temperature high-fidelity holonomic single-qubit gate on a solid-state spin.
Arroyo-Camejo, Silvia; Lazariev, Andrii; Hell, Stefan W; Balasubramanian, Gopalakrishnan
2014-09-12
At its most fundamental level, circuit-based quantum computation relies on the application of controlled phase shift operations on quantum registers. While these operations are generally compromised by noise and imperfections, quantum gates based on geometric phase shifts can provide intrinsically fault-tolerant quantum computing. Here we demonstrate the high-fidelity realization of a recently proposed fast (non-adiabatic) and universal (non-Abelian) holonomic single-qubit gate, using an individual solid-state spin qubit under ambient conditions. This fault-tolerant quantum gate provides an elegant means for achieving the fidelity threshold indispensable for implementing quantum error correction protocols. Since we employ a spin qubit associated with a nitrogen-vacancy colour centre in diamond, this system is based on integrable and scalable hardware exhibiting strong analogy to current silicon technology. This quantum gate realization is a promising step towards viable, fault-tolerant quantum computing under ambient conditions.
Cluster-state quantum computing enhanced by high-fidelity generalized measurements.
Biggerstaff, D N; Kaltenbaek, R; Hamel, D R; Weihs, G; Rudolph, T; Resch, K J
2009-12-11
We introduce and implement a technique to extend the quantum computational power of cluster states by replacing some projective measurements with generalized quantum measurements (POVMs). As an experimental demonstration we fully realize an arbitrary three-qubit cluster computation by implementing a tunable linear-optical POVM, as well as fast active feedforward, on a two-qubit photonic cluster state. Over 206 different computations, the average output fidelity is 0.9832 +/- 0.0002; furthermore, the error contribution from our POVM device and feedforward is only of O(10^-3), less than some recent thresholds for fault-tolerant cluster computing.
Morrison, Janet D; Becker, Heather; Stuifbergen, Alexa K
2017-12-01
Careful consideration of intervention fidelity is critical to establishing the validity and reliability of research findings, yet such reports are often lacking in the research literature. It is imperative that intervention fidelity be methodically evaluated and reported to promote the translation of effective interventions into sound evidence-based practice. The purpose of this article is to explore strategies used to promote intervention fidelity, incorporating examples from a multisite clinical trial that illustrate the National Institutes of Health Behavior Change Consortium's 5 domains for recommended treatment practices: (1) study design, (2) facilitator training, (3) intervention delivery, (4) intervention receipt, and (5) intervention enactment. A multisite randomized clinical trial testing the efficacy of a computer-assisted cognitive rehabilitation intervention for adults with multiple sclerosis is used to illustrate strategies promoting intervention fidelity. Data derived from audiotapes of intervention classes, audits of computer exercises completed by participants, participant class attendance, and goal attainment scaling suggested relatively high fidelity to the intervention protocol. This study illustrates how to report intervention fidelity in the literature guided by best practice strategies, which may serve to promote fidelity monitoring and reporting in future studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
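The reliability notion lends itself to a compact illustration: a sample is reliable when the surrogate value plus or minus its error bound cannot change which side of the limit state the sample falls on, so only unreliable samples need the expensive model. The sketch below uses a Taylor surrogate of exp(x) with its analytic remainder bound as a stand-in for an adjoint-based error estimate; the adaptive surrogate refinement described in the abstract is omitted.

    import numpy as np

    rng = np.random.default_rng(1)
    threshold = 1.5

    f_hi = lambda x: np.exp(x)                 # hypothetical expensive model
    surrogate = lambda x: 1 + x + 0.5 * x**2   # cheap Taylor surrogate
    # Analytic Taylor remainder bound, standing in for an adjoint error estimate.
    err_bound = lambda x: np.abs(x)**3 / 6 * np.exp(np.maximum(x, 0))

    x = rng.normal(size=20_000)
    s, e = surrogate(x), err_bound(x)

    # Reliable: the surrogate +/- its error bound stays on one side of the
    # limit state, so the surrogate's verdict is provably correct there.
    reliable = (s - e > threshold) | (s + e < threshold)
    unrel = ~reliable

    indicator = (s > threshold).astype(float)        # surrogate verdict everywhere
    indicator[unrel] = f_hi(x[unrel]) > threshold    # expensive model where needed

    print("P(f > 1.5) estimate :", indicator.mean())
    print("high-fidelity calls :", unrel.sum(), "of", x.size)

    # Robust bounds with no fallback: count unreliable samples as all-out/all-in.
    p_lo = (s - e > threshold).mean()
    p_hi = 1 - (s + e < threshold).mean()
    print("bounds:", p_lo, p_hi)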
High-Fidelity Simulation-Driven Model Development for Coarse-Grained Computational Fluid Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.
Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For a full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide us with high-fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for simulation of long transient scenarios in nuclear accidents despite extraordinary advances in high performance scientific computing over the past decades. The major issue is the inability to parallelize the transient computation in time, which makes the number of time steps required by high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high-fidelity simulation-driven approach to model sub-grid scale (SGS) effects in Coarse-Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as in containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation proves to achieve a significant improvement to the prediction of the steady-state temperature distribution through the fluid layer.
High Fidelity Simulations of Unsteady Flow through Turbopumps and Flowliners
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Kwak, Dochan; Chan, William; Housman, Jeff
2006-01-01
High fidelity computations were carried out to analyze the orbiter LH2 feedline flowliner. Computations were performed on the Columbia platform, which is a 10,240-processor supercluster consisting of 20 Altix nodes with 512 processors each. Various computational models were used to characterize the unsteady flow features in the turbopump, including the orbiter Low-Pressure-Fuel-Turbopump (LPFTP) inducer, the orbiter manifold and a test article used to represent the manifold. Unsteady flow originating from the orbiter LPFTP inducer is one of the major contributors to the high frequency cyclic loading that results in high cycle fatigue damage to the gimbal flowliners just upstream of the LPFTP. The flow fields for the orbiter manifold and representative test article are computed and analyzed for similarities and differences. The incompressible Navier-Stokes flow solver INS3D, based on the artificial compressibility method, was used to compute the flow of liquid hydrogen in each test article.
Degrees of reality: airway anatomy of high-fidelity human patient simulators and airway trainers.
Schebesta, Karl; Hüpfl, Michael; Rössler, Bernhard; Ringl, Helmut; Müller, Michael P; Kimberger, Oliver
2012-06-01
Human patient simulators and airway training manikins are widely used to train airway management skills to medical professionals. Furthermore, these patient simulators are employed as standardized "patients" to evaluate airway devices. However, little is known about how realistic these patient simulators and airway-training manikins really are. This trial aimed to evaluate the upper airway anatomy of four high-fidelity patient simulators and two airway trainers in comparison with actual patients by means of radiographic measurements. The volume of the pharyngeal airspace was the primary outcome parameter. Computed tomography scans of 20 adult trauma patients without head or neck injuries were compared with computed tomography scans of four high-fidelity patient simulators and two airway trainers. By using 14 predefined distances, two cross-sectional areas and three volume parameters of the upper airway, the manikins' similarity to a human patient was assessed. The pharyngeal airspace of all manikins differed significantly from the patients' pharyngeal airspace. The HPS Human Patient Simulator (METI®, Sarasota, FL) was the most realistic high-fidelity patient simulator (6/19 [32%] of all parameters were within the 95% CI of human airway measurements). The airway anatomy of four high-fidelity patient simulators and two airway trainers does not reflect the upper airway anatomy of actual patients. This finding may impact airway training and confound comparative airway device studies.
High Fidelity Simulations for Unsteady Flow Through the Orbiter LH2 Feedline Flowliner
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Kwak, Dochan; Chan, William; Housman, Jeffrey
2005-01-01
High fidelity computations were carried out to analyze the orbiter LH2 feedline flowliner. Various computational models were used to characterize the unsteady flow features in the turbopump, including the orbiter Low-Pressure-Fuel-Turbopump (LPFTP) inducer, the orbiter manifold and a test article used to represent the manifold. Unsteady flow originating from the orbiter LPFTP inducer is one of the major contributors to the high frequency cyclic loading that results in high cycle fatigue damage to the gimbal flowliners just upstream of the LPFTP. The flow fields for the orbiter manifold and representative test article are computed and analyzed for similarities and differences. An incompressible Navier-Stokes flow solver INS3D, based on the artificial compressibility method, was used to compute the flow of liquid hydrogen in each test article.
NASA Astrophysics Data System (ADS)
Liu, Jun; Dong, Ping; Zhou, Jian; Cao, Zhuo-Liang
2017-05-01
A scheme for implementing the non-adiabatic holonomic quantum computation in decoherence-free subspaces is proposed with the interactions between a microcavity and quantum dots. A universal set of quantum gates can be constructed on the encoded logical qubits with high fidelities. The current scheme can suppress both local and collective noises, which is very important for achieving universal quantum computation. Discussions about the gate fidelities with the experimental parameters show that our schemes can be implemented in current experimental technology. Therefore, our scenario offers a method for universal and robust solid-state quantum computation.
Implementing a strand of a scalable fault-tolerant quantum computing fabric.
Chow, Jerry M; Gambetta, Jay M; Magesan, Easwar; Abraham, David W; Cross, Andrew W; Johnson, B R; Masluk, Nicholas A; Ryan, Colm A; Smolin, John A; Srinivasan, Srikanth J; Steffen, M
2014-06-24
With favourable error thresholds and requiring only nearest-neighbour interactions on a lattice, the surface code is an error-correcting code that has garnered considerable attention. At the heart of this code is the ability to perform a low-weight parity measurement of local code qubits. Here we demonstrate high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. With high-fidelity gates, we generate entanglement distributed across three superconducting qubits in a lattice where each code qubit is coupled to two bus resonators. Via high-fidelity measurement of the syndrome qubit, we deterministically entangle the code qubits in either an even or odd parity Bell state, conditioned on the syndrome qubit state. Finally, to fully characterize this parity readout, we develop a measurement tomography protocol. The lattice presented naturally extends to larger networks of qubits, outlining a path towards fault-tolerant quantum computing.
Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turinsky, Paul
This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to the utilization of several models of physical phenomena which differ in prediction fidelity. If the highest-fidelity model is judged to always provide or exceed the desired fidelity, then, if one can determine the difference in a Quantity of Interest (QoI) between the highest-fidelity model and lower-fidelity models, one could utilize the lowest-fidelity model that still delivers the QoI to the desired accuracy. Assuming lower-fidelity models require less computational resources, computational efficiency can be realized in this manner, provided the QoI value can be accurately and efficiently evaluated. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI, by convoluting the GPT solution with the residual of the highest-fidelity model determined using the solution from lower-fidelity models. Specifically, a reactor core neutronics problem and a thermal-hydraulics problem were studied to develop and utilize AMoR. The highest-fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower-fidelity neutronics model was based upon the point kinetics equations along with utilization of a prolongation operator to determine the 3D space-time, two-group flux. The highest-fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat-generating fuel rod. The Homogeneous Equilibrium Mixture (HEM) model was used for the fluid, and the Finite Difference Method was applied to both the coolant and fuel pin energy conservation equations. The lower-fidelity thermal-hydraulic model was based upon the same equations as used for the highest-fidelity model but now with coarse spatial meshing, corrected somewhat by employing effective fuel heat conduction values. The effectiveness of switching between the highest-fidelity model and the lower-fidelity model as a function of time was assessed using the neutronics problem. Based upon work completed to date, one concludes that the time switching is effective in annealing out differences between the highest- and lower-fidelity solutions. The effectiveness of using a lower-fidelity GPT solution, along with a prolongation operator, to estimate the QoI was also assessed. The utilization of a lower-fidelity GPT solution was done in an attempt to avoid the high computational burden associated with solving for the highest-fidelity GPT solution. Based upon work completed to date, one concludes that the lower-fidelity adjoint solution is not sufficiently accurate with regard to estimating the QoI; however, a formulation has been revealed that may provide a path for addressing this shortcoming.
An Automatic Medium to High Fidelity Low-Thrust Global Trajectory Toolchain; EMTG-GMAT
NASA Technical Reports Server (NTRS)
Beeson, Ryne T.; Englander, Jacob A.; Hughes, Steven P.; Schadegg, Maximillian
2015-01-01
Solving the global optimization, low-thrust, multiple-flyby interplanetary trajectory problem with high-fidelity dynamical models requires an unreasonable amount of computational resources. A better approach, and one that is demonstrated in this paper, is a multi-step process whereby the aforementioned problem is first solved at lower fidelity and that solution is used as an initial guess for a higher-fidelity solver. The framework presented in this work uses two tools developed by NASA Goddard Space Flight Center: the Evolutionary Mission Trajectory Generator (EMTG) and the General Mission Analysis Tool (GMAT). EMTG is a medium to medium-high fidelity low-thrust interplanetary global optimization solver, which now has the capability to automatically generate GMAT script files for seeding a high-fidelity solution using GMAT's local optimization capabilities. A discussion of the dynamical models as well as thruster and power modeling for both EMTG and GMAT is given in this paper. Current capabilities are demonstrated with examples that highlight the toolchain's ability to efficiently solve the difficult low-thrust global optimization problem with little human intervention.
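The low-to-high-fidelity handoff is a general pattern that can be sketched independently of the specific astrodynamics models: run a global optimizer on the cheap objective, then hand its best point to a gradient-based local optimizer on the expensive objective. The two analytic objectives below are hypothetical stand-ins for the EMTG and GMAT force models.

    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    # f_lo is a cheap, slightly shifted version of the expensive multimodal
    # objective f_hi (trajectory landscapes are similarly multimodal).
    f_hi = lambda v: np.sin(3 * v[0]) * np.cos(2 * v[1]) + 0.1 * (v @ v)
    f_lo = lambda v: np.sin(3 * v[0] + 0.05) * np.cos(2 * v[1] - 0.05) + 0.1 * (v @ v)

    bounds = [(-3, 3), (-3, 3)]

    # Stage 1: global search on the cheap model (EMTG's role in the toolchain).
    seed = differential_evolution(f_lo, bounds, seed=0, tol=1e-8).x

    # Stage 2: local gradient-based refinement on the expensive model,
    # started from the low-fidelity optimum (GMAT's role).
    refined = minimize(f_hi, seed, method="L-BFGS-B", bounds=bounds)
    print("low-fidelity seed :", seed)
    print("high-fidelity opt :", refined.x, refined.fun)

The design rationale is the same as in the paper: the expensive model is never asked to explore globally, only to polish a basin the cheap model has already located.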
A comparison of select image-compression algorithms for an electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
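The transform method is classically a block-transform coder: transform each image block, quantize the coefficients (zeroing most of them), and invert the transform to reconstruct. The toy sketch below applies an 8x8 DCT with a single quantization step to a synthetic image; the 5.3:1 ratio quoted in the abstract comes from the authors' implementation, so the numbers printed here are only illustrative.

    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(0)
    # Hypothetical 8-bit test image (a real camera frame would be used in practice).
    img = (128 + 60 * np.sin(np.arange(256)[:, None] / 9)
           * np.cos(np.arange(256)[None, :] / 7)
           + rng.normal(0, 4, (256, 256)))
    img = img.clip(0, 255).round()

    q = 24.0     # quantization step: coarser -> higher ratio, lower fidelity
    recon = np.empty_like(img)
    nonzero = 0
    for i in range(0, 256, 8):
        for j in range(0, 256, 8):
            block = img[i:i+8, j:j+8]
            coeff = np.round(dctn(block, norm="ortho") / q)  # transform + quantize
            nonzero += np.count_nonzero(coeff)
            recon[i:i+8, j:j+8] = idctn(coeff * q, norm="ortho")

    # Crude ratio: 8 bits/pixel raw vs ~12 bits per retained coefficient.
    ratio = (img.size * 8) / max(nonzero * 12, 1)
    mse = np.mean((img - recon) ** 2)
    print(f"compression ~{ratio:.1f}:1, PSNR = {10*np.log10(255**2/mse):.1f} dB")

Sweeping q traces out exactly the fidelity-versus-ratio trade-off the study evaluates.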
High-Fidelity Computational Aerodynamics of the Elytron 4S UAV
NASA Technical Reports Server (NTRS)
Ventura Diaz, Patricia; Yoon, Seokkwan; Theodore, Colin R.
2018-01-01
High-fidelity Computational Fluid Dynamics (CFD) have been carried out for the Elytron 4S Unmanned Aerial Vehicle (UAV), also known as the converticopter "proto12". It is the scaled wind tunnel model of the Elytron 4S, an Urban Air Mobility (UAM) concept, a tilt-wing, box-wing rotorcraft capable of Vertical Take-Off and Landing (VTOL). The three-dimensional unsteady Navier-Stokes equations are solved on overset grids employing high-order accurate schemes, dual-time stepping, and a hybrid turbulence model using NASA's CFD code OVERFLOW. The Elytron 4S UAV has been simulated in airplane mode and in helicopter mode.
A Comparative Study of High and Low Fidelity Fan Models for Turbofan Engine System Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1991-01-01
In this paper, a heterogeneous propulsion system simulation method is presented. The method is based on the formulation of a cycle model of a gas turbine engine. The model includes the nonlinear characteristics of the engine components via use of empirical data. The potential to simulate the entire engine operation on a computer without the aid of data is demonstrated by numerically generating "performance maps" for a fan component using two flow models of varying fidelity. The suitability of the fan models was evaluated by comparing the computed performance with experimental data. A discussion of the potential benefits and/or difficulties in connecting simulation solutions of differing fidelity is given.
Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.
2014-11-23
This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.
NASA Technical Reports Server (NTRS)
Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo
2015-01-01
Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.
Experimental fault-tolerant universal quantum gates with solid-state spins under ambient conditions
Rong, Xing; Geng, Jianpei; Shi, Fazhan; Liu, Ying; Xu, Kebiao; Ma, Wenchao; Kong, Fei; Jiang, Zhen; Wu, Yang; Du, Jiangfeng
2015-01-01
Quantum computation provides great speedup over its classical counterpart for certain problems. One of the key challenges for quantum computation is to realize precise control of the quantum system in the presence of noise. Control of spin-qubits in solids with the accuracy required by fault-tolerant quantum computation under ambient conditions remains elusive. Here, we quantitatively characterize the sources of noise during quantum gate operation and demonstrate strategies to suppress their effects. A universal set of logic gates in a nitrogen-vacancy centre in diamond is reported with an average single-qubit gate fidelity of 0.999952 and a two-qubit gate fidelity of 0.992. These high control fidelities have been achieved at room temperature in naturally abundant 13C diamond via composite pulses and an optimized control method. PMID:26602456
High-Fidelity Design of Multimodal Restorative Interventions in Gulf War Illness
2017-10-01
Report for Award Number W81XWH-15-1-0582; the contents should not be construed as an official Department of the Army position, policy or decision unless so designated by other documentation. Cited reference: Bockmayr A, Klarner H, Siebert H. Time series dependent analysis of unparametrized Thomas networks. IEEE/ACM Transactions on Computational Biology and...
Sauer, Juergen; Sonderegger, Andreas
2009-07-01
An empirical study examined the impact of prototype fidelity on user behaviour, subjective user evaluation and emotion. The independent factors of prototype fidelity (paper prototype, computer prototype, fully operational appliance) and aesthetics of design (high vs. moderate) were varied in a between-subjects design. The 60 participants of the experiment were asked to complete two typical tasks of mobile phone usage: sending a text message and suppressing a phone number. Both performance data and a number of subjective measures were recorded. The results suggested that task completion time may be overestimated when a computer prototype is being used. Furthermore, users appeared to compensate for deficiencies in aesthetic design by overrating the aesthetic qualities of reduced fidelity prototypes. Finally, user emotions were more positively affected by the operation of the more attractive mobile phone than by the less appealing one.
Quantum Computing Architectural Design
NASA Astrophysics Data System (ADS)
West, Jacob; Simms, Geoffrey; Gyure, Mark
2006-03-01
Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.
Nursing Simulation: A Review of the Past 40 Years
ERIC Educational Resources Information Center
Nehring, Wendy M.; Lashley, Felissa R.
2009-01-01
Simulation, in its many forms, has been a part of nursing education and practice for many years. The use of games, computer-assisted instruction, standardized patients, virtual reality, and low-fidelity to high-fidelity mannequins has appeared in the past 40 years, whereas anatomical models, partial task trainers, and role playing were used…
Optimization of a solid-state electron spin qubit using Gate Set Tomography
Dehollain, Juan P.; Muhonen, Juha T.; Blume-Kohout, Robin J.; ...
2016-10-13
State-of-the-art qubit systems are reaching the gate fidelities required for scalable quantum computation architectures. Further improvements in the fidelity of quantum gates demand characterization and benchmarking protocols that are efficient, reliable and extremely accurate. Ideally, a benchmarking protocol should also provide information on how to rectify residual errors. Gate Set Tomography (GST) is one such protocol, designed to give detailed characterization of as-built qubits. We implemented GST on a high-fidelity electron-spin qubit confined by a single 31P atom in 28Si. The results reveal systematic errors that a randomized benchmarking analysis could measure but not identify, whereas GST indicated the need for improved calibration of the length of the control pulses. After introducing this modification, we measured a new benchmark average gate fidelity of 99.942(8)%, an improvement on the previous value of 99.90(2)%. Furthermore, GST revealed high levels of non-Markovian noise in the system, which will need to be understood and addressed when the qubit is used within a fault-tolerant quantum computation scheme.
Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings
NASA Astrophysics Data System (ADS)
Mader, Charles Alexander
A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.
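The regression route to stability derivatives can be illustrated compactly: sample the moment coefficient during a forced oscillation, then least-squares fit it against the motion states to recover static and dynamic derivatives at once. The sketch below fabricates the "CFD" samples from a known linear model plus noise so the fitted values can be checked; in practice a quasi-unsteady time-spectral solution would supply the samples.

    import numpy as np

    rng = np.random.default_rng(4)
    # Hypothetical pitching-moment samples during forced sinusoidal pitching.
    t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    alpha = np.deg2rad(1.0) * np.sin(3 * t)      # pitch angle history
    alpha_dot = np.gradient(alpha, t)            # pitch rate history
    cm_alpha_true, cm_q_true, cm0 = -0.8, -3.2, 0.02
    cm = (cm0 + cm_alpha_true * alpha + cm_q_true * alpha_dot
          + rng.normal(0, 1e-4, t.size))         # noisy "solver" output

    # Fit Cm ~ Cm0 + Cm_alpha*alpha + Cm_q*alpha_dot: one unsteady solution
    # yields both the static and the dynamic derivative.
    A = np.column_stack([np.ones_like(t), alpha, alpha_dot])
    cm0_fit, cm_alpha, cm_q = np.linalg.lstsq(A, cm, rcond=None)[0]
    print(f"Cm_alpha = {cm_alpha:.3f} (true {cm_alpha_true}), "
          f"Cm_q = {cm_q:.3f} (true {cm_q_true})")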
Compact Single Site Resolution Cold Atom Experiment for Adiabatic Quantum Computing
2016-02-03
The goal of our scientific investigation is to demonstrate high fidelity and fast atom-atom entanglement between physically separated and optically addressed... Specifically, we will design and construct a set of compact single atom traps with integrated optics, suitable for heralded entanglement and loophole...
Superconducting quantum circuits at the surface code threshold for fault tolerance.
Barends, R; Kelly, J; Megrant, A; Veitia, A; Sank, D; Jeffrey, E; White, T C; Mutus, J; Fowler, A G; Campbell, B; Chen, Y; Chen, Z; Chiaro, B; Dunsworth, A; Neill, C; O'Malley, P; Roushan, P; Vainsencher, A; Wenner, J; Korotkov, A N; Cleland, A N; Martinis, John M
2014-04-24
A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.
High-fidelity quantum gates on quantum-dot-confined electron spins in low-Q optical microcavities
NASA Astrophysics Data System (ADS)
Li, Tao; Gao, Jian-Cun; Deng, Fu-Guo; Long, Gui-Lu
2018-04-01
We propose some high-fidelity quantum circuits for quantum computing on electron spins of quantum dots (QD) embedded in low-Q optical microcavities, including the two-qubit controlled-NOT gate and the multiple-target-qubit controlled-NOT gate. The fidelities of both quantum gates can, in principle, be robust to imperfections involved in a practical input-output process of a single photon by converting the infidelity into a heralded error. Furthermore, the influence of two different decay channels is detailed. By decreasing the quality factor of the present microcavity, we can largely increase the efficiencies of these quantum gates while their high fidelities remain unaffected. This proposal also has another advantage regarding its experimental feasibility, in that both quantum gates can work faithfully even when the QD-cavity systems are non-identical, which is of particular importance in current semiconductor QD technology.
Advances in High-Fidelity Multi-Physics Simulation Techniques
2008-01-01
A predictor-corrector method is used to advance the solution in time. [Figure 17: Typical 40 x 50 grid] ... advanced parallel computing platforms. The motivation to develop high-fidelity algorithms derives from considerations in various areas of current...
Vincent, Mary Anne; Sheriff, Susan; Mellott, Susan
2015-02-01
High-fidelity simulation has become a growing educational modality among institutions of higher learning ever since the Institute of Medicine recommended that it be used to improve patient safety in 2000. However, there is limited research on the effect of high-fidelity simulation on psychomotor clinical performance improvement of undergraduate nursing students being evaluated by experts using reliable and valid appraisal instruments. The purpose of this integrative review and meta-analysis is to explore what researchers have established about the impact of high-fidelity simulation on improving the psychomotor clinical performance of undergraduate nursing students. Only eight of the 1120 references met inclusion criteria. A meta-analysis using Hedges' g to compute the effect size and direction of impact yielded a range of -0.26 to +3.39. A positive effect was shown in seven of eight studies; however, there were five different research designs and six unique appraisal instruments used among these studies. More research is necessary to determine if high-fidelity simulation improves psychomotor clinical performance in undergraduate nursing students. Nursing programs from multiple sites having a standardized curriculum and using the same appraisal instruments with established reliability and validity are ideal for this work.
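For reference, Hedges' g is Cohen's d (the difference in group means divided by the pooled standard deviation) multiplied by a small-sample bias correction. A short sketch with hypothetical score data:

    import numpy as np

    def hedges_g(treat, control):
        """Bias-corrected standardized mean difference (Hedges' g)."""
        n1, n2 = len(treat), len(control)
        # Pooled standard deviation from the two group variances.
        sp = np.sqrt(((n1 - 1) * np.var(treat, ddof=1) +
                      (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2))
        d = (np.mean(treat) - np.mean(control)) / sp    # Cohen's d
        j = 1 - 3 / (4 * (n1 + n2) - 9)                 # small-sample correction
        return j * d

    rng = np.random.default_rng(5)
    sim_group = rng.normal(78, 10, 30)   # hypothetical simulation-trained scores
    ctl_group = rng.normal(72, 10, 30)   # hypothetical control scores
    print("Hedges' g =", round(hedges_g(sim_group, ctl_group), 2))

A positive g, as in seven of the eight reviewed studies, indicates better performance in the simulation group.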
High-Fidelity Roadway Modeling and Simulation
NASA Technical Reports Server (NTRS)
Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit
2010-01-01
Roads are an essential feature in our daily lives. With the advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditional road models were generated by professional artists manually using modeling software tools such as Maya and 3ds Max. This approach requires both highly specialized and sophisticated skills and massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, reducing the tedious manual editing needed for road modeling dramatically. But most existing procedural modeling methods for road generation put emphasis on the visual effects of the generated roads, not the geometrical and architectural fidelity. This limitation seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals, and as a result, the generated roads can support not only general applications such as games and simulations in which roads are used as 3D assets, but also demanding civil engineering applications, which require accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and surrounding environment. Then the proposed method generates in real time 3D roads that have both high visual and geometrical fidelities. This paper discusses in detail the procedures that convert 2D roads specified in shapefiles into 3D roads, along with the civil engineering road design principles used. The proposed method can be used in many applications that have stringent requirements on high precision 3D models, such as driving simulations and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.
Fast, high-fidelity readout of multiple qubits
NASA Astrophysics Data System (ADS)
Bronn, N. T.; Abdo, B.; Inoue, K.; Lekuch, S.; Córcoles, A. D.; Hertzberg, J. B.; Takita, M.; Bishop, L. S.; Gambetta, J. M.; Chow, J. M.
2017-05-01
Quantum computing requires a delicate balance between coupling quantum systems to external instruments for control and readout, while providing enough isolation from sources of decoherence. Circuit quantum electrodynamics has been a successful method for protecting superconducting qubits, while maintaining the ability to perform readout [1, 2]. Here, we discuss improvements to this method that allow for fast, high-fidelity readout. Specifically, the integration of a Purcell filter, which allows us to increase the resonator bandwidth for fast readout, the incorporation of a Josephson parametric converter, which enables us to perform high-fidelity readout by amplifying the readout signal while adding the minimum amount of noise required by quantum mechanics, and custom control electronics, which provide us with the capability of fast decision and control.
High-fidelity cluster state generation for ultracold atoms in an optical lattice.
Inaba, Kensuke; Tokunaga, Yuuki; Tamaki, Kiyoshi; Igeta, Kazuhiro; Yamashita, Makoto
2014-03-21
We propose a method for generating high-fidelity multipartite spin entanglement of ultracold atoms in an optical lattice in a short operation time and in a scalable manner, which is suitable for measurement-based quantum computation. To perform the desired operations based on the perturbative spin-spin interactions, we propose to actively utilize the extra degrees of freedom (DOFs) usually neglected in the perturbative treatment but included in the Hubbard Hamiltonian of atoms, such as (pseudo-)charge and orbital DOFs. Our method simultaneously achieves high fidelity, short operation time, and scalability by overcoming the following fundamental problem: enhancing the interaction strength to shorten the operation time breaks the perturbative condition of the interaction and inevitably induces unwanted correlations among the spin and extra DOFs.
GIS Data Based Automatic High-Fidelity 3D Road Network Modeling
NASA Technical Reports Server (NTRS)
Wang, Jie; Shen, Yuzhong
2011-01-01
3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce high-fidelity 3D road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design is then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.
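A flavor of rule-based road-surface generation can be sketched in a few lines: sweep a cross-section along a 2-D centerline, offsetting left and right edge points along the local normal and dropping them by a fixed cross slope for drainage. Superelevation, grade, and the interchange handling described above are omitted, and the width and slope values are assumed.

    import numpy as np

    def road_mesh(centerline, half_width=3.5, cross_slope=0.02):
        """Sweep a crowned cross-section along a 2-D centerline polyline."""
        c = np.asarray(centerline, float)
        tang = np.gradient(c, axis=0)                    # local tangents
        tang /= np.linalg.norm(tang, axis=1, keepdims=True)
        normal = np.column_stack([-tang[:, 1], tang[:, 0]])  # left-hand normal
        drop = half_width * cross_slope                  # edge height drop
        left = np.column_stack([c + half_width * normal,
                                np.full(len(c), -drop)])
        crown = np.column_stack([c, np.zeros(len(c))])   # centerline at crown
        right = np.column_stack([c - half_width * normal,
                                 np.full(len(c), -drop)])
        return left, crown, right                        # triangle-strip-ready

    # Example: quarter-circle centerline sampled from hypothetical GIS data.
    theta = np.linspace(0, np.pi / 2, 20)
    l, m, r = road_mesh(np.column_stack([50 * np.cos(theta),
                                         50 * np.sin(theta)]))
    print(l.shape, m.shape, r.shape)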
NASA Astrophysics Data System (ADS)
Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro
2018-06-01
A multi-fidelity optimization technique based on an efficient global optimization process with a hybrid surrogate model is investigated for solving real-world design problems. The surrogate constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to select additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations alone. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
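A sketch of the hybrid-surrogate idea on a hypothetical 1-D model pair: a radial basis function fits the global trend from many cheap samples, kriging (a Gaussian process) models the high-fidelity residual from a few expensive samples, and expected improvement over the combined prediction selects the next expensive evaluation. The kernels, sample counts, and model functions are illustrative assumptions.

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF as RBFKernel

    f_lo = lambda x: np.sin(8 * x) + x                         # cheap model
    f_hi = lambda x: np.sin(8 * x) + x + 0.3 * np.cos(3 * x)   # expensive model

    x_lo = np.linspace(0, 1, 25)[:, None]        # many cheap samples
    x_hi = np.array([[0.1], [0.45], [0.9]])      # few expensive samples

    # Global trend from the low-fidelity data via a radial basis function.
    trend = RBFInterpolator(x_lo, f_lo(x_lo[:, 0]))

    # Local deviation (high-fidelity minus trend) modelled by kriging (a GP).
    resid = f_hi(x_hi[:, 0]) - trend(x_hi)
    gp = GaussianProcessRegressor(kernel=RBFKernel(0.2),
                                  normalize_y=True).fit(x_hi, resid)

    def expected_improvement(xq, y_best):
        mu_r, sd = gp.predict(xq, return_std=True)
        mu = trend(xq) + mu_r                    # hybrid prediction
        z = (y_best - mu) / np.maximum(sd, 1e-12)
        return (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

    xq = np.linspace(0, 1, 501)[:, None]
    ei = expected_improvement(xq, f_hi(x_hi[:, 0]).min())
    print("next high-fidelity sample at x =", xq[ei.argmax(), 0])

The expected-improvement criterion balances exploiting the current best prediction against exploring regions where the kriging variance is large, which is what keeps the count of expensive evaluations low.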
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Derlaga, Joseph M.; Stoll, Alex M.
2017-01-01
A variety of tools, from fundamental to high order, have been used to better understand applications of distributed electric propulsion to aid the wing and propulsion system design of the Leading Edge Asynchronous Propulsion Technology (LEAPTech) project and the X-57 Maxwell airplane. Three high-fidelity, Navier-Stokes computational fluid dynamics codes used during the project with results presented here are FUN3D, STAR-CCM+, and OVERFLOW. These codes employ various turbulence models to predict fully turbulent and transitional flow. Results from these codes are compared for two distributed electric propulsion configurations: the wing tested at NASA Armstrong on the Hybrid-Electric Integrated Systems Testbed truck, and the wing designed for the X-57 Maxwell airplane. Results from these computational tools for the high-lift wing tested on the Hybrid-Electric Integrated Systems Testbed truck and the X-57 high-lift wing compare reasonably well. The goal of the X-57 wing and distributed electric propulsion system design, achieving or exceeding the required maximum lift coefficient C_L = 3.95 for the stall speed target, was confirmed with all of the computational codes.
Highway Traffic Simulations on Multi-Processor Computers
DOT National Transportation Integrated Search
1997-01-01
A computer model has been developed to simulate highway traffic for various degrees of automation with a high degree of fidelity in regard to driver control and vehicle characteristics. The model simulates vehicle maneuvering in a multi-lane highway ...
State resolved vibrational relaxation modeling for strongly nonequilibrium flows
NASA Astrophysics Data System (ADS)
Boyd, Iain D.; Josyula, Eswar
2011-05-01
Vibrational relaxation is an important physical process in hypersonic flows. Activation of the vibrational mode affects the fundamental thermodynamic properties and finite rate relaxation can reduce the degree of dissociation of a gas. Low fidelity models of vibrational activation employ a relaxation time to capture the process at a macroscopic level. High fidelity, state-resolved models have been developed for use in continuum gas dynamics simulations based on computational fluid dynamics (CFD). By comparison, such models are not as common for use with the direct simulation Monte Carlo (DSMC) method. In this study, a high fidelity, state-resolved vibrational relaxation model is developed for the DSMC technique. The model is based on the forced harmonic oscillator approach in which multi-quantum transitions may become dominant at high temperature. Results obtained for integrated rate coefficients from the DSMC model are consistent with the corresponding CFD model. Comparison of relaxation results obtained with the high-fidelity DSMC model shows significantly less excitation of upper vibrational levels in comparison to the standard, lower fidelity DSMC vibrational relaxation model. Application of the new DSMC model to a Mach 7 normal shock wave in carbon monoxide provides better agreement with experimental measurements than the standard DSMC relaxation model.
Spin qubit transport in a double quantum dot
NASA Astrophysics Data System (ADS)
Zhao, Xinyu; Hu, Xuedong
Long distance spin communication is a crucial ingredient to scalable quantum computer architectures based on electron spin qubits. One way to transfer spin information over a long distance on chip is via electron transport. Here we study the transport of an electron spin qubit in a double quantum dot by tuning the interdot detuning voltage. We identify a parameter regime where spin relaxation hot-spots can be avoided and high-fidelity spin transport is possible. Within this parameter space, the spin transfer fidelity is determined by the operation speed and the applied magnetic field. In particular, near zero detuning, a proper choice of operation speed is essential to high fidelity. In addition, we also investigate the modification of the effective g-factor by the interdot detuning, which could lead to a phase error between the spin up and down states. The results presented in this work could provide useful guidance for experimentally achieving high-fidelity spin qubit transport. We acknowledge financial support from the US ARO via Grant W911NF1210609.
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which results in expensive computational costs. Variable-fidelity approximation-based design optimization approaches can realize effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and they have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, called a nested design, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
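The nesting requirement simply means that the high-fidelity sample sites form a subset of the low-fidelity sites, so every expensive run has a cheap partner at the same input. The sketch below builds a random Latin hypercube for the low-fidelity model and extracts a high-fidelity subset with a greedy maximin heuristic; the successive-local-enumeration and harmony-search constructions in the article are more sophisticated than this stand-in.

    import numpy as np
    from scipy.spatial.distance import pdist, cdist

    rng = np.random.default_rng(2)

    def latin_hypercube(n, d):
        # Standard LHS: one point per stratum in each dimension, randomly paired.
        u = np.column_stack([rng.permutation(n) for _ in range(d)])
        return (u + rng.uniform(size=(n, d))) / n

    x_lo = latin_hypercube(40, 2)        # low-fidelity design

    # Nested high-fidelity design: a SUBSET of the low-fidelity points, chosen
    # greedily to maximize the minimum pairwise distance (a maximin heuristic).
    chosen = [int(np.argmin(np.sum((x_lo - 0.5) ** 2, axis=1)))]  # start near centre
    while len(chosen) < 10:
        d = cdist(x_lo, x_lo[chosen]).min(axis=1)   # distance to chosen set
        d[chosen] = -1                              # never re-pick a chosen point
        chosen.append(int(d.argmax()))
    x_hi = x_lo[chosen]
    print("min pairwise distance (hi):", pdist(x_hi).min())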
Role of HPC in Advancing Computational Aeroelasticity
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
2004-01-01
On behalf of the High Performance Computing and Modernization Program (HPCMP) and the NASA Advanced Supercomputing Division (NAS), a study was conducted to assess the role of supercomputers in computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web-based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.
Performance evaluation of objective quality metrics for HDR image compression
NASA Astrophysics Data System (ADS)
Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic
2014-09-01
Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists in computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values, but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim to provide a better understanding of the limits and the potential of this approach by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
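The perceptually encoded variant of PSNR is mechanically simple: map luminance through a perceptually motivated transfer function before computing the metric, so that errors in dark regions are not swamped by errors in bright ones. The sketch below uses a plain log-luminance mapping as a stand-in; published PU encodings are fit to contrast-sensitivity data, so the printed numbers are only illustrative.

    import numpy as np

    def psnr(a, b, peak):
        mse = np.mean((a - b) ** 2)
        return 10 * np.log10(peak ** 2 / mse)

    rng = np.random.default_rng(3)
    # Hypothetical HDR luminance map (cd/m^2) and a distorted version of it.
    ref = np.exp(rng.uniform(np.log(0.05), np.log(4000), (256, 256)))
    dist = ref * np.exp(rng.normal(0, 0.03, ref.shape))  # multiplicative error

    # Linear-domain PSNR is dominated by bright pixels and ignores the eye's
    # roughly logarithmic sensitivity, so it misranks HDR distortions.
    print("linear PSNR :", psnr(ref, dist, 4000))

    # Crude perceptual encoding: log-luminance mapped to [0, 255].
    lo, hi = np.log10(0.05), np.log10(4000)
    enc = lambda y: 255 * (np.log10(np.clip(y, 0.05, 4000)) - lo) / (hi - lo)
    print("encoded PSNR:", psnr(enc(ref), enc(dist), 255))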
Testing the Relation between Fidelity of Implementation and Student Outcomes in Math
ERIC Educational Resources Information Center
Crawford, Lindy; Carpenter, Dick M., II; Wilson, Mary T.; Schmeister, Megan; McDonald, Marilee
2012-01-01
The relation between fidelity of implementation and student outcomes in a computer-based middle school mathematics curriculum was measured empirically. Participants included 485 students and 23 teachers from 11 public middle schools across seven states. Implementation fidelity was defined using two constructs: fidelity to structure and fidelity to…
High-Fidelity Single-Shot Toffoli Gate via Quantum Control.
Zahedinejad, Ehsan; Ghosh, Joydip; Sanders, Barry C
2015-05-22
A single-shot Toffoli, or controlled-controlled-not, gate is desirable for classical and quantum information processing. The Toffoli gate alone is universal for reversible computing and, accompanied by the Hadamard gate, forms a universal gate set for quantum computing. The Toffoli gate is also a key ingredient for (nontopological) quantum error correction. Currently Toffoli gates are achieved by decomposing into sequentially implemented single- and two-qubit gates, which require much longer times and yield lower overall fidelities compared to a single-shot implementation. We develop a quantum-control procedure to construct a single-shot Toffoli gate for three nearest-neighbor-coupled superconducting transmon systems such that the fidelity is 99.9% and is as fast as an entangling two-qubit gate under the same realistic conditions. The gate is achieved by a nongreedy quantum control procedure using our enhanced version of the differential evolution algorithm.
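To make the control-optimization idea concrete, the sketch below uses stock differential evolution (not the authors' enhanced variant) to shape a piecewise-constant drive on a toy two-level system so that the resulting propagator approximates a NOT gate. The Hamiltonian, segment count, and bounds are illustrative assumptions, far simpler than the three-transmon Toffoli problem.

    import numpy as np
    from scipy.linalg import expm
    from scipy.optimize import differential_evolution

    sx = np.array([[0, 1], [1, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    target = sx                                  # a NOT gate as the toy target
    n_seg, T = 8, 1.0                            # piecewise-constant controls

    def gate_infidelity(amps):
        # Propagate U under a fixed 0.5*sz drift plus a controllable sx drive
        # held constant on each of the n_seg time slices.
        U = np.eye(2, dtype=complex)
        for a in amps:
            H = 0.5 * sz + a * sx
            U = expm(-1j * H * T / n_seg) @ U
        fid = abs(np.trace(target.conj().T @ U)) / 2
        return 1 - fid

    res = differential_evolution(gate_infidelity, [(-10, 10)] * n_seg,
                                 seed=0, maxiter=300, tol=1e-10)
    print("gate fidelity ~", 1 - res.fun)

Differential evolution needs no gradients, which is one reason it suits control landscapes where analytic derivatives of the propagator are awkward to obtain.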
Band-selective shaped pulse for high fidelity quantum control in diamond
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Yan-Chun; Xing, Jian; Liu, Gang-Qin
High fidelity quantum control of qubits is crucially important for realistic quantum computing, and it becomes more challenging when there are inevitable interactions between qubits. We introduce a band-selective shaped pulse, the refocusing BURP (REBURP) pulse, to cope with these problems. The electron spin of nitrogen-vacancy centers in diamond is flipped with high fidelity by the REBURP pulse. In contrast with traditional rectangular pulses, the shaped pulse has an almost equal excitation effect in a sharply edged region (in the frequency domain). So the three sublevels of the host 14N nuclear spin can be flipped accurately and simultaneously, while unwanted excitation of other sublevels (e.g., of a nearby 13C nuclear spin) is well suppressed. Our scheme can be used for various applications such as quantum metrology, quantum sensing, and quantum information processing.
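Band selectivity can be visualized by propagating a two-level spin under the shaped drive and recording the flip probability as a function of detuning. The sketch below contrasts a hard rectangular pi pulse with a smooth Gaussian envelope of equal rotation area; the Gaussian is a simple stand-in for the REBURP family, which additionally flattens the in-band response and sharpens the band edges.

    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)

    def flip_prob(envelope, detuning, T=1.0, n=200):
        # Piecewise-constant propagation of a two-level spin under a shaped drive.
        dt = T / n
        t = (np.arange(n) + 0.5) * dt
        U = np.eye(2, dtype=complex)
        for amp in envelope(t):
            H = np.pi * amp / 2 * sx + np.pi * detuning * sz
            U = expm(-1j * H * dt) @ U
        return abs(U[1, 0]) ** 2

    rect = lambda t: np.full_like(t, 1.0)                     # hard pi pulse
    gauss = lambda t: np.exp(-0.5 * ((t - 0.5) / 0.15) ** 2)  # smooth shape
    t_grid = (np.arange(200) + 0.5) / 200
    gauss_n = lambda t: gauss(t) / gauss(t_grid).mean()       # same pi area

    for f in [0.0, 1.0, 2.0, 4.0, 8.0]:       # detuning in units of 1/T
        print(f"detuning {f:3.0f}/T: rect {flip_prob(rect, f):.3f}  "
              f"shaped {flip_prob(gauss_n, f):.3f}")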
NASA Technical Reports Server (NTRS)
Drozda, Tomasz G.; Quinlan, Jesse R.; Pisciuneri, Patrick H.; Yilmaz, S. Levent
2012-01-01
Significant progress has been made in the development of subgrid scale (SGS) closures based on a filtered density function (FDF) for large eddy simulations (LES) of turbulent reacting flows. The FDF is the counterpart of the probability density function (PDF) method, which has proven effective in Reynolds averaged simulations (RAS). However, while systematic progress is being made advancing the FDF models for relatively simple flows and lab-scale flames, the application of these methods in complex geometries and high speed, wall-bounded flows with shocks remains a challenge. The key difficulties are the significant computational cost associated with solving the FDF transport equation and numerically stiff finite rate chemistry. For LES/FDF methods to make a more significant impact in practical applications, a pragmatic approach must be taken that significantly reduces the computational cost while maintaining high modeling fidelity. An example of one such ongoing effort is at the NASA Langley Research Center, where the first-generation FDF models, namely the scalar filtered mass density function (SFMDF), are being implemented into VULCAN, a production-quality RAS and LES solver widely used for design of high speed propulsion flowpaths. This effort leverages internal and external collaborations to reduce the overall computational cost of high fidelity simulations in VULCAN by: implementing high order methods that allow reduction in the total number of computational cells without loss in accuracy; implementing the first generation of high fidelity scalar PDF/FDF models applicable to high-speed compressible flows; coupling RAS/PDF and LES/FDF into a hybrid framework to efficiently and accurately model the effects of combustion in the vicinity of the walls; developing efficient Lagrangian particle tracking algorithms to support robust solutions of the FDF equations for high speed flows; and utilizing finite rate chemistry parametrization, such as flamelet models, to reduce the number of transported reactive species and remove numerical stiffness. This paper briefly introduces the SFMDF model (highlighting key benefits and challenges), and discusses particle tracking for flows with shocks, the hybrid coupled RAS/PDF and LES/FDF model, the flamelet generated manifolds (FGM) model, and the Irregularly Portioned Lagrangian Monte Carlo Finite Difference (IPLMCFD) methodology for scalable simulation of high-speed reacting compressible flows.
High fidelity simulations of infrared imagery with animated characters
NASA Astrophysics Data System (ADS)
Näsström, F.; Persson, A.; Bergström, D.; Berggren, J.; Hedström, J.; Allvar, J.; Karlsson, M.
2012-06-01
High fidelity simulations of IR signatures and imagery tend to be slow and do not have effective support for animation of characters. Simplified rendering methods based on computer graphics techniques can be used to overcome these limitations. This paper presents a method to combine these tools and produce simulated high fidelity thermal IR data of animated people in terrain. Infrared signatures for human characters have been calculated using RadThermIR. To handle multiple character models, these calculations use a simplified material model for the anatomy and clothing. Weather and temperature conditions match the IR-texture used in the terrain model. The calculated signatures are applied to the animated 3D characters that, together with the terrain model, are used to produce high fidelity IR imagery of people or crowds. For high level animation control and crowd simulations, HLAS (High Level Animation System) has been developed. There are tools available to create and visualize skeleton-based animations, but tools that allow control of the animated characters on a higher level, e.g. for crowd simulation, are usually expensive and closed source. We need the flexibility of HLAS to add animation into an HLA-enabled sensor system simulation framework.
A laboratory breadboard system for dual-arm teleoperation
NASA Technical Reports Server (NTRS)
Bejczy, A. K.; Szakaly, Z.; Kim, W. S.
1990-01-01
The computing architecture of a novel dual-arm teleoperation system is described. The novelty of this system is that: (1) the master arm is not a replica of the slave arm; it is unspecific to any manipulator and can be used for the control of various robot arms with software modifications; and (2) the force feedback to the general purpose master arm is derived from force-torque sensor data originating from the slave hand. The computing architecture of this breadboard system is a fully synchronized pipeline with unique methods for data handling, communication and mathematical transformations. The computing system is modular, thus inherently extendable. The local control loops at both sites operate at 100 Hz rate, and the end-to-end bilateral (force-reflecting) control loop operates at 200 Hz rate, each loop without interpolation. This provides high-fidelity control. This end-to-end system elevates teleoperation to a new level of capabilities via the use of sensors, microprocessors, novel electronics, and real-time graphics displays. A description is given of a graphic simulation system connected to the dual-arm teleoperation breadboard system. High-fidelity graphic simulation of a telerobot (called Phantom Robot) is used for preview and predictive displays for planning and for real-time control under several seconds communication time delay conditions. High fidelity graphic simulation is obtained by using appropriate calibration techniques.
Costello, John P; Olivieri, Laura J; Krieger, Axel; Thabit, Omar; Marshall, M Blair; Yoo, Shi-Joon; Kim, Peter C; Jonas, Richard A; Nath, Dilip S
2014-07-01
The current educational approach for teaching congenital heart disease (CHD) anatomy to students involves instructional tools and techniques that have significant limitations. This study sought to assess the feasibility of utilizing present-day three-dimensional (3D) printing technology to create high-fidelity synthetic heart models with ventricular septal defect (VSD) lesions and applying these models to a novel, simulation-based educational curriculum for premedical and medical students. Archived, de-identified magnetic resonance images of five common VSD subtypes were obtained. These cardiac images were then segmented and built into 3D computer-aided design models using Mimics Innovation Suite software. An Objet500 Connex 3D printer was subsequently utilized to print a high-fidelity heart model for each VSD subtype. Next, a simulation-based educational curriculum using these heart models was developed and implemented in the instruction of 29 premedical and medical students. Assessment of this curriculum was undertaken with Likert-type questionnaires. High-fidelity VSD models were successfully created utilizing magnetic resonance imaging data and 3D printing. Following instruction with these high-fidelity models, all students reported significant improvement in knowledge acquisition (P < .0001), knowledge reporting (P < .0001), and structural conceptualization (P < .0001) of VSDs. It is feasible to use present-day 3D printing technology to create high-fidelity heart models with complex intracardiac defects. Furthermore, this tool forms the foundation for an innovative, simulation-based educational approach to teach students about CHD and creates a novel opportunity to stimulate their interest in this field.
Video Conferencing: The Next Wave for International Business Communication.
ERIC Educational Resources Information Center
Sondak, Norman E.; Sondak, Eileen M.
This paper suggests that desktop computer-based video conferencing, with high-fidelity sound and group software support, is emerging as a major communications option. Briefly addressed are the following critical factors that are propelling the computer-based video conferencing revolution: (1) widespread availability of desktop computers…
High-Fidelity Quantum Logic Gates Using Trapped-Ion Hyperfine Qubits.
Ballance, C J; Harty, T P; Linke, N M; Sepiol, M A; Lucas, D M
2016-08-05
We demonstrate laser-driven two-qubit and single-qubit logic gates with respective fidelities 99.9(1)% and 99.9934(3)%, significantly above the ≈99% minimum threshold level required for fault-tolerant quantum computation, using qubits stored in hyperfine ground states of calcium-43 ions held in a room-temperature trap. We study the speed-fidelity trade-off for the two-qubit gate, for gate times between 3.8 μs and 520 μs, and develop a theoretical error model which is consistent with the data and which allows us to identify the principal technical sources of infidelity.
High-fidelity spin measurement on the nitrogen-vacancy center
NASA Astrophysics Data System (ADS)
Hanks, Michael; Trupke, Michael; Schmiedmayer, Jörg; Munro, William J.; Nemoto, Kae
2017-10-01
Nitrogen-vacancy (NV) centers in diamond are versatile candidates for many quantum information processing tasks, ranging from quantum imaging and sensing through to quantum communication and fault-tolerant quantum computers. Critical to almost every potential application is an efficient mechanism for the high fidelity readout of the state of the electronic and nuclear spins. Typically such readout has been achieved through an optically resonant fluorescence measurement, but the presence of decay through a meta-stable state will limit its efficiency to the order of 99%. While this is good enough for many applications, it is insufficient for large scale quantum networks and fault-tolerant computational tasks. Here we explore an alternative approach based on dipole induced transparency (state-dependent reflection) in an NV center cavity QED system, using the most recent knowledge of the NV center’s parameters to determine its feasibility, including the decay channels through the meta-stable subspace and photon ionization. We find that single-shot measurements above fault-tolerant thresholds should be available in the strong coupling regime for a wide range of cavity-center cooperativities, using a majority voting approach utilizing single photon detection. Furthermore, extremely high fidelity measurements are possible using weak optical pulses.
A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers
NASA Technical Reports Server (NTRS)
Hatay, Ferhat F.; Jespersen, Dennis C.; Guruswamy, Guru P.; Rizk, Yehia M.; Byun, Chansup; Gee, Ken; VanDalsem, William R. (Technical Monitor)
1997-01-01
The integration of high-fidelity Computational Fluid Dynamics (CFD) analysis tools with the industrial design process benefits greatly from robust implementations that are portable across a wide range of computer architectures. In the present work, a hybrid domain-decomposition and parallelization concept was developed and implemented into the widely used NASA multi-block CFD packages ENSAERO and OVERFLOW. The new parallel solver concept, PENS (Parallel Euler Navier-Stokes Solver), employs both fine and coarse granularity in data partitioning as well as data coalescing to obtain the desired load-balance characteristics on the available computer platforms. This multi-level parallelism implementation itself introduces no changes to the numerical results, hence the original fidelity of the packages is identically preserved. The present implementation uses the Message Passing Interface (MPI) library for interprocessor message passing and memory accessing. By choosing an appropriate combination of the available partitioning and coalescing capabilities only during the execution stage, the PENS solver becomes adaptable to different computer architectures, from shared-memory to distributed-memory platforms with varying degrees of parallelism. The PENS implementation on the IBM SP2 distributed-memory environment at the NASA Ames Research Center obtains 85 percent scalable parallel performance using fine-grain partitioning of single-block CFD domains on up to 128 wide computational nodes. Multi-block CFD simulations of complete aircraft configurations achieve 75 percent of perfect load-balanced execution using data coalescing and the two levels of parallelism. The SGI PowerChallenge, SGI Origin 2000, and a cluster of workstations are the other platforms on which the robustness of the implementation is tested. The performance behavior on the other computer platforms with a variety of realistic problems will be included as this ongoing study progresses.
High-Fidelity Preservation of Quantum Information During Trapped-Ion Transport
NASA Astrophysics Data System (ADS)
Kaufmann, Peter; Gloger, Timm F.; Kaufmann, Delia; Johanning, Michael; Wunderlich, Christof
2018-01-01
A promising scheme for building scalable quantum simulators and computers is the synthesis of a scalable system using interconnected subsystems. A prerequisite for this approach is the ability to faithfully transfer quantum information between subsystems. With trapped atomic ions, this can be realized by transporting ions with quantum information encoded into their internal states. Here, we measure with high precision the fidelity of quantum information encoded into hyperfine states of a
Optimal control of fast and high-fidelity quantum state transfer in spin-1/2 chains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiong-Peng; Shao, Bin, E-mail: sbin610@bit.edu.cn; Hu, Shuai
Spin chains are promising candidates for quantum communication and computation. Using quantum optimal control (OC) theory based on the Krotov method, we present a protocol to perform quantum state transfer with fast speed and high fidelity by manipulating only the boundary spins in a quantum spin-1/2 chain. The achieved speed is about one order of magnitude faster than is possible with Lyapunov control for comparable fidelities. Additionally, the transfer speed has a fundamental limit for OC beyond which optimization is not possible. The controls are exerted only on the couplings between the boundary spins and their neighbors, so that the scheme has good scalability. We also demonstrate that the resulting OC scheme is robust against disorder in the chain.
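A minimal numerical sketch of this kind of protocol (illustrative, not the paper's Krotov implementation): an XX spin-1/2 chain is propagated in its single-excitation subspace, only the two boundary couplings are time-dependent, and a crude finite-difference gradient ascent stands in for Krotov's update. Chain length, pulse discretization, and total time are assumptions.

```python
import numpy as np
from scipy.linalg import expm

N, steps, T = 6, 16, 20.0           # sites, control slices, total time (1/J)
dt = T / steps
J_bulk = 1.0                        # fixed interior couplings

def propagator(ctrl):
    """ctrl: (steps, 2) boundary couplings J_1(t) and J_{N-1}(t)."""
    U = np.eye(N, dtype=complex)
    for j0, j1 in ctrl:
        J = np.full(N - 1, J_bulk)
        J[0], J[-1] = j0, j1
        H = np.diag(J, 1) + np.diag(J, -1)   # XX chain, 1-excitation sector
        U = expm(-1j * H * dt) @ U
    return U

def fidelity(ctrl):                 # population transferred end to end
    return abs(propagator(ctrl)[-1, 0]) ** 2

rng = np.random.default_rng(0)
ctrl = rng.uniform(0.2, 1.0, size=(steps, 2))
eps, lr = 1e-4, 0.5
for _ in range(80):                 # crude gradient ascent, stand-in for Krotov
    grad = np.zeros_like(ctrl)
    for idx in np.ndindex(*ctrl.shape):
        d = np.zeros_like(ctrl); d[idx] = eps
        grad[idx] = (fidelity(ctrl + d) - fidelity(ctrl - d)) / (2 * eps)
    ctrl += lr * grad
print("transfer fidelity:", fidelity(ctrl))
```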
Bowman, D; Harte, T L; Chardonnet, V; De Groot, C; Denny, S J; Le Goc, G; Anderson, M; Ireland, P; Cassettari, D; Bruce, G D
2017-05-15
We demonstrate simultaneous control of both the phase and amplitude of light using a conjugate gradient minimisation-based hologram calculation technique and a single phase-only spatial light modulator (SLM). A cost function, which incorporates the inner product of the light field with a chosen target field within a defined measure region, is efficiently minimised to create high fidelity patterns in the Fourier plane of the SLM. A fidelity of F = 0.999997 is achieved for a pattern resembling an LG10 mode with a calculated light-usage efficiency of 41.5%. Possible applications of our method in optical trapping and ultracold atoms are presented and we show uncorrected experimental realisation of our patterns with F = 0.97 and 7.8% light efficiency.
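A minimal sketch of this kind of cost-function minimisation (illustrative grid size, target spot, and illumination; scipy's conjugate-gradient routine stands in for the authors' implementation): the cost 1 - |<target|field>|^2 over a measure region in the SLM's Fourier plane is minimised over the phase-only hologram, with an analytic gradient.

```python
import numpy as np
from scipy.optimize import minimize

n = 32
y, x = np.mgrid[-1:1:1j*n, -1:1:1j*n]
A = np.exp(-(x**2 + y**2) / 0.5)             # incident beam amplitude on SLM
A /= np.linalg.norm(A)                       # unit norm, so |overlap| <= 1
spot = np.exp(-((x - 0.3)**2 + (y + 0.2)**2) / 0.01)   # target intensity spot
mask = spot > 1e-3                           # measure region around the spot
w = np.conj(spot) * mask
w /= np.linalg.norm(spot[mask])              # unit-norm target in the region
c = np.fft.fft2(w, norm="ortho")             # so that <w, fft(u)> == sum(u*c)

def cost_grad(phi):
    u = A * np.exp(1j * phi.reshape(n, n))   # phase-only SLM field
    o = np.sum(u * c)                        # overlap with target field
    grad = 2 * np.imag(np.conj(o) * u * c)   # d(1 - |o|^2)/d(phi), exact
    return 1 - abs(o) ** 2, grad.ravel()

res = minimize(cost_grad, np.zeros(n * n), jac=True, method="CG")
print("fidelity:", 1 - res.fun)
```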
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
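A minimal sketch of the first concept (illustrative shape functions, not the paper's parameterization): design parameters define analytic perturbations added to baseline grid nodes, so the same function deforms a CFD or finite element grid of any topology, and sensitivities follow by differentiating the perturbation alone.

```python
# Toy "perturb the grid, not the geometry" deformation: camber and twist
# parameters perturb any cloud of surface nodes.
import numpy as np

def deform(points, camber, twist):
    """points: (N, 3) baseline nodes of any grid (CFD or FE); returns a copy."""
    x, y, z = points.T
    chord = x.max() - x.min()
    xi = (x - x.min()) / chord                  # normalized chordwise coord
    z = z + camber * 4.0 * xi * (1.0 - xi)      # parabolic camber bump
    span = np.ptp(y) or 1.0
    theta = twist * (y - y.min()) / span        # linear spanwise twist
    xq = x - (x.min() + 0.25 * chord)           # rotate about quarter chord
    x_new = x.min() + 0.25 * chord + xq * np.cos(theta) - z * np.sin(theta)
    z_new = xq * np.sin(theta) + z * np.cos(theta)
    return np.column_stack([x_new, y, z_new])

nodes = np.random.default_rng(0).random((1000, 3))      # stand-in grid
moved = deform(nodes, camber=0.05, twist=np.radians(3))
# Sensitivities come from differentiating the perturbation alone:
dshape = (deform(nodes, 0.05 + 1e-6, np.radians(3)) - moved) / 1e-6
print(moved.shape, float(dshape.max()))
```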
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminate plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutland, Christopher J.
2009-04-26
The Terascale High-Fidelity Simulations of Turbulent Combustion (TSTC) project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of the approach is direct numerical simulation (DNS) featuring the highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for development of sub-models needed in device-level simulations. Under this component of the TSTC program the simulation code named S3D, developed and shared with coworkers at Sandia National Laboratories, has been enhanced with new numerical algorithms and physical models to provide predictive capabilities for turbulent liquid fuel spray dynamics. Major accomplishments include improved fundamental understanding of mixing and auto-ignition in multi-phase turbulent reactant mixtures and turbulent fuel injection spray jets.
Zaari, Ryan R; Brown, Alex
2011-07-28
The importance of the ro-vibrational state energies on the ability to produce high fidelity binary shaped laser pulses for quantum logic gates is investigated. The single frequency 2-qubit ACNOT(1) and double frequency 2-qubit NOT(2) quantum gates are used as test cases to examine this behaviour. A range of diatomics is sampled. The laser pulses are optimized using a genetic algorithm for binary (two amplitude and two phase parameter) variation on a discretized frequency spectrum. The resulting trends in the fidelities were attributed to the intrinsic molecular properties and not the choice of method: a discretized frequency spectrum with genetic algorithm optimization. This is verified by using other common laser pulse optimization methods (including iterative optimal control theory), which result in the same qualitative trends in fidelity. The results differ from other studies that used vibrational state energies only. Moreover, appropriate choice of diatomic (relative ro-vibrational state arrangement) is critical for producing high fidelity optimized quantum logic gates. It is also suggested that global phase alignment imposes a significant restriction on obtaining high fidelity regions within the parameter search space. Overall, this indicates a complexity in the ability to provide appropriate binary laser pulse control of diatomics for molecular quantum computing.
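A toy sketch of the binary pulse-shaping search (illustrative assumptions throughout: a generic two-level system stands in for the molecular ro-vibrational structure, and all physical parameters are invented): each frequency bin takes one of two amplitudes and one of two phases, and a simple genetic algorithm maximizes the fidelity of the resulting propagator with a NOT gate.

```python
import numpy as np

rng = np.random.default_rng(1)
nf, T, nt = 8, 10.0, 200
t = np.linspace(0, T, nt)
freqs = 2 * np.pi * np.arange(1, nf + 1) / T
AMPS, PHASES = (0.1, 0.4), (0.0, np.pi)       # the two allowed values each
sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
target = sx                                    # NOT gate

def pulse(bits):                               # bits: (nf, 2) in {0, 1}
    a = np.take(AMPS, bits[:, 0]); p = np.take(PHASES, bits[:, 1])
    return (a[:, None] * np.cos(freqs[:, None] * t + p[:, None])).sum(0)

def fidelity(bits):
    E, dt = pulse(bits), t[1] - t[0]
    U = np.eye(2, dtype=complex)
    for Ek in E:                               # exact 2x2 step: H^2 = h^2 * I
        H = 0.5 * sz + Ek * sx
        h = np.sqrt(0.25 + Ek * Ek)
        U = (np.cos(h * dt) * np.eye(2) - 1j * np.sin(h * dt) / h * H) @ U
    return abs(np.trace(target.conj().T @ U) / 2) ** 2

pop = rng.integers(0, 2, size=(24, nf, 2))
for gen in range(40):
    fit = np.array([fidelity(ind) for ind in pop])
    parents = pop[np.argsort(fit)[::-1][:12]].copy()   # truncation selection
    kids = parents.copy()
    cut = rng.integers(1, nf, size=12)
    for i in range(0, 12, 2):                  # one-point crossover on pairs
        kids[i, cut[i]:], kids[i + 1, cut[i]:] = (parents[i + 1, cut[i]:].copy(),
                                                  parents[i, cut[i]:].copy())
    kids[rng.random(kids.shape) < 0.05] ^= 1   # bit-flip mutation
    pop = np.concatenate([parents, kids])
print("best fidelity:", max(fidelity(ind) for ind in pop))
```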
Steigerwald, Sarah N.; Park, Jason; Hardy, Krista M.; Gillman, Lawrence; Vergis, Ashley S.
2015-01-01
Background Considerable resources have been invested in both low- and high-fidelity simulators in surgical training. The purpose of this study was to investigate if the Fundamentals of Laparoscopic Surgery (FLS, low-fidelity box trainer) and LapVR (high-fidelity virtual reality) training systems correlate with operative performance on the Global Operative Assessment of Laparoscopic Skills (GOALS) global rating scale using a porcine cholecystectomy model in a novice surgical group with minimal laparoscopic experience. Methods Fourteen postgraduate year 1 surgical residents with minimal laparoscopic experience performed tasks from the FLS program and the LapVR simulator as well as a live porcine laparoscopic cholecystectomy. Performance was evaluated using standardized FLS metrics, automatic computer evaluations, and a validated global rating scale. Results Overall, FLS score did not show an association with GOALS global rating scale score on the porcine cholecystectomy. None of the five LapVR task scores were significantly associated with GOALS score on the porcine cholecystectomy. Conclusions Neither the low-fidelity box trainer nor the high-fidelity virtual simulator demonstrated significant correlation with GOALS operative scores. These findings offer caution against the use of these modalities for brief assessments of novice surgical trainees, especially for predictive or selection purposes. PMID:26641071
Unbiased multi-fidelity estimate of failure probability of a free plane jet
NASA Astrophysics Data System (ADS)
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
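A minimal sketch of the combined framework on synthetic stand-in models (not the jet problem; the models, threshold, and sample sizes are invented for illustration): each low-fidelity surrogate builds a biasing density, each biasing density yields an unbiased importance-sampling estimator that needs only a few high-fidelity evaluations, and the competing estimators are fused with inverse-variance weights (optimal for independent unbiased estimators).

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn, norm

rng = np.random.default_rng(0)
f_hi = lambda x: x[:, 0] + x[:, 1]                  # "expensive" model
lofi = [lambda x: x[:, 0] + 0.9 * x[:, 1],          # two cheap surrogates
        lambda x: 1.1 * x[:, 0] + x[:, 1] + 0.1]
tau = 4.5                                           # failure: f > tau (rare)
nominal = mvn(mean=np.zeros(2), cov=np.eye(2))

estimates, variances = [], []
for g in lofi:
    xs = nominal.rvs(200_000, random_state=rng)     # cheap exploration
    fail = xs[g(xs) > tau]                          # surrogate failure set
    bias = mvn(mean=fail.mean(0), cov=np.cov(fail.T) + 1e-6 * np.eye(2))
    x = bias.rvs(2_000, random_state=rng)           # few high-fidelity runs
    w = (f_hi(x) > tau) * nominal.pdf(x) / bias.pdf(x)
    estimates.append(w.mean())
    variances.append(w.var(ddof=1) / len(w))        # variance of the mean

wts = 1 / np.array(variances); wts /= wts.sum()     # inverse-variance fusion
print("fused estimate:", np.dot(wts, estimates))
print("reference     :", norm.sf(tau, scale=np.sqrt(2)))  # x0+x1 ~ N(0, 2)
```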
A Transfer of Training Study of Control Loader Dynamics
NASA Technical Reports Server (NTRS)
Cardullo, Frank M.; Stanco, Anthony A.; Kelly, Lon C.; Houck, Jacob A.; Grube, Richard C.
2011-01-01
The control inceptor used in a simulated vehicle is an important part of maintaining the fidelity of a simulation. The force feedback provided by the control inceptor gives the operator important cues for maintaining adequate performance. The dynamics of a control inceptor are typically based on a second-order spring-mass-damper system with damping, force gradient, breakout force, and natural frequency parameters. Changing these parameters can have a great effect on pilot or driver control of the vehicle. The neuromuscular system has a very important role in manipulating the control inceptor within a vehicle. Many studies by McRuer, Aponso, and Hess have dealt with modeling the neuromuscular system and quantifying the effects of a high fidelity control loader as compared to a low fidelity control loader. Humans are adaptive in nature, and their control behavior changes based on different control loader dynamics; they will change their control behavior to maintain tracking bandwidth and minimize tracking error. This paper reports on a quasi-transfer of training experiment which was performed at the NASA Langley Research Center. The study used a high fidelity control loader and a low fidelity control loader: subjects trained in both simulations and then were transferred to the high fidelity control loader simulation. The parameters for the high fidelity control loader were determined from the literature; the low fidelity control loader parameters were found through testing of a simple computer joystick. A disturbance compensatory task is employed, implemented with a simple out-the-window horizon display and a disturbance consisting of a sum of sines. The task consists of the subject compensating for the disturbance on the roll angle of the aircraft. The vehicle dynamics are represented as 1/s and 1/s². The subject tries to maintain level flight throughout the experiment. The subjects are non-pilots, to remove any effects of pilot experience. First, this paper discusses the implementation of the disturbance compensation task. Second, the high and low fidelity parameters used within the experiment are presented. Finally, an explanation of results from the experiments is presented.
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
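A minimal numpy sketch of the hierarchical-kriging idea (not ASM-IHK itself: the adaptive sampling loop is omitted, and the 1-D test functions and fixed kernel hyperparameters are assumptions): the low-fidelity prediction, passed through a polynomial trend, supplies the mean of a Gaussian process fit to a handful of high-fidelity samples.

```python
import numpy as np

f_lo = lambda x: 0.5 * np.sin(8 * x) + 0.2 * x          # cheap model
f_hi = lambda x: np.sin(8 * x) + 0.3 * x**2             # expensive model

X = np.array([0.05, 0.3, 0.55, 0.8, 0.95])              # few HF samples
y = f_hi(X)
F = np.vander(f_lo(X), 3)                               # trend: poly in f_lo
beta, *_ = np.linalg.lstsq(F, y, rcond=None)
r = y - F @ beta                                        # residual for the GP

k = lambda a, b: np.exp(-0.5 * np.subtract.outer(a, b) ** 2 / 0.1 ** 2)
K = k(X, X) + 1e-8 * np.eye(len(X))                     # RBF kernel + jitter
alpha = np.linalg.solve(K, r)

def predict(xs):
    trend = np.vander(f_lo(xs), 3) @ beta               # LF-informed mean
    return trend + k(xs, X) @ alpha                     # + GP correction

xs = np.linspace(0, 1, 5)
print(np.c_[predict(xs), f_hi(xs)])                     # prediction vs truth
```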
Electromechanical quantum simulators
NASA Astrophysics Data System (ADS)
Tacchino, F.; Chiesa, A.; LaHaye, M. D.; Carretta, S.; Gerace, D.
2018-06-01
Digital quantum simulators are among the most appealing applications of a quantum computer. Here we propose a universal, scalable, and integrated quantum computing platform based on tunable nonlinear electromechanical nano-oscillators. It is shown that very high operational fidelities for single- and two-qubit gates can be achieved in a minimal architecture, where qubits are encoded in the anharmonic vibrational modes of mechanical nanoresonators, whose effective coupling is mediated by virtual fluctuations of an intermediate superconducting artificial atom. An effective scheme to induce large single-phonon nonlinearities in nanoelectromechanical devices is explicitly discussed, thus opening the route to experimental investigation in this direction. Finally, we explicitly show the very high fidelities that can be reached for the digital quantum simulation of model Hamiltonians, by using realistic experimental parameters in state-of-the-art devices, and considering the transverse field Ising model as a paradigmatic example.
Wind Farm Flow Modeling using an Input-Output Reduced-Order Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter
Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost but retain the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
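A minimal sketch of that construction (with a synthetic stable linear system standing in for the wind-farm LES, and illustrative dimensions): snapshots are collected, POD modes are extracted with the SVD, and reduced operators A, B are identified by least squares on the projected snapshot pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, steps = 200, 6, 400                     # full order, ROM order, samples
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(rng.uniform(0.5, 0.98, n)) @ Q.T   # stable full operator
B = rng.standard_normal((n, 1))

X = np.zeros((n, steps + 1))
U = rng.standard_normal((1, steps))           # exogenous input signal
for k in range(steps):                        # snapshot generation
    X[:, k + 1] = A @ X[:, k] + (B @ U[:, k:k + 1]).ravel()

Phi = np.linalg.svd(X, full_matrices=False)[0][:, :r]  # POD basis
Z = Phi.T @ X                                 # reduced coordinates
lhs = Z[:, 1:]                                # z_{k+1}
rhs = np.vstack([Z[:, :-1], U])               # [z_k; u_k]
AB = lhs @ np.linalg.pinv(rhs)                # least-squares system ID
Ar, Br = AB[:, :r], AB[:, r:]
print("one-step ROM error:",
      np.linalg.norm(lhs - Ar @ Z[:, :-1] - Br @ U) / np.linalg.norm(lhs))
```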
Pouthier, Vincent
2012-11-07
A communication protocol is proposed in which vibron-mediated quantum state transfer takes place in a molecular lattice. We consider two distant molecular groups grafted on each side of the lattice. These groups form two quantum computers where vibrational qubits are implemented and received. The lattice defines the communication channel along which a vibron delocalizes and interacts with a phonon bath. Using quasi-degenerate perturbation theory, vibron-phonon entanglement is taken into account through the effective Hamiltonian concept. A vibron is thus dressed by a virtual phonon cloud whereas a phonon is clothed by virtual vibronic transitions. It is shown that three quasi-degenerate dressed states define the relevant paths followed by a vibron to tunnel between the computers. When the coupling between the computers and the lattice is judiciously chosen, constructive interference takes place between these paths. Phonon-induced decoherence is minimized and a high-fidelity quantum state transfer occurs over a broad temperature range.
Point-of-care ultrasound education: the increasing role of simulation and multimedia resources.
Lewiss, Resa E; Hoffmann, Beatrice; Beaulieu, Yanick; Phelan, Mary Beth
2014-01-01
This article reviews the current technology, literature, teaching models, and methods associated with simulation-based point-of-care ultrasound training. Patient simulation appears particularly well suited for learning point-of-care ultrasound, which is a required core competency for emergency medicine and other specialties. Work hour limitations have reduced the opportunities for clinical practice, and simulation enables practicing a skill multiple times before it may be used on patients. Ultrasound simulators can be categorized into 2 groups: low and high fidelity. Low-fidelity simulators are usually static simulators, meaning that they have nonchanging anatomic examples for sonographic practice. Advantages are that the model may be reused over time, and some simulators can be homemade. High-fidelity simulators are usually high-tech and frequently consist of many computer-generated cases of virtual sonographic anatomy that can be scanned with a mock probe. This type of equipment is produced commercially and is more expensive. High-fidelity simulators provide students with an active and safe learning environment and make a reproducible standardized assessment of many different ultrasound cases possible. The advantages and disadvantages of using low- versus high-fidelity simulators are reviewed. An additional concept used in simulation-based ultrasound training is blended learning. Blended learning may include face-to-face or online learning often in combination with a learning management system. Increasingly, with simulation and Web-based learning technologies, tools are now available to medical educators for the standardization of both ultrasound skills training and competency assessment.
Simulations of High Speed Fragment Trajectories
NASA Astrophysics Data System (ADS)
Yeh, Peter; Attaway, Stephen; Arunajatesan, Srinivasan; Fisher, Travis
2017-11-01
Flying shrapnel from an explosion is capable of traveling at supersonic speeds and over distances much farther than expected due to aerodynamic interactions. Predicting the trajectories and stable tumbling modes of arbitrarily shaped fragments is a fundamental problem applicable to range safety calculations, damage assessment, and military technology. Traditional approaches rely on characterizing fragment flight using a single drag coefficient, which may be inaccurate for fragments with large aspect ratios. In our work we develop a procedure to simulate trajectories of arbitrarily shaped fragments with higher fidelity using high-performance computing. We employ a two-step approach in which the force and moment coefficients are first computed as a function of orientation using compressible computational fluid dynamics. The force and moment data are then input into a six-degree-of-freedom rigid-body dynamics solver to integrate trajectories in time. Results of these high-fidelity simulations allow us to further understand the flight dynamics and tumbling modes of a single fragment. Furthermore, we use these results to determine the validity and uncertainty of inexpensive methods such as the single-drag-coefficient model.
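A minimal planar sketch of the two-step approach (a made-up coefficient table stands in for the CFD step, and the mass properties are invented): tabulated force and moment coefficients versus angle of attack are interpolated inside a rigid-body integrator.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha_tab = np.radians(np.arange(-180, 181, 15))     # orientation sweep
cd_tab = 0.1 + 1.1 * np.sin(alpha_tab) ** 2          # stand-in "CFD" results
cl_tab = 0.8 * np.sin(2 * alpha_tab)
cm_tab = -0.05 * np.sin(alpha_tab)

rho, S, c, m, I = 1.225, 0.01, 0.1, 0.5, 4e-4        # air, area, chord, mass
def rhs(t, s):
    x, z, vx, vz, th, om = s
    V = np.hypot(vx, vz); gamma = np.arctan2(vz, vx)
    a = (th - gamma + np.pi) % (2 * np.pi) - np.pi   # angle of attack
    cd, cl, cm = (np.interp(a, alpha_tab, tab)
                  for tab in (cd_tab, cl_tab, cm_tab))
    q = 0.5 * rho * V ** 2
    D, L, M = q * S * cd, q * S * cl, q * S * c * cm
    return [vx, vz,
            (-D * np.cos(gamma) - L * np.sin(gamma)) / m,
            (-D * np.sin(gamma) + L * np.cos(gamma)) / m - 9.81,
            om, M / I]

s0 = [0, 0, 500.0, 50.0, 0.2, 20.0]                  # supersonic launch state
sol = solve_ivp(rhs, (0, 8), s0, max_step=1e-3)
print("range:", sol.y[0, -1], "m; tumble rate:", sol.y[5, -1], "rad/s")
```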
Quantum logic between remote quantum registers
NASA Astrophysics Data System (ADS)
Yao, N. Y.; Gong, Z.-X.; Laumann, C. R.; Bennett, S. D.; Duan, L.-M.; Lukin, M. D.; Jiang, L.; Gorshkov, A. V.
2013-02-01
We consider two approaches to dark-spin-mediated quantum computing in hybrid solid-state spin architectures. First, we review the notion of eigenmode-mediated unpolarized spin-chain state transfer and extend the analysis to various experimentally relevant imperfections: quenched disorder, dynamical decoherence, and uncompensated long-range coupling. In finite-length chains, the interplay between disorder-induced localization and decoherence yields a natural optimal channel fidelity, which we calculate. Long-range dipolar couplings induce a finite intrinsic lifetime for the mediating eigenmode; extensive numerical simulations of dipolar chains of lengths up to L=12 show remarkably high fidelity despite these decay processes. We further briefly consider the extension of the protocol to bosonic systems of coupled oscillators. Second, we introduce a quantum mirror based architecture for universal quantum computing that exploits all of the dark spins in the system as potential qubits. While this dramatically increases the number of qubits available, the composite operations required to manipulate dark-spin qubits significantly raise the error threshold for robust operation. Finally, we demonstrate that eigenmode-mediated state transfer can enable robust long-range logic between spatially separated nitrogen-vacancy registers in diamond; disorder-averaged numerics confirm that high-fidelity gates are achievable even in the presence of moderate disorder.
A two-qubit logic gate in silicon.
Veldhorst, M; Yang, C H; Hwang, J C C; Huang, W; Dehollain, J P; Muhonen, J T; Simmons, S; Laucht, A; Hudson, F E; Itoh, K M; Morello, A; Dzurak, A S
2015-10-15
Quantum computation requires qubits that can be coupled in a scalable manner, together with universal and high-fidelity one- and two-qubit logic gates. Many physical realizations of qubits exist, including single photons, trapped ions, superconducting circuits, single defects or atoms in diamond and silicon, and semiconductor quantum dots, with single-qubit fidelities that exceed the stringent thresholds required for fault-tolerant quantum computing. Despite this, high-fidelity two-qubit gates in the solid state that can be manufactured using standard lithographic techniques have so far been limited to superconducting qubits, owing to the difficulties of coupling qubits and dephasing in semiconductor systems. Here we present a two-qubit logic gate, which uses single spins in isotopically enriched silicon and is realized by performing single- and two-qubit operations in a quantum dot system using the exchange interaction, as envisaged in the Loss-DiVincenzo proposal. We realize CNOT gates via controlled-phase operations combined with single-qubit operations. Direct gate-voltage control provides single-qubit addressability, together with a switchable exchange interaction that is used in the two-qubit controlled-phase gate. By independently reading out both qubits, we measure clear anticorrelations in the two-spin probabilities of the CNOT gate.
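The decomposition mentioned above, a CNOT realized as a controlled-phase (CZ) operation sandwiched between single-qubit Hadamards on the target, can be checked numerically in a few lines:

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard
CZ = np.diag([1, 1, 1, -1])                      # controlled-phase
CNOT = np.kron(I2, H) @ CZ @ np.kron(I2, H)      # control = first qubit
print(np.round(CNOT).real)                       # standard CNOT matrix
```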
NASA Technical Reports Server (NTRS)
Radespiel, Rolf; Hemsch, Michael J.
2007-01-01
The complexity of modern military systems, as well as the cost and difficulty associated with experimentally verifying system and subsystem designs, makes high-fidelity, physics-based simulation a future alternative for design and development. The predictive ability of such simulations, for example computational fluid dynamics (CFD) and computational structural mechanics (CSM), has matured significantly. However, for numerical simulations to be used with confidence in design and development, quantitative measures of uncertainty must be available. The AVT 147 Symposium has been established to compile state-of-the-art methods of assessing computational uncertainty, to identify future research and development needs associated with these methods, and to present examples of how these needs are being addressed and how the methods are being applied. Papers were solicited that address uncertainty estimation associated with high-fidelity, physics-based simulations. The solicitation included papers that identify sources of error and uncertainty in numerical simulation from either the industry perspective or from the disciplinary or cross-disciplinary research perspective. Examples of the industry perspective were to include how computational uncertainty methods are used to reduce system risk in various stages of design or development.
Demonstration of universal parametric entangling gates on a multi-qubit lattice
Reagor, Matthew; Osborn, Christopher B.; Tezak, Nikolas; Staley, Alexa; Prawiroatmodjo, Guenevere; Scheer, Michael; Alidoust, Nasser; Sete, Eyob A.; Didier, Nicolas; da Silva, Marcus P.; Acala, Ezer; Angeles, Joel; Bestwick, Andrew; Block, Maxwell; Bloom, Benjamin; Bradley, Adam; Bui, Catvu; Caldwell, Shane; Capelluto, Lauren; Chilcott, Rick; Cordova, Jeff; Crossman, Genya; Curtis, Michael; Deshpande, Saniya; El Bouayadi, Tristan; Girshovich, Daniel; Hong, Sabrina; Hudson, Alex; Karalekas, Peter; Kuang, Kat; Lenihan, Michael; Manenti, Riccardo; Manning, Thomas; Marshall, Jayss; Mohan, Yuvraj; O’Brien, William; Otterbach, Johannes; Papageorge, Alexander; Paquette, Jean-Philip; Pelstring, Michael; Polloreno, Anthony; Rawat, Vijay; Ryan, Colm A.; Renzas, Russ; Rubin, Nick; Russel, Damon; Rust, Michael; Scarabelli, Diego; Selvanayagam, Michael; Sinclair, Rodney; Smith, Robert; Suska, Mark; To, Ting-Wai; Vahidpour, Mehrnoosh; Vodrahalli, Nagesh; Whyland, Tyler; Yadav, Kamal; Zeng, William; Rigetti, Chad T.
2018-01-01
We show that parametric coupling techniques can be used to generate selective entangling interactions for multi-qubit processors. By inducing coherent population exchange between adjacent qubits under frequency modulation, we implement a universal gate set for a linear array of four superconducting qubits. An average process fidelity of ℱ = 93% is estimated for three two-qubit gates via quantum process tomography. We establish the suitability of these techniques for computation by preparing a four-qubit maximally entangled state and comparing the estimated state fidelity with the expected performance of the individual entangling gates. In addition, we prepare an eight-qubit register in all possible bitstring permutations and monitor the fidelity of a two-qubit gate across one pair of these qubits. Across all these permutations, an average fidelity of ℱ = 91.6 ± 2.6% is observed. These results thus offer a path to a scalable architecture with high selectivity and low cross-talk. PMID:29423443
Fundamental Aeronautics Program: Overview of Project Work in Supersonic Cruise Efficiency
NASA Technical Reports Server (NTRS)
Castner, Raymond
2011-01-01
The Supersonics Project, part of NASA's Fundamental Aeronautics Program, contains a number of technical challenge areas which include sonic boom community response, airport noise, high altitude emissions, cruise efficiency, light weight durable engines/airframes, and integrated multi-discipline system design. This presentation provides an overview of the current (2011) activities in the supersonic cruise efficiency technical challenge, and is focused specifically on propulsion technologies. The intent is to develop and validate high-performance supersonic inlet and nozzle technologies. Additional work is planned for design and analysis tools for highly-integrated low-noise, low-boom applications. If successful, the payoffs include improved technologies and tools for optimized propulsion systems, propulsion technologies for a minimized sonic boom signature, and a balanced approach to meeting efficiency and community noise goals. In this propulsion area, the work is divided into advanced supersonic inlet concepts, advanced supersonic nozzle concepts, low fidelity computational tool development, high fidelity computational tools, and improved sensors and measurement capability. The current work in each area is summarized.
NASA Technical Reports Server (NTRS)
Castner, Ray
2012-01-01
The Supersonics Project, part of NASA's Fundamental Aeronautics Program, contains a number of technical challenge areas which include sonic boom community response, airport noise, high altitude emissions, cruise efficiency, light weight durable engines/airframes, and integrated multi-discipline system design. This presentation provides an overview of the current (2012) activities in the supersonic cruise efficiency technical challenge, and is focused specifically on propulsion technologies. The intent is to develop and validate high-performance supersonic inlet and nozzle technologies. Additional work is planned for design and analysis tools for highly-integrated low-noise, low-boom applications. If successful, the payoffs include improved technologies and tools for optimized propulsion systems, propulsion technologies for a minimized sonic boom signature, and a balanced approach to meeting efficiency and community noise goals. In this propulsion area, the work is divided into advanced supersonic inlet concepts, advanced supersonic nozzle concepts, low fidelity computational tool development, high fidelity computational tools, and improved sensors and measurement capability. The current work in each area is summarized.
Modeling Materials: Design for Planetary Entry, Electric Aircraft, and Beyond
NASA Technical Reports Server (NTRS)
Thompson, Alexander; Lawson, John W.
2014-01-01
NASA missions push the limits of what is possible. The development of high-performance materials must keep pace with the agency's demanding, cutting-edge applications. Researchers at NASA's Ames Research Center are performing multiscale computational modeling to accelerate development times and further the design of next-generation aerospace materials. Multiscale modeling combines several computationally intensive techniques ranging from the atomic level to the macroscale, passing output from one level as input to the next level. These methods are applicable to a wide variety of materials systems. For example: (a) ultra-high-temperature ceramics for hypersonic aircraft: we utilized the full range of multiscale modeling to characterize thermal protection materials for faster, safer air- and spacecraft; (b) planetary entry heat shields for space vehicles: we computed thermal and mechanical properties of ablative composites by combining several methods, from atomistic simulations to macroscale computations; (c) advanced batteries for electric aircraft: we performed large-scale molecular dynamics simulations of advanced electrolytes for ultra-high-energy-capacity batteries to enable long-distance electric aircraft service; and (d) shape-memory alloys for high-efficiency aircraft: we used high-fidelity electronic structure calculations to determine phase diagrams in shape-memory transformations. Advances in high-performance computing have been critical to the development of multiscale materials modeling. We used nearly one million processor hours on NASA's Pleiades supercomputer to characterize electrolytes with a fidelity that would otherwise be impossible. For this and other projects, Pleiades enables us to push the physics and accuracy of our calculations to new levels.
NASA Astrophysics Data System (ADS)
Herrick, Gregory Paul
The quest to accurately capture flow phenomena with length scales both short and long, and to accurately represent complex flow phenomena within disparately sized geometry, inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel computational simulations, thus facilitating higher-fidelity solutions of complicated geometries (due to the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid blocks per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has previously been done while maintaining high computational efficiency (i.e., minimal consumption of computational resources), and thus this paradigm shall be demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows and is not limited to detailed analyses of axial compressor flows. While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible; the previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research (experimental, theoretical, and computational) has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have raised the question, "What about the clearance flows?". This research begins to address that question.
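A minimal sketch of the block-allocation idea at the heart of the "multiple grid blocks per processing core" infrastructure (illustrative block sizes; the real scheme also splits blocks for fine-grain partitioning): greedy longest-processing-time packing of blocks onto cores by cell count.

```python
import heapq

def assign(block_cells, n_cores):
    """Pack grid blocks onto cores, largest first, always onto the
    least-loaded core; returns (load, core, block indices) per core."""
    heap = [(0, core, []) for core in range(n_cores)]
    heapq.heapify(heap)
    for b in sorted(range(len(block_cells)),
                    key=lambda i: -block_cells[i]):
        load, core, blocks = heapq.heappop(heap)
        blocks.append(b)
        heapq.heappush(heap, (load + block_cells[b], core, blocks))
    return sorted(heap, key=lambda item: item[1])

blocks = [900_000, 400_000, 350_000, 300_000, 120_000, 80_000, 50_000]
for load, core, assigned in assign(blocks, 3):
    print(f"core {core}: blocks {assigned}, cells {load}")
```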
2013-11-12
[Contractor report front matter; recoverable details: contractor Computational Dynamics Inc. (CDI); TARDEC contact Dr. Paramsothy Jayakumar. Cited reference: Shabana, A.A., Jayakumar, P., and Letherwood, M., "Soil Models and Vehicle System Dynamics," Applied Mechanics Reviews, Vol. 65(4), 2013.]
NASA Astrophysics Data System (ADS)
Moan, T.
2017-12-01
An overview of integrity management of offshore structures, with emphasis on the oil and gas energy sector, is given. Based on relevant accident experiences and the means to control the associated risks, accidents are categorized from a technical-physical as well as a human and organizational point of view. Structural risk relates to extreme actions as well as structural degradation. Risk mitigation measures, including adequate design criteria, inspection, repair and maintenance, as well as quality assurance and control of engineering processes, are briefly outlined. The current status of risk and reliability methodology to aid decisions in integrity management is briefly reviewed. Finally, the need to balance the uncertainties in data, methods, and computational effort, and to apply high-fidelity methods cautiously and under quality assurance and control so as to avoid human errors, is emphasized, together with a plea to develop both high-fidelity and efficient simplified methods for design.
Hua, Ming; Tao, Ming-Jie; Deng, Fu-Guo
2016-02-24
We propose a quantum processor for scalable quantum computation on microwave photons in distant one-dimensional superconducting resonators. It is composed of a common resonator R, acting as a quantum bus, and a number of distant resonators rj coupled to the bus at different positions, each assisted by a superconducting quantum interference device (SQUID), different from previous processors. R is coupled to one transmon qutrit, and the coupling strengths between rj and R can be fully tuned by the external flux through the SQUID. To show that the processor can be used to achieve universal quantum computation effectively, we present a scheme to complete a high-fidelity quantum state transfer between two distant microwave-photon resonators, and another for a high-fidelity controlled-phase gate on them. By using the technique for catching and releasing microwave photons from resonators, our processor may play an important role in quantum communication as well.
Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blyth, Taylor S.; Avramova, Maria
The research described in this PhD thesis contributes to the development of efficient methods for the utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four corresponding basic processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.
High-Fidelity Micromechanics Model Developed for the Response of Multiphase Materials
NASA Technical Reports Server (NTRS)
Aboudi, Jacob; Pindera, Marek-Jerzy; Arnold, Steven M.
2002-01-01
A new high-fidelity micromechanics model has been developed under funding from the NASA Glenn Research Center for predicting the response of multiphase materials with arbitrary periodic microstructures. The model's analytical framework is based on the homogenization technique, but the method of solution for the local displacement and stress fields borrows concepts previously employed in constructing the higher order theory for functionally graded materials. The resulting closed-form macroscopic and microscopic constitutive equations, valid for both uniaxial and multiaxial loading of periodic materials with elastic and inelastic constitutive phases, can be incorporated into a structural analysis computer code. Consequently, this model now provides an alternative, accurate method.
DOT National Transportation Integrated Search
2008-01-01
Computer simulations are often used in aviation studies. These simulation tools may require complex, high-fidelity aircraft models. Since many of the flight models used are third-party developed products, independent validation is desired prior to im...
High-Fidelity Piezoelectric Audio Device
NASA Technical Reports Server (NTRS)
Woodward, Stanley E.; Fox, Robert L.; Bryant, Robert G.
2003-01-01
ModalMax is a very innovative means of harnessing the vibration of a piezoelectric actuator to produce an energy-efficient, low-profile device with high-bandwidth, high-fidelity audio response. The piezoelectric audio device outperforms many commercially available speakers made using speaker cones. The piezoelectric device weighs substantially less (4 g) than speaker cones that use magnets (10 g). ModalMax devices have extreme fabrication simplicity: the entire audio device is fabricated by lamination. The simplicity of the design lends itself to lower cost. The piezoelectric audio device can be used without its acoustic chambers, resulting in a very low thickness of 0.023 in. (0.58 mm). The piezoelectric audio device can be completely encapsulated, which makes it very attractive for use in wet environments; encapsulation does not significantly alter the audio response. Its small size is applicable to many consumer electronic products, such as pagers, portable radios, headphones, laptop computers, computer monitors, toys, and electronic games. The audio device can also be used in automobile or aircraft sound systems.
Fast CNOT gate between two spatially separated atoms via shortcuts to adiabatic passage.
Liang, Yan; Song, Chong; Ji, Xin; Zhang, Shou
2015-09-07
Quantum logic gates are indispensable to quantum computation. One of the important qubit operations is the quantum controlled-NOT (CNOT) gate, which performs a NOT operation on a target qubit depending on the state of the control qubit. In this paper we present a scheme to realize the quantum CNOT gate between two spatially separated atoms via shortcuts to adiabatic passage. The influence of various decoherence processes on the fidelity is discussed. Strict numerical simulation results show that the fidelity of the CNOT gate is relatively high.
A probability-based approach for assessment of roadway safety hardware.
DOT National Transportation Integrated Search
2017-03-14
This report presents a general probability-based approach for assessment of roadway safety hardware (RSH). It was achieved using a reliability : analysis method and computational techniques. With the development of high-fidelity finite element (FE) m...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
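The control-variate structure at the heart of such multilevel estimators fits in a few lines. In this sketch the toy functions stand in for the high-fidelity HDG output and a reduced basis approximation, and the sample sizes are illustrative.

```python
import numpy as np

# Two-level variance reduction: E[f_hi] = E[f_lo] + E[f_hi - f_lo].
# Many cheap low-fidelity samples estimate the first term; a few expensive
# high-fidelity samples estimate the (small-variance) correction.
rng = np.random.default_rng(0)
f_hi = lambda x: np.exp(0.9 * x) + 0.05 * np.sin(5.0 * x)  # "HDG" stand-in
f_lo = lambda x: np.exp(0.9 * x)                           # "reduced basis"

x_cheap = rng.standard_normal(100_000)  # cheap level: many samples
x_dear = rng.standard_normal(500)       # expensive level: few samples

estimate = f_lo(x_cheap).mean() + (f_hi(x_dear) - f_lo(x_dear)).mean()
print(estimate)
```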
A quasi-3D wire approach to model pulmonary airflow in human airways.
Kannan, Ravishekar; Chen, Z J; Singh, Narender; Przekwas, Andrzej; Delvadia, Renishkumar; Tian, Geng; Walenga, Ross
2017-07-01
The models used for simulating airflow in the human airways are either 0-dimensional compartmental models or full 3-dimensional (3D) computational fluid dynamics (CFD) models. In the former, airways are treated as compartments, and the computations are performed with several assumptions, thereby generating a low-fidelity solution. The CFD method displays extremely high fidelity, since the solution is obtained by solving the conservation equations in a physiologically consistent geometry. However, CFD models (1) require millions of degrees of freedom to accurately describe the geometry and to reduce the discretization errors, (2) have convergence problems, and (3) require several days to simulate a few breathing cycles. In this paper, we present a novel, fast-running, and robust quasi-3D wire model for simulating the airflow in the human lung airway. The wire mesh is obtained by contracting the high-fidelity lung airway surface mesh to a system of connected wires with well-defined radii. The conservation equations are then solved in each wire. These wire meshes have around O(1000) degrees of freedom and hence are 3000 to 25 000 times faster than their CFD counterparts. The 3D spatial nature is also preserved, since the wires are contracted out of the actual lung STL surface. The pressure readings from the two approaches showed only minor differences (maximum error = 15%). In general, this formulation is fast and robust, allows geometric changes, and delivers high-fidelity solutions. Hence, this approach has great potential for more complicated problems, including modeling of constricted/diseased lung sections and calibrating the lung flow resistances through parameter inversion. Copyright © 2016 John Wiley & Sons, Ltd.
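A toy version of the wire idea, replacing each airway segment with a one-dimensional Poiseuille conductance and solving nodal pressures from flow conservation, is sketched below; the three-edge tree, radii, and boundary pressures are illustrative assumptions rather than the paper's actual formulation.

```python
import numpy as np

# Toy "wire" airway: each edge gets a Poiseuille conductance
# g = pi r^4 / (8 mu L), and flow conservation at each node gives a
# linear system for the nodal pressures.
mu = 1.8e-5                          # air viscosity, Pa s
edges = [(0, 1, 0.12, 9e-3),         # (node_i, node_j, length m, radius m)
         (1, 2, 0.05, 6e-3),
         (1, 3, 0.05, 6e-3)]
n = 4
G = np.zeros((n, n))                 # conductance-weighted graph Laplacian
for i, j, length, r in edges:
    g = np.pi * r**4 / (8.0 * mu * length)
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

known = {0: 100.0, 2: 0.0, 3: 0.0}   # fixed pressures: inlet and two outlets, Pa
free = [k for k in range(n) if k not in known]
b = -sum(G[np.ix_(free, [k])].ravel() * p for k, p in known.items())
p_free = np.linalg.solve(G[np.ix_(free, free)], b)
print(dict(zip(free, p_free)))       # pressure at the interior node
```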
Nonadiabatic holonomic quantum computation using Rydberg blockade
NASA Astrophysics Data System (ADS)
Kang, Yi-Hao; Chen, Ye-Hong; Shi, Zhi-Cheng; Huang, Bi-Hua; Song, Jie; Xia, Yan
2018-04-01
In this paper, we propose a scheme for realizing nonadiabatic holonomic computation assisted by two Rydberg atoms and shortcuts to adiabaticity (STA). The blockade effect induced by the strong Rydberg-mediated interaction between the two atoms makes it possible to simplify the dynamics of the system, and the STA helps us design pulses for implementing the holonomic computation with high fidelity. Numerical simulations show the scheme is immune to noise and resistant to decoherence. Therefore, the current scheme may provide some useful perspectives for realizing nonadiabatic holonomic computation.
2013-04-11
[Report documentation fragment] This project, performed by Computational Dynamics Inc. with Dr. Paramsothy Jayakumar (TARDEC) as technical representative, addresses high-fidelity modeling of vehicle dynamics. A literature review is being summarized and incorporated into the paper, and commentary provided by Dr. Jayakumar has been addressed.
A fault-tolerant addressable spin qubit in a natural silicon quantum dot
Takeda, Kenta; Kamioka, Jun; Otsuka, Tomohiro; Yoneda, Jun; Nakajima, Takashi; Delbecq, Matthieu R.; Amaha, Shinichi; Allison, Giles; Kodera, Tetsuo; Oda, Shunri; Tarucha, Seigo
2016-01-01
Fault-tolerant quantum computing requires high-fidelity qubits. This has been achieved in various solid-state systems, including isotopically purified silicon, but is yet to be accomplished in industry-standard natural (unpurified) silicon, mainly as a result of the dephasing caused by residual nuclear spins. This high fidelity can be achieved by speeding up the qubit operation and/or prolonging the dephasing time, that is, increasing the Rabi oscillation quality factor Q (the Rabi oscillation decay time divided by the π rotation time). In isotopically purified silicon quantum dots, only the second approach has been used, leaving the qubit operation slow. We apply the first approach to demonstrate an addressable fault-tolerant qubit using a natural silicon double quantum dot with a micromagnet that is optimally designed for fast spin control. This optimized design allows access to Rabi frequencies up to 35 MHz, which is two orders of magnitude greater than that achieved in previous studies. We find the optimum Q = 140 in this high-frequency range at a Rabi frequency of 10 MHz. This leads to a qubit fidelity of 99.6% measured via randomized benchmarking, which is the highest reported for natural silicon qubits and comparable to that obtained in isotopically purified silicon quantum dot-based qubits. This result can inspire contributions to quantum computing from industrial communities.
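The quality-factor arithmetic in the abstract can be unpacked directly; the sketch below reproduces the implied time scales for the reported optimum Q = 140 at a 10 MHz Rabi frequency, using the stated definition of Q and the fact that a π rotation takes half a Rabi period.

```python
# Rabi oscillation quality factor: Q = (Rabi decay time) / (pi rotation time).
f_rabi = 10e6                  # Rabi frequency where the optimum Q was found, Hz
t_pi = 1.0 / (2.0 * f_rabi)    # pi rotation = half a Rabi period = 50 ns
q = 140                        # reported optimum quality factor
t_decay = q * t_pi             # implied Rabi decay time = 7.0 microseconds
print(f"t_pi = {t_pi * 1e9:.0f} ns, implied decay time = {t_decay * 1e6:.1f} us")
```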
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, Panagiotis (Fermilab); Cary, John
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.
NASA Technical Reports Server (NTRS)
Whiffen, Gregory J.
2006-01-01
Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories. Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft, and will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a general nonlinear optimal control method, based on Bellman's principle, designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously.
Demonstration Of Ultra HI-FI (UHF) Methods
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.
2004-01-01
Computational aeroacoustics (CAA) requires efficient, high-resolution simulation tools. Most current techniques utilize finite-difference approaches because high-order accuracy is considered too difficult or expensive to achieve with finite volume or finite element methods. However, a novel finite volume approach (Ultra HI-FI, or UHF) is presented which utilizes Hermite fluxes and can achieve both arbitrary accuracy and fidelity in space and time. The technique can be applied to unstructured grids with some loss of fidelity, or with multi-block structured grids for maximum efficiency and resolution. In either paradigm it is possible to resolve ultra-short waves (fewer than 2 points per wavelength, PPW). This is demonstrated here by solving the 4th CAA workshop Category 1 Problem 1.
Enforcing elemental mass and energy balances for reduced order models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J.; Agarwal, K.; Sharma, P.
2012-01-01
Development of economically feasible gasification and carbon capture, utilization and storage (CCUS) technologies requires a variety of software tools to optimize the designs of not only the key devices involved (e.g., gasifier, CO₂ adsorber) but also the entire power generation system. High-fidelity models such as Computational Fluid Dynamics (CFD) models are capable of accurately simulating the detailed flow dynamics, heat transfer, and chemistry inside the key devices. However, the integration of CFD models within steady-state process simulators, and subsequent optimization of the integrated system, still presents significant challenges due to the scale differences in both time and length, as well as the high computational cost. A reduced order model (ROM) generated from a high-fidelity model can serve as a bridge between the models of different scales. While high-fidelity models are built upon the principles of mass, momentum, and energy conservation, ROMs are usually developed from regression-type equations, and hence their predictions may violate the mass and energy conservation laws. A high-fidelity model may also have a mass and energy balance problem if it is not tightly converged. Conservation of mass and energy is important when a ROM is integrated into a flowsheet for process simulation of an entire chemical or power generation system, especially when recycle streams are connected to the modeled device. As part of the Carbon Capture Simulation Initiative (CCSI) project supported by the U.S. Department of Energy, we developed a software framework for generating ROMs from CFD simulations and integrating them with Process Modeling Environments (PMEs) for system-wide optimization. This paper presents a method to correct the results of a high-fidelity model or a ROM such that the elemental mass and energy are conserved exactly. Correction factors for the flow rates of individual species in the product streams are solved for using a minimization algorithm based on the Lagrange multiplier method. Enthalpies of product streams are also modified to enforce the energy balance. The approach is illustrated for two ROMs, one based on a CFD model of an entrained-flow gasifier and the other based on a CFD model of a multiphase CO₂ adsorber.
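A minimal sketch of the balance-enforcement step follows, using SciPy's constrained minimizer in place of the paper's explicit Lagrange multiplier solve; the species set, element matrix, and flow values are illustrative, not the CCSI gasifier or adsorber models.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of balance enforcement: minimally adjust ROM-predicted outlet species
# flows so elemental flows match the inlet exactly. Species, element matrix,
# and flows are illustrative stand-ins.
species = ["CO", "CO2", "H2", "H2O"]
E = np.array([[1, 1, 0, 0],     # C atoms per mole of each species
              [1, 2, 0, 1],     # O atoms
              [0, 0, 2, 2]])    # H atoms
n_rom = np.array([0.48, 0.26, 0.61, 0.33])          # ROM outlet flows, kmol/s
elem_in = E @ np.array([0.50, 0.25, 0.60, 0.35])    # elemental inflow (consistent)

# Smallest relative correction subject to exact elemental balance E @ n = elem_in
objective = lambda n: np.sum(((n - n_rom) / n_rom) ** 2)
constraint = {"type": "eq", "fun": lambda n: E @ n - elem_in}
res = minimize(objective, n_rom, constraints=constraint)
print(dict(zip(species, np.round(res.x, 4))))
```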
Coastal Storm Hazards from Virginia to Maine
2015-11-01
[Report fragment] In this study, storm surge, tide, waves, wind, atmospheric pressure, and currents were the dominant storm responses computed. NACCS goals also included evaluating the effect of future sea level change (SLC) on coastal storm hazards and vulnerability nationally (USACE 2015). The computed high-fidelity responses included storm surge, astronomical tide, waves, wave effects on water levels, storm duration, wind, and currents.
AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 4
2011-01-01
[Bulletin fragment] Simulations using low-fidelity Reynolds-averaged models illustrate the limited predictive capabilities of these schemes. The AHPCRC group has used their models to predict nonuniform concentration profiles across small channels as a result of variations in the driving force.
An addressable quantum dot qubit with fault-tolerant control-fidelity.
Veldhorst, M; Hwang, J C C; Yang, C H; Leenstra, A W; de Ronde, B; Dehollain, J P; Muhonen, J T; Hudson, F E; Itoh, K M; Morello, A; Dzurak, A S
2014-12-01
Exciting progress towards spin-based quantum computing has recently been made with qubits realized using nitrogen-vacancy centres in diamond and phosphorus atoms in silicon. For example, long coherence times were made possible by the presence of spin-free isotopes of carbon and silicon. However, despite promising single-atom nanotechnologies, there remain substantial challenges in coupling such qubits and addressing them individually. Conversely, lithographically defined quantum dots have an exchange coupling that can be precisely engineered, but strong coupling to noise has severely limited their dephasing times and control fidelities. Here, we combine the best aspects of both spin qubit schemes and demonstrate a gate-addressable quantum dot qubit in isotopically engineered silicon with a control fidelity of 99.6%, obtained via Clifford-based randomized benchmarking and consistent with that required for fault-tolerant quantum computing. This qubit has dephasing time T2* = 120 μs and coherence time T2 = 28 ms, both orders of magnitude larger than in other types of semiconductor qubit. By gate-voltage-tuning the electron g*-factor we can Stark shift the electron spin resonance frequency by more than 3,000 times the 2.4 kHz electron spin resonance linewidth, providing a direct route to large-scale arrays of addressable high-fidelity qubits that are compatible with existing manufacturing technologies.
Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mason, B. H.; Walsh, J. L.
2001-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multidisciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper extends the previous study: the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
High Fidelity Simulations of Plume Impingement to the International Space Station
NASA Technical Reports Server (NTRS)
Lumpkin, Forrest E., III; Marichalar, Jeremiah; Stewart, Benedicte D.
2012-01-01
With the retirement of the Space Shuttle, the United States now depends on recently developed commercial spacecraft to supply the International Space Station (ISS) with cargo. These new vehicles supplement ones from international partners including the Russian Progress, the European Autonomous Transfer Vehicle (ATV), and the Japanese H-II Transfer Vehicle (HTV). Furthermore, to carry crew to the ISS and supplement the capability currently provided exclusively by the Russian Soyuz, new designs and a refinement to a cargo vehicle design are in work. Many of these designs include features such as nozzle scarfing or simultaneous firing of multiple thrusters resulting in complex plumes. This results in a wide variety of complex plumes impinging upon the ISS. Therefore, to ensure safe "proximity operations" near the ISS, the need for accurate and efficient high fidelity simulation of plume impingement to the ISS is as high as ever. A capability combining computational fluid dynamics (CFD) and the Direct Simulation Monte Carlo (DSMC) techniques has been developed to properly model the large density variations encountered as the plume expands from the high pressure in the combustion chamber to the near vacuum conditions at the orbiting altitude of the ISS. Details of the computational tools employed by this method, including recent software enhancements and the best practices needed to achieve accurate simulations, are discussed. Several recent examples of the application of this high fidelity capability are presented. These examples highlight many of the real world, complex features of plume impingement that occur when "visiting vehicles" operate in the vicinity of the ISS.
Raster-Based Approach to Solar Pressure Modeling
NASA Technical Reports Server (NTRS)
Wright, Theodore W. II
2013-01-01
An algorithm has been developed to take advantage of the graphics processing hardware in modern computers to efficiently compute high-fidelity solar pressure forces and torques on spacecraft, taking into account the possibility of self-shading due to the articulation of spacecraft components such as solar arrays. The process is easily extended to compute other results that depend on three-dimensional attitude analysis, such as solar array power generation or free molecular flow drag. The impact of photons upon a spacecraft introduces small forces and moments. The magnitude and direction of the forces depend on the material properties of the spacecraft components being illuminated. Which parts of the components are lit depends on the orientation of the craft with respect to the Sun, as well as the gimbal angles for any significant moving external parts (typically solar arrays). Some components may shield others from the Sun. The purpose of this innovation is to enable high-fidelity computation of solar pressure and power generation effects for illuminated portions of spacecraft, taking into account self-shading from spacecraft attitude and movable components. The key idea is to compute results that depend on complicated geometry by using an image to break the problem into thousands or millions of sub-problems with simple geometry; the results from the simpler problems are then combined to give high-fidelity results for the full geometry. This process is performed by constructing a 3D model of a spacecraft in an appropriate computer language (OpenGL) and running that model on a modern computer's 3D-accelerated video processor. This quickly and accurately generates a view of the model (as shown on a computer screen) that takes rotation and articulation of spacecraft components into account. When this view is interpreted as the spacecraft as seen by the Sun, only the portions of the craft visible in the view are illuminated. The view as shown on the computer screen is composed of up to millions of pixels. Each of those pixels is associated with a small illuminated area of the spacecraft. For each pixel, it is possible to compute its position, angle (surface normal) from the view direction, and the spacecraft material (and therefore the optical coefficients) associated with that area. With this information, the area associated with each pixel can be modeled as a simple flat plate for calculating solar pressure. The vector sum of these individual flat-plate models is a high-fidelity approximation of the solar pressure forces and torques on the whole vehicle. In addition to using the optical coefficients associated with each spacecraft material to calculate solar pressure, a power generation coefficient is added for computing solar array power generation from the sum of the illuminated areas. Similarly, other area-based calculations, such as free molecular flow drag, are also enabled. Because the model rendering is separated from the other calculations, it is relatively easy to add a new model to explore a new vehicle or mission configuration. Adding a new model currently requires writing OpenGL code, but a future version might read a mesh file exported from a computer-aided design (CAD) system to enable very rapid turnaround for new designs.
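The per-pixel flat-plate sum at the core of the algorithm is easy to sketch. Here the rendered view is faked with random outward normals and a single material; in the real method each pixel's normal, area, and optical coefficients come from the OpenGL rendering, and all coefficient values below are illustrative.

```python
import numpy as np

# Per-pixel flat-plate solar pressure sum over a (fake) rendered view.
P_SUN = 4.56e-6                         # solar radiation pressure at 1 AU, N/m^2
PIXEL_AREA = 1.0e-4                     # illuminated area per pixel, m^2
s_hat = np.array([0.0, 0.0, -1.0])      # photon travel direction (Sun to craft)

rng = np.random.default_rng(1)
normals = rng.normal(size=(64, 64, 3))  # stand-in for per-pixel surface normals
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
c_spec, c_diff = 0.3, 0.1               # one material's optical coefficients

cos_t = np.clip(-(normals @ s_hat), 0.0, None)   # 0 where a pixel faces away
# Standard flat-plate model: absorbed + specular + diffuse momentum transfer
force_px = P_SUN * PIXEL_AREA * cos_t[..., None] * (
    (1.0 - c_spec) * s_hat
    - 2.0 * (c_spec * cos_t + c_diff / 3.0)[..., None] * normals)
total_force = force_px.reshape(-1, 3).sum(axis=0)  # vector sum over all pixels
print(total_force)
```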
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Bednarcyk, Brett A.; Waas, Anthony M.; Arnold, Steven M.
2012-01-01
The smeared crack band theory is implemented within the generalized method of cells and high-fidelity generalized method of cells micromechanics models to capture progressive failure within the constituents of a composite material while retaining objectivity with respect to the size of the discretization elements used in the model. A repeating unit cell containing 13 randomly arranged fibers is modeled and subjected to a combination of transverse tension/compression and transverse shear loading. The implementation is verified against experimental data (where available) and against an equivalent finite element model utilizing the same implementation of the crack band theory. To evaluate the performance of the crack band theory within a repeating unit cell that is more amenable to a multiscale implementation, a single fiber is modeled with the generalized method of cells and high-fidelity generalized method of cells using a relatively coarse subcell mesh, which is subjected to the same loading scenarios as the multiple-fiber repeating unit cell. The generalized method of cells and high-fidelity generalized method of cells models are validated against a very refined finite element model.
High Fidelity BWR Fuel Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Su Jong
This report describes the Consortium for Advanced Simulation of Light Water Reactors (CASL) work conducted for completion of the Thermal Hydraulics Methods (THM) Level 3 milestone THM.CFD.P13.03: High Fidelity BWR Fuel Simulation. High-fidelity computational fluid dynamics (CFD) simulation for a Boiling Water Reactor (BWR) was conducted to investigate the applicability and robustness of BWR closures. As a preliminary study, a CFD model with simplified Ferrule spacer grid geometry from the NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) benchmark was implemented, and the performance of a multiphase segregated solver with baseline boiling closures was evaluated. Although the mean values of void fraction and exit quality of the CFD result for BFBT case 4101-61 agreed with experimental data, the local void distribution was not predicted accurately. Mesh quality was one of the critical factors in obtaining a converged result, and the stability and robustness of the simulation were mainly affected by the mesh quality and the combination of BWR closure models. In addition, CFD modeling of the fully detailed spacer grid geometry with mixing vanes is necessary to improve the accuracy of the CFD simulation.
Experimental investigation of a four-qubit linear-optical quantum logic circuit
NASA Astrophysics Data System (ADS)
Stárek, R.; Mičuda, M.; Miková, M.; Straka, I.; Dušek, M.; Ježek, M.; Fiurášek, J.
2016-09-01
We experimentally demonstrate and characterize a four-qubit linear-optical quantum logic circuit. Our robust and versatile scheme exploits encoding of two qubits into polarization and path degrees of single photons and involves two crossed inherently stable interferometers. This approach allows us to design a complex quantum logic circuit that combines a genuine four-qubit C3Z gate and several two-qubit and single-qubit gates. The C3Z gate introduces a sign flip if and only if all four qubits are in the computational state |1>. We verify high-fidelity performance of this central four-qubit gate using Hofmann bounds on quantum gate fidelity and Monte Carlo fidelity sampling. We also experimentally demonstrate that the quantum logic circuit can generate genuine multipartite entanglement and we certify the entanglement with the use of suitably tailored entanglement witnesses.
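For reference, the Hofmann bound mentioned above brackets a process fidelity using average state fidelities measured in two complementary input bases; the sketch below shows the arithmetic with illustrative fidelity values, not the paper's measured data.

```python
# Sketch of the Hofmann bound used to certify a multi-qubit gate: average
# state fidelities F1 and F2 in two complementary input bases bracket the
# process fidelity, F1 + F2 - 1 <= F_p <= min(F1, F2). Values are illustrative.
f1 = 0.95   # average fidelity over computational-basis inputs (assumed)
f2 = 0.93   # average fidelity over a complementary superposition basis (assumed)
lower, upper = f1 + f2 - 1.0, min(f1, f2)
print(f"{lower:.2f} <= F_process <= {upper:.2f}")
```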
High Fidelity Modeling of Field Reversed Configuration (FRC) Thrusters
2015-10-01
[Briefing fragment] Discontinuous Galerkin method notes: numerical dispersion depends on the Riemann solver; variables are allowed to be discontinuous at the cell interfaces, which makes the method conservative; Riemann problems are solved at each interface to compute fluxes and are the source of dissipation.
Fick, Lambert H.; Merzari, Elia; Hassan, Yassin A.
2017-02-20
Computational analyses of fluid flow through packed pebble bed domains using the Reynolds-averaged Navier-Stokes framework have had limited success in the past. Because of a lack of high-fidelity experimental or computational data, optimization of Reynolds-averaged closure models for these geometries has not been extensively developed. In the present study, direct numerical simulation was employed to develop a high-fidelity database that can be used for optimizing Reynolds-averaged closure models for pebble bed flows. A face-centered cubic domain with periodic boundaries was used. Flow was simulated at a Reynolds number of 9308 and cross-verified by using available quasi-DNS data. During the simulations, low-frequency instability modes were observed that affected the stationary solution. These instabilities were investigated by using the method of proper orthogonal decomposition, and a correlation was found between the time-dependent asymmetry of the averaged velocity profile data and the behavior of the highest-energy eigenmodes.
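Proper orthogonal decomposition of snapshot data, as used here to study the instability modes, reduces to an SVD of the mean-subtracted snapshot matrix; the sketch below uses random data as a placeholder for the DNS velocity fields.

```python
import numpy as np

# Sketch of snapshot POD: stack flow snapshots as columns, remove the mean,
# and take an SVD. Columns of `modes` are the POD modes; singular values
# give the energy ranking.
rng = np.random.default_rng(2)
n_points, n_snapshots = 5000, 200
snapshots = rng.standard_normal((n_points, n_snapshots))  # placeholder data

fluctuations = snapshots - snapshots.mean(axis=1, keepdims=True)
modes, sigma, time_coeffs = np.linalg.svd(fluctuations, full_matrices=False)

energy_fraction = sigma**2 / np.sum(sigma**2)   # energy captured per POD mode
print("leading-mode energy fractions:", np.round(energy_fraction[:5], 4))
```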
Stabilization Approaches for Linear and Nonlinear Reduced Order Models
NASA Astrophysics Data System (ADS)
Rezaian, Elnaz; Wei, Mingjun
2017-11-01
A major concern is establishing reduced order models (ROMs) as reliable representatives of the dynamics inherent in high-fidelity simulations while achieving fast computation. In practice, this comes down to the stability and accuracy of the ROMs. Given the inviscid nature of the Euler equations, it becomes more challenging to achieve stability, especially where moving discontinuities exist. Originally unstable linear and nonlinear ROMs are stabilized here by two approaches. First, a hybrid method is developed by integrating two different stabilization algorithms; at the same time, the symmetry inner product is introduced in the generation of the ROMs for its known robust behavior for compressible flows. Results show a notable improvement in computational efficiency and robustness compared to similar approaches. Second, a new stabilization algorithm is developed specifically for nonlinear ROMs. This method adopts Particle Swarm Optimization to enforce a bounded ROM response for minimum discrepancy between the high-fidelity simulation and the ROM outputs. Promising results are obtained in its application to the nonlinear ROM of an inviscid fluid flow with discontinuities. Supported by ARL.
Quantum information processing with long-wavelength radiation
NASA Astrophysics Data System (ADS)
Murgia, David; Weidt, Sebastian; Randall, Joseph; Lekitsch, Bjoern; Webster, Simon; Navickas, Tomas; Grounds, Anton; Rodriguez, Andrea; Webb, Anna; Standing, Eamon; Pearce, Stuart; Sari, Ibrahim; Kiang, Kian; Rattanasonti, Hwanjit; Kraft, Michael; Hensinger, Winfried
To this point, the entanglement of ions has predominantly been performed using lasers. Using long wavelength radiation with static magnetic field gradients provides an architecture to simplify construction of a large scale quantum computer. The use of microwave-dressed states protects against decoherence from fluctuating magnetic fields, with radio-frequency fields used for qubit manipulation. I will report the realisation of spin-motion entanglement using long-wavelength radiation, and a new method to efficiently prepare dressed-state qubits and qutrits, reducing experimental complexity of gate operations. I will also report demonstration of ground state cooling using long wavelength radiation, which may increase two-qubit entanglement fidelity. I will then report demonstration of a high-fidelity long-wavelength two-ion quantum gate using dressed states. Combining these results with microfabricated ion traps allows for scaling towards a large scale ion trap quantum computer, and provides a platform for quantum simulations of fundamental physics. I will report progress towards the operation of microchip ion traps with extremely high magnetic field gradients for multi-ion quantum gates.
Multifidelity, Multidisciplinary Design Under Uncertainty with Non-Intrusive Polynomial Chaos
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Gumbert, Clyde
2017-01-01
The primary objective of this work is to develop an approach for multifidelity uncertainty quantification and to lay the framework for future design-under-uncertainty efforts. In this study, multifidelity describes both the fidelity of the modeling of the physical systems and the difference in the uncertainty in each of the models. For computational efficiency, a multifidelity surrogate modeling approach based on non-intrusive polynomial chaos using the point-collocation technique is developed for the treatment of both multifidelity modeling and multifidelity uncertainty modeling. Two stochastic model problems are used to demonstrate the developed methodologies: a transonic airfoil model and a multidisciplinary aircraft analysis model. In both cases, the multifidelity modeling approach was able to reproduce the output uncertainty predicted by the high-fidelity model at a significant reduction in computational cost.
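A minimal sketch of non-intrusive polynomial chaos with point collocation follows, for a single standard-normal input; the stand-in model, order, and oversampling factor are illustrative assumptions, not the paper's airfoil or aircraft problems.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Point-collocation NIPC: sample the input, evaluate the model, and fit
# probabilists' Hermite coefficients by least squares.
rng = np.random.default_rng(3)
model = lambda xi: np.cos(0.8 * xi) + 0.1 * xi**2   # stand-in for a CFD QoI

order = 4
xi = rng.standard_normal(2 * (order + 1))   # ~2x oversampling of the basis size
Psi = hermevander(xi, order)                # collocation matrix Psi[i, k] = He_k(xi_i)
coeffs, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

# Orthogonality E[He_m He_n] = n! delta_mn yields the statistics directly
norms = np.array([math.factorial(k) for k in range(order + 1)])
mean, variance = coeffs[0], np.sum(coeffs[1:] ** 2 * norms[1:])
print(mean, variance)
```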
Using Cryptography to Improve Conjunction Analysis
NASA Astrophysics Data System (ADS)
Hemenway, B.; Welser, B.; Baiocchi, D.
2012-09-01
Coordination of operations between satellite operators is becoming increasingly important to prevent collisions. Unfortunately, this coordination is often handicapped by a lack of trust. Coordination and cooperation between satellite operators can take many forms; however, one specific area where cooperation would yield significant benefits is the computation of conjunction analyses. Passively collected orbital data are generally of too low fidelity to be of use in conjunction analyses. Each operator, however, maintains high-fidelity data about its own satellites, and these data are significantly more valuable in calculating conjunction analyses than the lower-fidelity data. If operators were to share their high-fidelity data, overall space situational awareness could be improved. At present, many operators do not share data, and as a consequence space situational awareness suffers. Restrictive data-sharing policies are primarily motivated by privacy concerns on the part of the satellite operators, as each operator is reluctant or unwilling to share data that might compromise its political or commercial interests. In order to perform the necessary conjunction analyses while still maintaining the privacy of their own data, a few operators have entered data-sharing agreements. These operators provide their private data to a trusted outside party, who then performs the conjunction analyses and reports the results to the operators. These agreements are not an ideal solution, as they require a degree of trust between the parties, and the cost of employing the trusted party can be large. In this work, we present and analyze cryptographic tools that would allow satellite operators to securely calculate conjunction analyses without the help of a trusted outside party, while provably maintaining the privacy of their own orbital information. For example, recent advances in cryptographic protocols, specifically in the area of secure Multiparty Computation (MPC), have the potential to allow satellite operators to perform the necessary conjunction analyses without the need to reveal their orbital information to anyone. This talk will describe how MPC works and how we propose to use it to facilitate secure information sharing between satellite operators.
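The flavor of secure multiparty computation can be conveyed with additive secret sharing, its simplest building block; the sketch below lets two parties open only a sum, with each individual share uniformly random. A real conjunction-analysis protocol is far more involved than this toy.

```python
import secrets

# Additive secret sharing: two operators jointly open a sum (stand-in for a
# jointly computed quantity) without revealing their private inputs.
P = 2**61 - 1                       # public prime modulus

def share(x):
    r = secrets.randbelow(P)
    return r, (x - r) % P           # either share alone reveals nothing about x

a1, a2 = share(123456)              # operator A splits its private value
b1, b2 = share(654321)              # operator B splits its private value

# Each party locally adds the shares it holds; only combining both partial
# sums opens the result a + b, never the individual inputs.
s1, s2 = (a1 + b1) % P, (a2 + b2) % P
print((s1 + s2) % P)                # 777777
```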
NASA Astrophysics Data System (ADS)
Yamaguchi, Masahiro; Haneishi, Hideaki; Fukuda, Hiroyuki; Kishimoto, Junko; Kanazawa, Hiroshi; Tsuchida, Masaru; Iwama, Ryo; Ohyama, Nagaaki
2006-01-01
In addition to the great advancement of high-resolution and large-screen imaging technology, the issue of color is now receiving considerable attention as an aspect distinct from image resolution. It is difficult to reproduce the original color of a subject in conventional imaging systems, and this obstructs applications of visual communication systems in telemedicine, electronic commerce, and digital museums. To break through the limitation of conventional RGB three-primary systems, the "Natural Vision" (NV) project aims at an innovative video and still-image communication technology with high-fidelity color reproduction capability, based on spectral information. This paper summarizes the results of the NV project, including the development of multispectral and multiprimary imaging technologies and experimental investigations of applications to medicine, digital archives, electronic commerce, and computer graphics.
The Computing And Interdisciplinary Systems Office: Annual Review and Planning Meeting
NASA Technical Reports Server (NTRS)
Lytle, John K.
2003-01-01
The goal of this research is to develop an advanced engineering analysis system that enables high-fidelity, multi-disciplinary, full propulsion system simulations to be performed early in the design process (a virtual test cell that integrates propulsion and information technologies). This will enable rapid, high-confidence, cost-effective design of revolutionary systems.
Progress in fast, accurate multi-scale climate simulations
Collins, W. D.; Johansen, H.; Evans, K. J.; ...
2015-06-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Quantum computation with realistic magic-state factories
NASA Astrophysics Data System (ADS)
O'Gorman, Joe; Campbell, Earl T.
2017-03-01
Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10⁻⁴ error gates and the availability of long-range interactions.
Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)
2000-01-01
HiMAP is a three-level parallel middleware that can be interfaced to a large-scale global design environment for code-independent, multidisciplinary analysis using high-fidelity equations. Aerospace technology needs are rapidly changing, and computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computational tools are inadequate for modern aerospace design needs; advanced, modular computational tools are required, such as those that incorporate the technology of massively parallel processors (MPP).
High-Fidelity Computational Aerodynamics of Multi-Rotor Unmanned Aerial Vehicles
NASA Technical Reports Server (NTRS)
Ventura Diaz, Patricia; Yoon, Seokkwan
2018-01-01
High-fidelity Computational Fluid Dynamics (CFD) simulations have been carried out for several multi-rotor Unmanned Aerial Vehicles (UAVs). Three vehicles have been studied: the classic quadcopter DJI Phantom 3, an unconventional quadcopter specialized for forward flight, the SUI Endurance, and an innovative concept for Urban Air Mobility (UAM), the Elytron 4S UAV. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, and a hybrid turbulence model. The DJI Phantom 3 is simulated with different rotors and with both a simplified airframe and the real airframe including landing gear and a camera. The effects of weather are studied for the DJI Phantom 3 quadcopter in hover. The SUI Endurance original design is compared in forward flight to a new configuration conceived by the authors, the hybrid configuration, which gives a large improvement in forward thrust. The Elytron 4S UAV is simulated in helicopter mode and in airplane mode. Understanding the complex flows in multi-rotor vehicles will help design quieter, safer, and more efficient future drones and UAM vehicles.
Experimental entanglement purification of arbitrary unknown states.
Pan, Jian-Wei; Gasparoni, Sara; Ursin, Rupert; Weihs, Gregor; Zeilinger, Anton
2003-05-22
Distribution of entangled states between distant locations is essential for quantum communication over large distances. But owing to unavoidable decoherence in the quantum communication channel, the quality of entangled states generally decreases exponentially with the channel length. Entanglement purification--a way to extract a subset of states of high entanglement and high purity from a large set of less entangled states--is thus needed to overcome decoherence. Besides its important application in quantum communication, entanglement purification also plays a crucial role in error correction for quantum computation, because it can significantly increase the quality of logic operations between different qubits. Here we demonstrate entanglement purification for general mixed states of polarization-entangled photons using only linear optics. Typically, one photon pair of fidelity 92% could be obtained from two pairs, each of fidelity 75%. In our experiments, decoherence is overcome to the extent that the technique would achieve tolerable error rates for quantum repeaters in long-distance quantum communication. Our results also imply that the requirement of high-accuracy logic operations in fault-tolerant quantum computation can be considerably relaxed.
Automated Euler and Navier-Stokes Database Generation for a Glide-Back Booster
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Rogers, Stuart E.; Aftosmis, Mike J.; Pandya, Shishir A.; Ahmad, Jasim U.; Tejnil, Edward
2004-01-01
The past two decades have seen a sustained increase in the use of high-fidelity Computational Fluid Dynamics (CFD) in basic research, aircraft design, and the analysis of post-design issues. As the fidelity of a CFD method increases, the number of cases that can be readily and affordably computed greatly diminishes. However, computer speeds now exceed 2 GHz, hundreds of processors are currently available and more affordable, and advances in parallel CFD algorithms scale more readily with large numbers of processors. All of these factors make it feasible to compute thousands of high-fidelity cases. However, there still remains the overwhelming task of monitoring the solution process. This paper presents an approach to automate the CFD solution process. A new software tool, AeroDB, is used to compute thousands of Euler and Navier-Stokes solutions for a second-generation glide-back booster in one week. The solution process exploits a common job-submission grid environment, the NASA Information Power Grid (IPG), using 13 computers located at 4 different geographical sites. Process automation and web-based access to a MySql database greatly reduce the user workload, removing much of the tedium and the tendency for user input errors. The AeroDB framework works as follows: the user submits or deletes jobs, monitors AeroDB's progress, and retrieves data and plots via a web portal. Once a job is in the database, a job launcher uses an IPG resource broker to decide which computers are best suited to run the job. Job/code requirements, the number of free CPUs on a remote system, and queue lengths are some of the parameters the broker takes into account. The Globus software provides secure services for user authentication, remote shell execution, and secure file transfers over an open network. AeroDB automatically decides when a job is completed. Currently, the Cart3D unstructured flow solver is used for the Euler equations, and the Overflow structured overset flow solver is used for the Navier-Stokes equations. Other codes can be readily included in the AeroDB framework.
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct-evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
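The database-driven surrogate loop described above can be sketched compactly: fit a radial-basis-function model to a small database of expensive evaluations, search the surrogate, evaluate the best candidate, and refit. The one-dimensional objective below is a cheap stand-in for a high-fidelity analysis, and all settings are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

# Surrogate-based optimization: each new "high-fidelity" evaluation is added
# to the database, raising the fidelity of the approximation model.
expensive = lambda x: (x - 0.3) ** 2 + 0.05 * np.sin(12.0 * x)

x_db = np.linspace(0.0, 1.0, 5)         # initial database of designs
y_db = expensive(x_db)
for _ in range(6):
    surrogate = RBFInterpolator(x_db[:, None], y_db, smoothing=1e-10)
    res = minimize(lambda x: float(surrogate(np.atleast_2d(x))[0]),
                   x0=[0.5], bounds=[(0.0, 1.0)])
    x_new = float(res.x[0])
    x_db = np.append(x_db, x_new)       # evaluate candidate and refit
    y_db = np.append(y_db, expensive(x_new))
print(x_db[np.argmin(y_db)], y_db.min())
```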
NASA Technical Reports Server (NTRS)
Derkevorkian, Armen; Peterson, Lee; Kolaini, Ali R.; Hendricks, Terry J.; Nesmith, Bill J.
2016-01-01
An analytic approach is demonstrated to reveal potential pyroshock-driven dynamic effects causing power losses in the Thermo-Electric (TE) module bars of the Mars Science Laboratory (MSL) Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). This study utilizes high-fidelity finite element analysis with SIERRA/PRESTO codes to estimate wave propagation effects due to large-amplitude, suddenly applied pyroshock loads in the MMRTG. A high-fidelity model of the TE module bar was created with approximately 30 million degrees-of-freedom (DOF). First, a quasi-static preload was applied on top of the TE module bar; then transient tri-axial acceleration inputs were simultaneously applied on the preloaded module. The applied input acceleration signals were measured during MMRTG shock qualification tests performed at the Jet Propulsion Laboratory. An explicit finite element solver in the SIERRA/PRESTO computational environment, along with a 3000-processor parallel supercomputing framework at NASA Ames, was used for the simulation. The simulation results were investigated both qualitatively and quantitatively. The predicted shock wave propagation results provide detailed structural responses throughout the TE module bar, and key insights into the dynamic response (i.e., loads, displacements, accelerations) of critical internal spring/piston compression systems, TE materials, and internal component interfaces in the MMRTG TE module bar. They also provide confidence in the viability of this high-fidelity modeling scheme to accurately predict shock wave propagation patterns within complex structures. This analytic approach is envisioned for modeling shock-sensitive hardware susceptible to intense shock environments positioned near shock separation devices in modern space vehicles and systems.
A Mixed-Fidelity Approach for Design of Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Li, Wu; Shields, Elwood; Geiselhart, Karl A.
2010-01-01
This paper documents a mixed-fidelity approach to the design of low-boom supersonic aircraft as a viable path to a practical low-boom configuration. A low-boom configuration that is based on low-fidelity analysis is used as the baseline. Tail lift is included to help tailor the aft portion of the ground signature. A comparison of low- and high-fidelity analysis results demonstrates the necessity of using computational fluid dynamics (CFD) analysis in a low-boom supersonic configuration design process. The fuselage shape is modified iteratively to obtain a configuration with a CFD equivalent-area distribution that matches a predetermined low-boom target distribution. The mixed-fidelity approach can easily refine the low-fidelity low-boom baseline into a low-boom configuration with the use of CFD equivalent-area analysis. The ground signature of the final configuration is calculated by using a state-of-the-art CFD-based boom analysis method that generates accurate midfield pressure distributions for propagation to the ground with ray tracing. The ground signature that is propagated from a midfield pressure distribution has a shaped ramp front, which is similar to the ground signature that is propagated from the CFD equivalent-area distribution. This result confirms the validity of the low-boom supersonic configuration design by matching a low-boom equivalent-area target, which is easier to accomplish than matching a low-boom midfield pressure target.
Universal non-adiabatic geometric manipulation of pseudo-spin charge qubits
NASA Astrophysics Data System (ADS)
Azimi Mousolou, Vahid
2017-01-01
Reliable quantum information processing requires high-fidelity universal manipulation of quantum systems within the characteristic coherence times. Non-adiabatic holonomic quantum computation offers a promising approach to implement fast, universal, and robust quantum logic gates, particularly useful in nano-fabricated solid-state architectures, which typically have short coherence times. Here, we propose an experimentally feasible scheme to realize high-speed universal geometric quantum gates in nano-engineered pseudo-spin charge qubits. We use a system of three coupled quantum dots containing a single electron, where two computational states of a double quantum dot charge qubit interact through an intermediate quantum dot. The additional degree of freedom introduced into the qubit makes it possible to create a geometric model system, which allows robust and efficient single-qubit rotations through careful control of the inter-dot tunneling parameters. We demonstrate that a capacitive coupling between two charge qubits permits a family of non-adiabatic holonomic controlled two-qubit entangling gates, and thus provides a promising procedure to maintain entanglement in charge qubits and a pathway toward fault-tolerant universal quantum computation. We assess the feasibility of the proposed structure by analyzing the gate fidelities.
NASA Astrophysics Data System (ADS)
Crowell, Andrew Rippetoe
This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due both to the inability to accurately test scaled vehicles in wind tunnels and to the time-intensive nature of high-fidelity computational modeling, particularly for the fluid domain, which is modeled using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, from simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long-duration fluid-thermal-structural simulations.
Engineering integrated photonics for heralded quantum gates
NASA Astrophysics Data System (ADS)
Meany, Thomas; Biggerstaff, Devon N.; Broome, Matthew A.; Fedrizzi, Alessandro; Delanty, Michael; Steel, M. J.; Gilchrist, Alexei; Marshall, Graham D.; White, Andrew G.; Withford, Michael J.
2016-06-01
Scaling up linear-optics quantum computing will require multi-photon gates which are compact, phase-stable, exhibit excellent quantum interference, and have success heralded by the detection of ancillary photons. We investigate the design, fabrication and characterisation of the optimal known gate scheme which meets these requirements: the Knill controlled-Z gate, implemented in integrated laser-written waveguide arrays. We show device performance to be less sensitive to phase variations in the circuit than to small deviations in the coupler reflectivity, which are expected given the tolerance values of the fabrication method. The mode fidelity is also shown to be less sensitive to reflectivity and phase errors than the process fidelity. Our best device achieves a fidelity of 0.931 ± 0.001 with the ideal 4 × 4 unitary circuit and a process fidelity of 0.680 ± 0.005 with the ideal computational-basis process.
Robustness of high-fidelity Rydberg gates with single-site addressability
NASA Astrophysics Data System (ADS)
Goerz, Michael H.; Halperin, Eli J.; Aytac, Jon M.; Koch, Christiane P.; Whaley, K. Birgitta
2014-09-01
Controlled-phase (cphase) gates can be realized with trapped neutral atoms by making use of the Rydberg blockade. Achieving the ultrahigh fidelities required for quantum computation with such Rydberg gates, however, is compromised by experimental inaccuracies in pulse amplitudes and timings, as well as by stray fields that cause fluctuations of the Rydberg levels. We report here a comparative study of analytic and numerical pulse sequences for the Rydberg cphase gate that specifically examines the robustness of the gate fidelity with respect to such experimental perturbations. Analytical pulse sequences based on both simultaneous excitation and stimulated Raman adiabatic passage (STIRAP) are found to be at best moderately robust under these perturbations. In contrast, optimal control theory is seen to allow generation of numerical pulses that are inherently robust within a predefined tolerance window. The resulting numerical pulse shapes display simple modulation patterns and can be rationalized in terms of an interference between distinct two-photon Rydberg excitation pathways. Pulses of such low complexity should be experimentally feasible, allowing gate fidelities of order 99.90-99.99% to be achievable under realistic experimental conditions.
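The kind of robustness scan reported here can be mimicked with a toy two-level stand-in (not the Rydberg-blockade Hamiltonian): compute the average gate fidelity of a resonant π pulse as the pulse amplitude and a static detuning, mimicking stray-field level shifts, are perturbed.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
U_target = expm(-1j * (np.pi / 2) * X)       # ideal resonant pi pulse

def gate_fidelity(amp_err, detuning):
    """Average fidelity of a pi pulse with a relative amplitude error and a
    static detuning (both in units of the Rabi frequency)."""
    omega = 1.0 + amp_err                    # perturbed Rabi frequency
    H = 0.5 * (omega * X + detuning * Z)     # rotating-frame Hamiltonian
    U = expm(-1j * H * np.pi)                # nominal duration t = pi
    tr = np.trace(U_target.conj().T @ U)
    return (2 + abs(tr) ** 2) / 6            # average gate fidelity, d = 2

for amp_err in (0.0, 0.01):
    for det in (0.0, 0.01, 0.05):
        print(amp_err, det, round(gate_fidelity(amp_err, det), 6))
```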
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, J.; Lacava, W.; Austin, J.
2015-02-01
This work investigates the minimum level of fidelity required to accurately simulate wind turbine gearboxes using state-of-the-art design tools. Excessive model fidelity, including drivetrain complexity, gearbox complexity, excitation sources, and imperfections, significantly increases computational time but may not provide a commensurate increase in the value of the results. Essential design parameters are evaluated, including the planetary load-sharing factor, gear tooth load distribution, and sun orbit motion. Based on the sensitivity study results, recommendations for the minimum model fidelities are provided.
Resonantly driven CNOT gate for electron spins.
Zajac, D M; Sigillito, A J; Russ, M; Borjans, F; Taylor, J M; Burkard, G; Petta, J R
2018-01-26
Single-qubit rotations and two-qubit CNOT operations are crucial ingredients for universal quantum computing. Although high-fidelity single-qubit operations have been achieved using the electron spin degree of freedom, realizing a robust CNOT gate has been challenging because of rapid nuclear spin dephasing and charge noise. We demonstrate an efficient resonantly driven CNOT gate for electron spins in silicon. Our platform achieves single-qubit rotations with fidelities greater than 99%, as verified by randomized benchmarking. Gate control of the exchange coupling allows a quantum CNOT gate to be implemented with resonant driving in ~200 nanoseconds. We used the CNOT gate to generate a Bell state with 78% fidelity (corrected for errors in state preparation and measurement). Our quantum dot device architecture enables multi-qubit algorithms in silicon.
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
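The core idea, a fixed-support kernel minimizing the expected mean-square restoration error, reduces to linear least squares over an ensemble. A one-dimensional sketch with synthetic scenes follows; the paper's formulation is two-dimensional and analytic, built on an end-to-end system model rather than sample statistics.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(1)

n, n_scenes, ksize = 256, 200, 5
psf = np.array([0.15, 0.7, 0.15])            # toy image-gathering PSF

# Correlated 1-D "scenes", blurred and corrupted by sensor noise.
F = np.cumsum(rng.standard_normal((n_scenes, n)), axis=1)
G = np.array([np.convolve(f, psf, mode="same") for f in F])
G += 0.05 * rng.standard_normal(G.shape)

# Mean-square-optimal restoration kernel of fixed small support:
# minimize E||f - k * g||^2, i.e. least squares over windows of g.
pad = ksize // 2
windows = sliding_window_view(G, ksize, axis=1).reshape(-1, ksize)
targets = F[:, pad : n - pad].reshape(-1)
k, *_ = np.linalg.lstsq(windows, targets, rcond=None)
print("optimal 5-tap kernel:", np.round(k, 3))
```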
Experimental investigation of a four-qubit linear-optical quantum logic circuit
Stárek, R.; Mičuda, M.; Miková, M.; Straka, I.; Dušek, M.; Ježek, M.; Fiurášek, J.
2016-01-01
We experimentally demonstrate and characterize a four-qubit linear-optical quantum logic circuit. Our robust and versatile scheme exploits encoding of two qubits into polarization and path degrees of single photons and involves two crossed inherently stable interferometers. This approach allows us to design a complex quantum logic circuit that combines a genuine four-qubit C3Z gate and several two-qubit and single-qubit gates. The C3Z gate introduces a sign flip if and only if all four qubits are in the computational state |1〉. We verify high-fidelity performance of this central four-qubit gate using Hofmann bounds on quantum gate fidelity and Monte Carlo fidelity sampling. We also experimentally demonstrate that the quantum logic circuit can generate genuine multipartite entanglement and we certify the entanglement with the use of suitably tailored entanglement witnesses. PMID:27647176
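The Hofmann bound invoked above lower-bounds the process fidelity using two classical fidelities measured in complementary bases, F_process >= F_1 + F_2 - 1. A minimal sketch with illustrative numbers (not the paper's measurements):

```python
# Hofmann lower bound on quantum process fidelity from classical fidelities
# measured in two complementary bases.
def hofmann_lower_bound(f1, f2):
    return max(0.0, f1 + f2 - 1.0)

print(hofmann_lower_bound(0.95, 0.93))   # -> 0.88
```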
NASA Astrophysics Data System (ADS)
Dai, Yan-Wei; Hu, Bing-Quan; Zhao, Jian-Hui; Zhou, Huan-Qiang
2010-09-01
The ground-state fidelity per lattice site is computed for the quantum three-state Potts model in a transverse magnetic field on an infinite-size lattice in one spatial dimension in terms of the infinite matrix product state algorithm. It is found that, on the one hand, a pinch point is identified on the fidelity surface around the critical point, and on the other hand, the ground-state fidelity per lattice site exhibits bifurcations at pseudo critical points for different values of the truncation dimension, which in turn approach the critical point as the truncation dimension becomes large. This implies that the ground-state fidelity per lattice site enables us to capture spontaneous symmetry breaking when the control parameter crosses the critical value. In addition, a finite-entanglement scaling of the von Neumann entropy is performed with respect to the truncation dimension, resulting in a precise determination of the central charge at the critical point. Finally, we compute the transverse magnetization, from which the critical exponent β is extracted from the numerical data.
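For uniform matrix product states, the fidelity per lattice site follows from dominant eigenvalues of transfer matrices: f = lambda_AB / sqrt(lambda_AA * lambda_BB). A minimal sketch with random tensors standing in for the Potts ground states that the paper obtains from the iMPS algorithm:

```python
import numpy as np

def dominant_eig(E):
    return np.max(np.abs(np.linalg.eigvals(E)))

def fidelity_per_site(A, B):
    """Fidelity per site of two uniform MPS from transfer-matrix spectra."""
    def transfer(P, Q):
        # Mixed transfer matrix E = sum_s P[s] (x) conj(Q[s]).
        return sum(np.kron(P[s], Q[s].conj()) for s in range(P.shape[0]))
    lam_ab = dominant_eig(transfer(A, B))
    lam_aa = dominant_eig(transfer(A, A))
    lam_bb = dominant_eig(transfer(B, B))
    return lam_ab / np.sqrt(lam_aa * lam_bb)

rng = np.random.default_rng(2)
d_phys, D = 3, 8                     # three physical states (Potts), bond dim 8
A = rng.standard_normal((d_phys, D, D))
B = A + 1e-3 * rng.standard_normal((d_phys, D, D))   # nearby state
print(fidelity_per_site(A, B))       # close to, but below, 1
```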
Initial Development of a Quadcopter Simulation Environment for Auralization
NASA Technical Reports Server (NTRS)
Christian, Andrew; Lawrence, Joseph
2016-01-01
This paper describes a recently created computer simulation of quadcopter flight dynamics for the NASA DELIVER project. The goal of this effort is to produce a simulation that includes a number of physical effects that are not usually found in other dynamics simulations (e.g., those used for flight controller development). These effects will be shown to have a significant impact on the fidelity of auralizations - entirely synthetic time-domain predictions of sound - based on this simulation when compared to a recording. High-fidelity auralizations are an important precursor to human subject tests that seek to understand the impact of vehicle configurations on noise and annoyance.
Realization of reliable solid-state quantum memory for photonic polarization qubit.
Zhou, Zong-Quan; Lin, Wei-Bin; Yang, Ming; Li, Chuan-Feng; Guo, Guang-Can
2012-05-11
Faithfully storing an unknown quantum light state is essential to advanced quantum communication and distributed quantum computation applications. The required quantum memory must have high fidelity to improve the performance of a quantum network. Here we report the reversible transfer of photonic polarization states into collective atomic excitation in a compact solid-state device. The quantum memory is based on an atomic frequency comb (AFC) in rare-earth ion-doped crystals. We obtain up to 0.999 process fidelity for the storage and retrieval process of single-photon-level coherent pulse. This reliable quantum memory is a crucial step toward quantum networks based on solid-state devices.
Wang, Carolyn L; Chinnugounder, Sankar; Hippe, Daniel S; Zaidi, Sadaf; O'Malley, Ryan B; Bhargava, Puneet; Bush, William H
2017-01-01
The purpose of this study was to assess the performance of interprofessional teams of radiologists, technologists, and nurses trained with high-fidelity hands-on (HO) simulation and computer-based (CB) simulation training for contrast reaction management (CR) and teamwork skills (TS). Nurses, technologists, and radiology residents were randomized into 11 teams of three (one of each). Six teams underwent HO training and five underwent CB training for CR and TS. Participants took written tests before and after training and were further tested using a high-fidelity simulation scenario. HO and CB groups scored similarly on all written tests and each showed improvement after training (P = .002 and P = .018, respectively). During the final scenario test, HO teams tended to receive higher grades than CB teams on CR (95% versus 81%, P = .17) and made fewer errors in epinephrine administration (0/6 versus 2/5, P = .18). HO and CB teams scored similarly on TS (51% versus 52%, P = .66), but overall scores were lower for TS than for CR skills in both the HO (P = .03) and CB teams (P = .06). HO training was more highly rated than CB as an effective educational tool (P = .01) and for effectiveness at teaching CR and team communication skills (P = .02). High-fidelity simulation can be used to both train and test interprofessional teams of radiologists, technologists, and nurses for both CR and TS and is more highly rated as an effective educational tool by participants than similar CB training. However, a single session of either type of training may be inadequate for mastering TS.
Detailed description of the HP-9825A HFRMP trajectory processor (TRAJ)
NASA Technical Reports Server (NTRS)
Kindall, S. M.; Wilson, S. W.
1979-01-01
The computer code for the trajectory processor of the HP-9825A High Fidelity Relative Motion Program is described in detail. The processor is a 12-degree-of-freedom trajectory integrator which can be used to generate digital and graphical data describing the relative motion of the Space Shuttle Orbiter and a free-flying cylindrical payload. Coding standards and flow charts are given and the computational logic is discussed.
Gradient ascent pulse engineering approach to CNOT gates in donor electron spin quantum computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, D.-B.; Goan, H.-S.
2008-11-07
In this paper, we demonstrate how gradient ascent pulse engineering (GRAPE) optimal control methods can be implemented on donor electron spin qubits in semiconductors with an architecture complementary to the original Kane's proposal. We focus on the high fidelity controlled-NOT (CNOT) gate and we explicitly find the digitized control sequences for a controlled-NOT gate by optimizing its fidelity using the effective, reduced donor electron spin Hamiltonian with external controls over the hyperfine A and exchange J interactions. We then simulate the CNOT-gate sequence with the full spin Hamiltonian and find that it has an error of 10^-6 that is below the error threshold of 10^-4 required for fault-tolerant quantum computation. Also, the CNOT gate operation time of 100 ns is 3 times faster than the 297 ns of the proposed global control scheme.
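The structure of a GRAPE-style optimization is compact: piecewise-constant controls, a propagator product, and gradient ascent on the gate fidelity. The toy below targets a CNOT with a fixed ZZ drift and local X/Y controls, an illustrative model rather than the donor-spin Hamiltonian, and uses finite-difference gradients for brevity where GRAPE proper uses analytic ones.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

H0 = 0.5 * np.kron(Z, Z)                                  # fixed coupling, J = 1
Hc = [np.kron(X, I2), np.kron(Y, I2), np.kron(I2, X), np.kron(I2, Y)]
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
N, dt = 16, 0.5                                           # control slices

def fidelity(u):                                          # u has shape (N, 4)
    U = np.eye(4, dtype=complex)
    for n in range(N):
        H = H0 + sum(u[n, k] * Hc[k] for k in range(4))
        U = expm(-1j * dt * H) @ U
    return abs(np.trace(CNOT.conj().T @ U)) ** 2 / 16

rng = np.random.default_rng(3)
u = 0.1 * rng.standard_normal((N, 4))
eps, lr = 1e-6, 1.5
for it in range(200):                                     # plain gradient ascent
    grad = np.zeros_like(u)
    for idx in np.ndindex(*u.shape):                      # finite differences
        du = np.zeros_like(u); du[idx] = eps
        grad[idx] = (fidelity(u + du) - fidelity(u - du)) / (2 * eps)
    u += lr * grad
    if it % 40 == 0:
        print(it, round(fidelity(u), 6))
```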
Real-Time Hardware-in-the-Loop Simulation of Ares I Launch Vehicle
NASA Technical Reports Server (NTRS)
Tobbe, Patrick; Matras, Alex; Walker, David; Wilson, Heath; Fulton, Chris; Alday, Nathan; Betts, Kevin; Hughes, Ryan; Turbe, Michael
2009-01-01
The Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) has been developed for use by the Ares I launch vehicle System Integration Laboratory at the Marshall Space Flight Center. The primary purpose of the Ares System Integration Laboratory is to test the vehicle avionics hardware and software in a hardware-in-the-loop environment to certify that the integrated system is prepared for flight. ARTEMIS has been designed to be the real-time simulation backbone to stimulate all required Ares components for verification testing. ARTEMIS provides high-fidelity dynamics, actuator, and sensor models to simulate an accurate flight trajectory in order to ensure realistic test conditions. ARTEMIS has been designed to take advantage of the advances in underlying computational power now available to support hardware-in-the-loop testing to achieve real-time simulation with unprecedented model fidelity. A modular real-time design relying on a fully distributed computing architecture has been implemented.
Evolving Storage and Cyber Infrastructure at the NASA Center for Climate Simulation
NASA Technical Reports Server (NTRS)
Salmon, Ellen; Duffy, Daniel; Spear, Carrie; Sinno, Scott; Vaughan, Garrison; Bowen, Michael
2018-01-01
This talk will describe recent developments at the NASA Center for Climate Simulation, which is funded by NASA's Science Mission Directorate, and supports the specialized data storage and computational needs of weather, ocean, and climate researchers, as well as astrophysicists, heliophysicists, and planetary scientists. To meet requirements for higher-resolution, higher-fidelity simulations, the NCCS augments its High Performance Computing (HPC) and storage and retrieval environment. As the petabytes of model and observational data grow, the NCCS is broadening its data services offerings and deploying and expanding virtualization resources for high performance analytics.
An ultra-low-cost moving-base driving simulator
DOT National Transportation Integrated Search
2001-11-04
A novel approach to driving simulation is described, one that potentially overcomes the limitations of both motion fidelity and cost. It has become feasible only because of recent advances in computer-based image generation speed and fidelity and in ...
Grady, Janet L; Kehrer, Rosemary G; Trusty, Carole E; Entin, Eileen B; Entin, Elliot E; Brunye, Tad T
2008-09-01
Simulation technologies are gaining widespread acceptance across a variety of educational domains and applications. The current research examines whether basic nursing procedure training with high-fidelity versus low-fidelity mannequins results in differential skill acquisition and perceptions of simulator utility. Fifty-two first-year students were taught nasogastric tube and indwelling urinary catheter insertion in one of two ways. The first group learned nasogastric tube and urinary catheter insertion using high-fidelity and low-fidelity mannequins, respectively, and the second group learned nasogastric tube and urinary catheter insertion using low-fidelity and high-fidelity mannequins, respectively. The dependent measures included student performance on nasogastric tube and urinary catheter insertion testing, as measured by observer-based instruments, and self-report questionnaires probing student attitudes about the use of simulation in nursing education. Results demonstrated higher performance with high-fidelity than with low-fidelity mannequin training. In response to a self-report posttraining questionnaire, participants expressed a more positive attitude toward the high-fidelity mannequin, especially regarding its responsiveness and realism.
Evaluating display fidelity and interaction fidelity in a virtual reality game.
McMahan, Ryan P; Bowman, Doug A; Zielinski, David J; Brady, Rachael B
2012-04-01
In recent years, consumers have witnessed a technological revolution that has delivered more-realistic experiences in their own homes through high-definition, stereoscopic televisions and natural, gesture-based video game consoles. Although these experiences are more realistic, offering higher levels of fidelity, it is not clear how the increased display and interaction aspects of fidelity impact the user experience. Since immersive virtual reality (VR) allows us to achieve very high levels of fidelity, we designed and conducted a study that used a six-sided CAVE to evaluate display fidelity and interaction fidelity independently, at extremely high and low levels, for a VR first-person shooter (FPS) game. Our goal was to gain a better understanding of the effects of fidelity on the user in a complex, performance-intensive context. The results of our study indicate that both display and interaction fidelity significantly affect strategy and performance, as well as subjective judgments of presence, engagement, and usability. In particular, performance results were strongly in favor of two conditions: low-display, low-interaction fidelity (representative of traditional FPS games) and high-display, high-interaction fidelity (similar to the real world).
Computer Simulation Performed for Columbia Project Cooling System
NASA Technical Reports Server (NTRS)
Ahmad, Jasim
2005-01-01
This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia system (10,240 Intel Itanium processors). The simulation assessed the performance of the cooling system, identified deficiencies, and recommended modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room, and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can be easily extended to provide a general capability for air flow analyses of any modern computer room.
Compressed Sensing Quantum Process Tomography for Superconducting Quantum Gates
NASA Astrophysics Data System (ADS)
Rodionov, Andrey
An important challenge in quantum information science and quantum computing is the experimental realization of high-fidelity quantum operations on multi-qubit systems. Quantum process tomography (QPT) is a procedure devised to fully characterize a quantum operation. We first present the results of the estimation of the process matrix for superconducting multi-qubit quantum gates using the full data set employing various methods: linear inversion, maximum likelihood, and least-squares. To alleviate the problem of exponential resource scaling needed to characterize a multi-qubit system, we next investigate a compressed sensing (CS) method for QPT of two-qubit and three-qubit quantum gates. Using experimental data for two-qubit controlled-Z gates, taken with both Xmon and superconducting phase qubits, we obtain estimates for the process matrices with reasonably high fidelities compared to full QPT, despite using significantly reduced sets of initial states and measurement configurations. We show that the CS method still works when the amount of data is so small that the standard QPT would have an underdetermined system of equations. We also apply the CS method to the analysis of the three-qubit Toffoli gate with simulated noise, and similarly show that the method works well for a substantially reduced set of data. For the CS calculations we use two different bases in which the process matrix is approximately sparse (the Pauli-error basis and the singular value decomposition basis), and show that the resulting estimates of the process matrices match with reasonably high fidelity. For both two-qubit and three-qubit gates, we characterize the quantum process by its process matrix and average state fidelity, as well as by the corresponding standard deviation defined via the variation of the state fidelity for different initial states. We calculate the standard deviation of the average state fidelity both analytically and numerically, using a Monte Carlo method. Overall, we show that CS QPT offers a significant reduction in the needed amount of experimental data for two-qubit and three-qubit quantum gates.
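The essence of compressed-sensing recovery is an underdetermined linear system solved under a sparsity prior. A minimal sketch using ISTA (iterative shrinkage-thresholding) on synthetic data; in CS QPT the unknowns would be process-matrix elements in a basis where the matrix is approximately sparse, such as the Pauli-error basis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Underdetermined measurements y = A x of a sparse vector x.
n, m, s = 256, 60, 5                   # unknowns, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
lam = 1e-3
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - A.T @ (A @ x - y) / L      # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```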
NASA Technical Reports Server (NTRS)
Turner, Mark G.; Reed, John A.; Ryder, Robert; Veres, Joseph P.
2004-01-01
A Zero-D cycle simulation of the GE90-94B high bypass turbofan engine has been achieved utilizing mini-maps generated from a high-fidelity simulation. The simulation utilizes the Numerical Propulsion System Simulation (NPSS) thermodynamic cycle modeling system coupled to a high-fidelity full-engine model represented by a set of coupled 3D computational fluid dynamic (CFD) component models. Boundary conditions from the balanced, steady state cycle model are used to define component boundary conditions in the full-engine model. Operating characteristics of the 3D component models are integrated into the cycle model via partial performance maps generated from the CFD flow solutions using one-dimensional mean line turbomachinery programs. This paper highlights the generation of the high-pressure compressor, booster, and fan partial performance maps, as well as turbine maps for the high pressure and low pressure turbine. These are actually "mini-maps" in the sense that they are developed only for a narrow operating range of the component. Results are compared between actual cycle data at a take-off condition and the comparable condition utilizing these mini-maps. The mini-maps are also presented with comparison to actual component data where possible.
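A mini-map amounts to component performance tabulated over a narrow operating window and queried by the cycle solver at its balance point. A minimal sketch with fabricated numbers (not GE90 data):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# A compressor "mini-map": efficiency over a narrow window of corrected
# speed and pressure ratio (all values illustrative only).
speed = np.linspace(0.95, 1.05, 5)         # corrected speed fraction
pr = np.linspace(38.0, 42.0, 5)            # pressure ratio
S, P = np.meshgrid(speed, pr, indexing="ij")
eta = 0.88 - 2.0 * (S - 1.0) ** 2 - 0.001 * (P - 40.0) ** 2

mini_map = RegularGridInterpolator((speed, pr), eta, method="linear")

# The cycle model queries the map at its current balance point.
print(mini_map([[1.01, 40.5]]))
```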
Finite element analysis of constrained total Condylar Knee Prosthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-07-13
Exactech, Inc., is a prosthetic joint manufacturer based in Gainesville, FL. The company set the goal of developing a highly effective prosthetic articulation, based on scientific principles, not trial and error. They developed an evolutionary design for a total knee arthroplasty system that promised improved performance. They performed static load tests in the laboratory with similar previous designs, but dynamic laboratory testing was both difficult to perform and prohibitively expensive for a small business to undertake. Laboratory testing also cannot measure stress levels in the interior of the prosthesis where failures are known to initiate. To fully optimize their designs for knee arthroplasty revisions, they needed range-of-motion stress/strain data at interior as well as exterior locations within the prosthesis. LLNL developed computer software (especially NIKE3D) specifically designed to perform stress/strain computations (finite element analysis) for complex geometries in large displacement/large deformation conditions. Additionally, LLNL had developed a high fidelity knee model for other analytical purposes. The analysis desired by Exactech could readily be performed using NIKE3D and a modified version of the high fidelity knee that contained the geometry of the condylar knee components. The LLNL high fidelity knee model was a finite element computer model which would not be transferred to Exactech during the course of this CRADA effort. The previously performed laboratory studies by Exactech were beneficial to LLNL in verifying the analytical capabilities of NIKE3D for human anatomical modeling. This, in turn, gave LLNL further entree to perform work-for-others in the prosthetics field. There were two purposes to the CRADA: (1) to modify the LLNL High Fidelity Knee Model to accept the geometry of the Exactech Total Knee; and (2) to perform parametric studies of the possible design options in appropriate ranges of motion so that an optimum design could be selected for production. Because of unanticipated delays in the CRADA funding, the knee design had to be finalized before the analysis could be accomplished. Thus, the scope of work was modified by the industrial partner. It was decided that it would be most beneficial to perform FEA that would closely replicate the lab tests that had been done as the basis of the design. Exactech was responsible for transmitting the component geometries to Livermore, as well as providing complete data from the quasi-static laboratory loading tests that were performed on various designs. LLNL was responsible for defining the basic finite element mesh and carrying out the analysis. We performed the initial computer simulation and verified model integrity, using the laboratory data. After performing the parametric studies, the results were reviewed with Exactech. Also, the results were presented at the Orthopedic Research Society meeting in a poster session.
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using the evolutionary algorithm technique of fitness-driven retention. This strategy capitalizes on the advantages of evolutionary algorithms as well as POD-based reduced order modeling, while overcoming the shortcomings inherent in these techniques. When linked with M3DOE, this strategy offers a computationally efficient methodology for problems with a high level of complexity and a challenging design-space. This newly developed framework is demonstrated for its robustness on a nonconventional supersonic tailless air vehicle wing shape optimization problem.
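The POD step itself is a thin SVD of a snapshot matrix. A minimal sketch with synthetic snapshots; the dimensions and the 99% energy criterion are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Snapshot matrix: each column is one candidate configuration's solution vector.
n_dof, n_snap = 500, 30
modes_true = rng.standard_normal((n_dof, 3))       # hidden low-rank structure
snapshots = modes_true @ rng.standard_normal((3, n_snap))
snapshots += 0.01 * rng.standard_normal((n_dof, n_snap))

# POD: dominant left singular vectors of the snapshot ensemble.
U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(svals ** 2) / np.sum(svals ** 2)
r = int(np.searchsorted(energy, 0.99)) + 1         # modes capturing 99% energy
basis = U[:, :r]

# A new candidate is now represented by r coefficients instead of n_dof values.
new_design = modes_true @ rng.standard_normal(3)
coeffs = basis.T @ new_design
err = np.linalg.norm(basis @ coeffs - new_design) / np.linalg.norm(new_design)
print(f"kept {r} modes; reconstruction error = {err:.2e}")
```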
Shuaib, Aban; Hartwell, Adam; Kiss-Toth, Endre; Holcombe, Mike
2016-01-01
Signal transduction through the Mitogen Activated Protein Kinase (MAPK) pathways is evolutionarily highly conserved. Many cells use these pathways to interpret changes to their environment and respond accordingly. The pathways are central to triggering diverse cellular responses such as survival, apoptosis, differentiation and proliferation. Though the interactions between the different MAPK pathways are complex, they nevertheless maintain a high level of fidelity and specificity to the original signal. There are numerous theories explaining how fidelity and specificity arise within this complex context; spatio-temporal regulation of the pathways and feedback loops are thought to be very important. This paper presents an agent-based computational model addressing multi-compartmentalisation and how this influences the dynamics of MAPK cascade activation. The model suggests that multi-compartmentalisation coupled with periodic MAPK kinase (MAPKK) activation may be critical factors for the emergence of oscillation and ultrasensitivity in the system. Finally, the model also establishes a link between the spatial arrangements of the cascade components and temporal activation mechanisms, and how both contribute to fidelity and specificity of MAPK mediated signalling. PMID:27243235
Efficient experimental design of high-fidelity three-qubit quantum gates via genetic programming
NASA Astrophysics Data System (ADS)
Devra, Amit; Prabhu, Prithviraj; Singh, Harpreet; Arvind; Dorai, Kavita
2018-03-01
We have designed efficient quantum circuits for the three-qubit Toffoli (controlled-controlled-NOT) and the Fredkin (controlled-SWAP) gate, optimized via genetic programming methods. The gates thus obtained were experimentally implemented on a three-qubit NMR quantum information processor, with a high fidelity. Toffoli and Fredkin gates in conjunction with the single-qubit Hadamard gates form a universal gate set for quantum computing and are an essential component of several quantum algorithms. Genetic algorithms are stochastic search algorithms based on the logic of natural selection and biological genetics and have been widely used for quantum information processing applications. We devised a new selection mechanism within the genetic algorithm framework to select individuals from a population. We call this mechanism the "Luck-Choose" mechanism and were able to achieve faster convergence to a solution using this mechanism, as compared to existing selection mechanisms. The optimization was performed under the constraint that the experimentally implemented pulses are of short duration and can be implemented with high fidelity. We demonstrate the advantage of our pulse sequences by comparing our results with existing experimental schemes and other numerical optimization methods.
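The skeleton of such a genetic search is short. The sketch below uses generic tournament selection and one-point crossover on a toy fitness standing in for a gate-fidelity evaluation; it does not reproduce the paper's "Luck-Choose" selection mechanism, which is their own contribution.

```python
import numpy as np

rng = np.random.default_rng(7)

def fitness(genome):
    # Toy stand-in for evaluating the fidelity of a pulse sequence.
    return -np.sum((genome - 0.3) ** 2)

def tournament_select(pop, fit, k=3):
    idx = rng.choice(len(pop), size=k, replace=False)
    return pop[idx[np.argmax(fit[idx])]]

pop = rng.uniform(-1, 1, size=(40, 8))         # 40 candidate "sequences"
for gen in range(60):
    fit = np.array([fitness(g) for g in pop])
    children = []
    for _ in range(len(pop)):
        p1 = tournament_select(pop, fit)
        p2 = tournament_select(pop, fit)
        cut = rng.integers(1, pop.shape[1])    # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        child += 0.05 * rng.standard_normal(child.shape)   # mutation
        children.append(child)
    pop = np.array(children)

print("best fitness:", max(fitness(g) for g in pop))
```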
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hori, T.; Hirahara, K.; Hashimoto, C.; Hori, M.
2016-12-01
Estimation of the coseismic/postseismic slip using postseismic deformation observation data is an important topic in the field of geodetic inversion. Estimation methods for this purpose are expected to be improved by introducing numerical simulation tools (e.g., the finite element (FE) method) of viscoelastic deformation, in which the computation model is of high fidelity to the available high-resolution crustal data. The authors have proposed a large-scale simulation method using such FE high-fidelity models (HFM), assuming use of a large-scale computation environment such as the K computer in Japan (Ichimura et al. 2016). On the other hand, the values of viscosity in the heterogeneous viscoelastic structure in the high-fidelity model are not trivially determined. In this study, we developed an adjoint-based optimization method incorporating HFM, in which fault slip and asthenosphere viscosity are simultaneously estimated. We carried out numerical experiments using synthetic crustal deformation data. We constructed an HFM in the domain of 2048x1536x850 km, which includes the Tohoku region in northeast Japan, based on Ichimura et al. (2013). We used the model geometry data set of JTOPO30 (2003), Koketsu et al. (2008) and the CAMP standard model (Hashimoto et al. 2004). The geometry of crustal structures in the HFM is at 1 km resolution, resulting in 36 billion degrees-of-freedom. Synthetic crustal deformation data due to prescribed coseismic slip and afterslips at the locations of GEONET, GPS/A observation points, and S-net are used. The target inverse analysis is formulated as minimization of the L2 norm of the difference between the FE simulation results and the observation data with respect to viscosity and fault slip, combining the quasi-Newton algorithm with the adjoint method. Use of this combination decreases the necessary number of forward analyses in the optimization calculation. As a result, we are now able to finish the estimation using 2560 computer nodes of the K computer in less than 17 hours. Thus, the target inverse analysis is completed in a realistic time because of the combination of the fast solver and the adjoint method. In the future, we would like to apply the method to actual data.
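The quasi-Newton-plus-adjoint combination can be sketched with a toy linear forward model, for which the adjoint-state gradient collapses to a single transpose product; with an FE forward solver it would cost one extra adjoint solve per iteration. All dimensions and data below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Toy linear forward model: predicted surface displacements d = A @ m,
# where m stacks fault-slip and viscosity-like parameters.
n_obs, n_par = 120, 20
A = rng.standard_normal((n_obs, n_par))
m_true = rng.standard_normal(n_par)
d_obs = A @ m_true + 0.01 * rng.standard_normal(n_obs)

def misfit(m):                        # L2 norm of the data residual
    r = A @ m - d_obs
    return 0.5 * r @ r

def adjoint_gradient(m):
    # For a linear model the adjoint gradient is just A^T r.
    return A.T @ (A @ m - d_obs)

res = minimize(misfit, np.zeros(n_par), jac=adjoint_gradient, method="L-BFGS-B")
print("converged:", res.success, "| residual:", round(res.fun, 6))
```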
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aly, A.; Avramova, Maria; Ivanov, Kostadin
To correctly describe and predict this hydrogen distribution, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code as well as with a computational fluid dynamics (CFD) tool have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.
Analysis of a Hovering Rotor in Icing Conditions
NASA Technical Reports Server (NTRS)
Narducci, Robert; Kreeger, Richard E.
2012-01-01
A high fidelity analysis method is proposed to evaluate the ice accumulation and the ensuing rotor performance degradation for a helicopter flying through an icing cloud. The process uses computational fluid dynamics (CFD) coupled to a rotorcraft comprehensive code to establish the aerodynamic environment of a trimmed rotor prior to icing. Based on local aerodynamic conditions along the rotor span and accounting for the azimuthal variation, an ice accumulation analysis using NASA's Lewice3D code is made to establish the ice geometry. Degraded rotor performance is quantified by repeating the high fidelity rotor analysis with updates which account for ice shape and mass. The process is applied on a full-scale UH-1H helicopter in hover using data recorded during the Helicopter Icing Flight Test Program.
Waste IPSC : Thermal-Hydrologic-Chemical-Mechanical (THCM) modeling and simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeze, Geoffrey A.; Wang, Yifeng; Arguello, Jose Guadalupe, Jr.
2010-10-01
The Waste IPSC objective is to develop an integrated suite of high performance computing capabilities to simulate radionuclide movement through the engineered components and geosphere of a radioactive waste storage or disposal system: (1) with robust thermal-hydrologic-chemical-mechanical (THCM) coupling; (2) for a range of disposal system alternatives (concepts, waste form types, engineered designs, geologic settings); (3) for long time scales and associated large uncertainties; (4) at multiple model fidelities (sub-continuum, high-fidelity continuum, PA); and (5) in accordance with V&V and software quality requirements. THCM Modeling collaborates with: (1) other Waste IPSC activities: Sub-Continuum Processes (and FMM), Frameworks and Infrastructure (and VU, ECT, and CT); (2) the Waste Form Campaign; (3) the Used Fuel Disposition (UFD) Campaign; and (4) ASCEM.
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
Drach, Andrew; Khalighi, Amir H; Sacks, Michael S
2018-02-01
Multiple studies have demonstrated that the pathological geometries unique to each patient can affect the durability of mitral valve (MV) repairs. While computational modeling of the MV is a promising approach to improve the surgical outcomes, the complex MV geometry precludes use of simplified models. Moreover, the lack of complete in vivo geometric information presents significant challenges in the development of patient-specific computational models. There is thus a need to determine the level of detail necessary for predictive MV models. To address this issue, we have developed a novel pipeline for building attribute-rich computational models of MV with varying fidelity directly from the in vitro imaging data. The approach combines high-resolution geometric information from loaded and unloaded states to achieve a high level of anatomic detail, followed by mapping and parametric embedding of tissue attributes to build high-resolution, attribute-rich computational models. Subsequent lower resolution models were then developed and evaluated by comparing the displacements and surface strains to those extracted from the imaging data. We then identified the critical levels of fidelity for building predictive MV models in the dilated and repaired states. We demonstrated that a model with a feature size of about 5 mm and mesh size of about 1 mm was sufficient to predict the overall MV shape, stress, and strain distributions with high accuracy. However, we also noted that more detailed models were found to be needed to simulate microstructural events. We conclude that the developed pipeline enables sufficiently complex models for biomechanical simulations of MV in normal, dilated, repaired states.
Aerodynamic optimization of aircraft wings using a coupled VLM-2.5D RANS approach
NASA Astrophysics Data System (ADS)
Parenteau, Matthieu
The design process of transonic civil aircraft is complex and requires strong governance to manage the various program development phases. There is a need in the community for numerical models in all disciplines that span the conceptual, preliminary and detail design phases in a seamless fashion, so that choices made in each phase remain consistent with each other. The objective of this work is to develop an aerodynamic model suitable for conceptual multidisciplinary design optimization with low computational cost and sufficient fidelity to explore a large design space in the transonic and high-lift regimes. The physics-based reduced-order model is based on the inviscid Vortex Lattice Method (VLM), selected for its low computation time. Viscous effects are modeled with two-dimensional high-fidelity RANS calculations at various sections along the span and incorporated as an angle of attack correction inside the VLM. The viscous sectional data are calculated with infinite swept wing conditions to allow viscous crossflow effects to be included for a more accurate maximum lift coefficient and spanload evaluation. These viscous corrections are coupled through a modified alpha coupling method for 2.5D RANS sectional data, stabilized in the post-stall region with artificial dissipation. The fidelity of the method is verified against 3D RANS flow solver solutions on the Bombardier Research Wing (BRW). Clean and high-lift configurations are investigated. The overall results show impressive precision of the VLM/2.5D RANS approach compared to 3D RANS solutions, with compute times on the order of seconds on a standard desktop computer. Finally, the aerodynamic solver is implemented in an optimization framework with a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer to explore the design space of aerodynamic wing planforms. Single-objective low-speed and high-speed optimizations are performed, along with composite-objective optimizations that combine low-speed and high-speed performance, including high-lift configurations. Moreover, the VLM/2.5D approach is capable of capturing stall-cell phenomena, and this characteristic is used to define a new spanwise stall criterion to be introduced as an optimization constraint. The work concludes on the limitations of the method and possible avenues for further research.
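The viscous alpha coupling can be conveyed with a drastically simplified, single-section stand-in: the inviscid model supplies an induced angle of attack (an elliptic-wing formula here in place of the full VLM), the viscous polar supplies the sectional lift, and under-relaxation stabilizes the fixed point. The polar and all numbers are fabricated for illustration.

```python
import numpy as np

def cl_viscous(alpha_eff):
    # Stand-in for 2.5D RANS sectional data: a crude polar that stalls.
    return 1.1 * np.pi * np.sin(2 * alpha_eff)

AR = 8.0                                  # aspect ratio
alpha_geo = np.deg2rad(10.0)              # geometric angle of attack

CL, relax = 0.0, 0.3
for it in range(100):
    alpha_i = CL / (np.pi * AR)           # induced angle (elliptic wing)
    cl_new = cl_viscous(alpha_geo - alpha_i)
    CL += relax * (cl_new - CL)           # under-relaxed alpha coupling

print("converged CL:", round(CL, 4))
```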
Parallel computing in enterprise modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Gonzalez-Cota, Alan; Chiravuri, Srinivas; Stansfield, R Brent; Brummett, Chad M; Hamstra, Stanley J
2013-01-01
The purpose of this study was to determine whether high-fidelity simulators provide greater benefit than low-fidelity models in training fluoroscopy-guided transforaminal epidural injection. This educational study was a single-center, prospective, randomized 3-arm pretest-posttest design with a control arm. Eighteen anesthesia and physical medicine and rehabilitation residents were instructed how to perform a fluoroscopy-guided transforaminal epidural injection and assessed by experts on a reusable injectable phantom cadaver. The high- and low-fidelity groups received 30 minutes of supervised hands-on practice according to group assignment, and the control group received 30 minutes of didactic instruction from an expert. We found no differences at posttest between the high- and low-fidelity groups on global ratings of performance (P = 0.17) or checklist scores (P = 0.81). Participants who received either form of hands-on training significantly outperformed the control group on both the global rating of performance (control vs low-fidelity, P = 0.0048; control vs high-fidelity, P = 0.0047) and the checklist (control vs low-fidelity, P = 0.0047; control vs high-fidelity, P = 0.0047). Training an epidural procedure using a low-fidelity model may be equally effective as training on a high-fidelity model. These results are consistent with previous research on a variety of interventional procedures and further demonstrate the potential impact of simple, low-fidelity training models.
Validation of High-Fidelity CFD Simulations for Rocket Injector Design
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Menon, Suresh; Merkle, Charles L.; Oefelein, Joseph C.; Yang, Vigor
2008-01-01
Computational fluid dynamics (CFD) has the potential to improve the historical rocket injector design process by evaluating the sensitivity of performance and injector-driven thermal environments to the details of the injector geometry and key operational parameters. Methodical verification and validation efforts on a range of coaxial injector elements have shown the current production CFD capability must be improved in order to quantitatively impact the injector design process. This paper documents the status of a focused effort to compare and understand the predictive capabilities and computational requirements of a range of CFD methodologies on a set of single element injector model problems. The steady Reynolds-Average Navier-Stokes (RANS), unsteady Reynolds-Average Navier-Stokes (URANS) and three different approaches using the Large Eddy Simulation (LES) technique were used to simulate the initial model problem, a single element coaxial injector using gaseous oxygen and gaseous hydrogen propellants. While one high-fidelity LES result matches the experimental combustion chamber wall heat flux very well, there is no monotonic convergence to the data with increasing computational tool fidelity. Systematic evaluation of key flow field regions such as the flame zone, the head end recirculation zone and the downstream near wall zone has shed significant, though as of yet incomplete, light on the complex, underlying causes for the performance level of each technique.
High Performance Computing for Modeling Wind Farms and Their Impact
NASA Astrophysics Data System (ADS)
Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.
2016-12-01
As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental-scale flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller-scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development, as well as improvements in wind plant performance and enhancements to the transmission infrastructure, will also be discussed.
Argonne News Brief: Making Sense of Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The Argonne Leadership Computing Facility at Argonne National Laboratory helped Joe Nichols, of the University of Minnesota, to create high fidelity simulations of jet turbulence to determine how and where noise is produced. The results may lead to novel engineering designs that reduce noise over commercial flight paths and on aircraft carrier decks.
ERIC Educational Resources Information Center
Shegog, Ross; Lazarus, Melanie M.; Murray, Nancy G.; Diamond, Pamela M.; Sessions, Nathalie; Zsigmond, Eva
2012-01-01
The transgenic mouse model is useful for studying the causes and potential cures for human genetic diseases. Exposing high school biology students to laboratory experience in developing transgenic animal models is logistically prohibitive. Computer-based simulation, however, offers this potential in addition to advantages of fidelity and reach.…
The Impact of Different Sources of Fluctuations on Mutual Information in Biochemical Networks
Chevalier, Michael; Venturelli, Ophelia; El-Samad, Hana
2015-01-01
Stochastic fluctuations in signaling and gene expression limit the ability of cells to sense the state of their environment, transfer this information along cellular pathways, and respond to it with high precision. Mutual information is now often used to quantify the fidelity with which information is transmitted along a cellular pathway. Mutual information calculations from experimental data have mostly generated low values, suggesting that cells might have relatively low signal transmission fidelity. In this work, we demonstrate that mutual information calculations might be artificially lowered by cell-to-cell variability in both initial conditions and slowly fluctuating global factors across the population. We carry out our analysis computationally using a simple signaling pathway and demonstrate that in the presence of slow global fluctuations, every cell might have its own high information transmission capacity but that population averaging underestimates this value. We also construct a simple synthetic transcriptional network and demonstrate using experimental measurements coupled to computational modeling that its operation is dominated by slow global variability, and hence that its mutual information is underestimated by a population averaged calculation. PMID:26484538
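The pooled-versus-conditioned contrast can be reproduced with a toy pathway whose response depends on the input through a slowly varying, cell-specific gain. A histogram mutual-information estimate then shows population averaging undershooting the per-cell value; the model and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def mutual_info(x, y, bins=20):
    """Histogram estimate of I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

# Toy pathway: y = g * x + noise, with g a slow "global" factor that is
# constant within each cell but varies across the population.
n_cells, n_per_cell = 50, 2000
x = rng.uniform(0, 1, size=(n_cells, n_per_cell))
g = rng.uniform(0.5, 1.5, size=(n_cells, 1))
y = g * x + 0.05 * rng.standard_normal(x.shape)

pooled = mutual_info(x.ravel(), y.ravel())            # population-averaged MI
per_cell = np.mean([mutual_info(x[i], y[i]) for i in range(n_cells)])
print(f"pooled MI = {pooled:.2f} bits, mean per-cell MI = {per_cell:.2f} bits")
```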
Complete tomography of a high-fidelity solid-state entangled spin-photon qubit pair.
De Greve, Kristiaan; McMahon, Peter L; Yu, Leo; Pelc, Jason S; Jones, Cody; Natarajan, Chandra M; Kim, Na Young; Abe, Eisuke; Maier, Sebastian; Schneider, Christian; Kamp, Martin; Höfling, Sven; Hadfield, Robert H; Forchel, Alfred; Fejer, M M; Yamamoto, Yoshihisa
2013-01-01
Entanglement between stationary quantum memories and photonic qubits is crucial for future quantum communication networks. Although high-fidelity spin-photon entanglement was demonstrated in well-isolated atomic and ionic systems, in the solid-state, where massively parallel, scalable networks are most realistically conceivable, entanglement fidelities are typically limited due to intrinsic environmental interactions. Distilling high-fidelity entangled pairs from lower-fidelity precursors can act as a remedy, but the required overhead scales unfavourably with the initial entanglement fidelity. With spin-photon entanglement as a crucial building block for entangling quantum network nodes, obtaining high-fidelity entangled pairs becomes imperative for practical realization of such networks. Here we report the first results of complete state tomography of a solid-state spin-photon-polarization-entangled qubit pair, using a single electron-charged indium arsenide quantum dot. We demonstrate record-high fidelity in the solid-state of well over 90%, and the first (99.9%-confidence) achievement of a fidelity that will unambiguously allow for entanglement distribution in solid-state quantum repeater networks.
NASA Astrophysics Data System (ADS)
Shen, Shuwei; Zhao, Zuhua; Wang, Haili; Han, Yilin; Dong, Erbao; Liu, Bin; Liu, Wendong; Cromeens, Barrett; Adler, Brent; Besner, Gail; Ray, William; Hoehne, Brad; Xu, Ronald
2016-03-01
Appropriate surgical planning is important for improved clinical outcome and minimal complications in many surgical operations, such as conjoined twin separation surgery. We combine 3D printing with casting and assembling to produce a solid phantom of high fidelity to help surgeons better prepare for the conjoined twin separation surgery. 3D computer models of individual organs were reconstructed based on CT scan data of the conjoined twins. The models were sliced, processed, and converted to an appropriate format for Fused Deposition Modeling (FDM). The skeletons of the phantom were printed directly by FDM using Acrylonitrile-Butadiene-Styrene (ABS) material, while internal soft organs were fabricated by casting silicone materials of different compositions in FDM-printed molds. The skeleton and the internal organs were then assembled with appropriate fixtures to maintain their relative positional accuracy. The assembly was placed in an FDM-printed shell mold of the patient body for further casting. For clear differentiation of different internal organs, CT contrast agents of different compositions were added to the silicone cast materials. The produced phantom was scanned by CT again and compared with the original computer models of the conjoined twins in order to verify structural and positional fidelity. Our preliminary experiments showed that combining 3D printing with casting is an effective way to produce solid phantoms of high fidelity for improved surgical planning in many clinical applications.
Neurobionics and the brain-computer interface: current applications and future horizons.
Rosenfeld, Jeffrey V; Wong, Yan Tat
2017-05-01
The brain-computer interface (BCI) is an exciting advance in neuroscience and engineering. In a motor BCI, electrical recordings from the motor cortex of paralysed humans are decoded by a computer and used to drive robotic arms or to restore movement in a paralysed hand by stimulating the muscles in the forearm. Simultaneously integrating a BCI with the sensory cortex will further enhance dexterity and fine control. BCIs are also being developed to provide ambulation for paraplegic patients through controlling robotic exoskeletons; restore vision in people with acquired blindness; detect and control epileptic seizures; and improve control of movement disorders and enhance memory. High-fidelity connectivity with small groups of neurons requires microelectrode placement in the cerebral cortex. Electrodes placed on the cortical surface are less invasive but produce inferior fidelity. Scalp surface recording using electroencephalography is much less precise. BCI technology is still in an early phase of development and awaits further technical improvements and larger multicentre clinical trials before wider clinical application and impact on the care of people with disabilities. There are also many ethical challenges to explore as this technology evolves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, William D; Johansen, Hans; Evans, Katherine J
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Minimally complex ion traps as modules for quantum communication and computing
NASA Astrophysics Data System (ADS)
Nigmatullin, Ramil; Ballance, Christopher J.; de Beaudrap, Niel; Benjamin, Simon C.
2016-10-01
Optically linked ion traps are promising as components of network-based quantum technologies, including communication systems and modular computers. Experimental results achieved to date indicate that the fidelity of operations within each ion trap module will be far higher than the fidelity of operations involving the links; fortunately internal storage and processing can effectively upgrade the links through the process of purification. Here we perform the most detailed analysis to date on this purification task, using a protocol which is balanced to maximise fidelity while minimising the device complexity and the time cost of the process. Moreover we ‘compile down’ the quantum circuit to device-level operations including cooling and shuttling events. We find that a linear trap with only five ions (two of one species, three of another) can support our protocol while incorporating desirable features such as global control, i.e. laser control pulses need only target an entire zone rather than differentiating one ion from its neighbour. To evaluate the capabilities of such a module we consider its use both as a universal communications node for quantum key distribution, and as the basic repeating unit of a quantum computer. For the latter case we evaluate the threshold for fault tolerant quantum computing using the surface code, finding acceptable fidelities for the ‘raw’ entangling link as low as 83% (or under 75% if an additional ion is available).
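To illustrate why a raw link fidelity near the quoted 83% is workable, the sketch below iterates the textbook BBPSSW two-to-one recurrence on Werner states. This is a generic illustration of purification, not the balanced protocol the paper compiles down to ion-trap operations.

```python
def purify(F):
    """One round of the textbook BBPSSW two-to-one recurrence, assuming
    Werner-state inputs; illustrative, not the paper's balanced protocol."""
    num = F**2 + (1.0 - F)**2 / 9.0
    den = F**2 + 2.0 * F * (1.0 - F) / 3.0 + 5.0 * (1.0 - F)**2 / 9.0
    return num / den

F = 0.83  # 'raw' entangling-link fidelity quoted above
for round_ in range(1, 5):
    F = purify(F)
    print(f"after round {round_}: F = {F:.4f}")
```

Each round consumes one of the two input pairs, which is why stored, high-quality local operations let a small module trade time and ions for link fidelity.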
Design Considerations of a Virtual Laboratory for Advanced X-ray Sources
NASA Astrophysics Data System (ADS)
Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.
2004-11-01
The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space with realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area to further push the state of the art in high-fidelity, first-principles simulation tools. The physics of these x-ray sources combines kinetic plasma physics (electron beams) with dense fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational Unified Process and the Unified Modeling Language (UML) to provide a framework where multiple 3D parallel physics packages, such as a PIC code (ICEPIC), an MHD code (MACH), and an x-ray transport code (ITS), can co-exist in a system-of-systems approach to modeling advanced x-ray sources. Initial software design and assessments of the various physics algorithms' fidelity will be presented.
Validation of the AVM Blast Computational Modeling and Simulation Tool Set
2015-08-04
by-construction" methodology is powerful and would not be possible without high -level design languages to support validation and verification. [1,4...to enable the making of informed design decisions. Enable rapid exploration of the design trade-space for high -fidelity requirements tradeoffs...live-fire tests, the jump height of the target structure is recorded by using either high speed cameras or a string pot. A simple projectile motion
NASA Astrophysics Data System (ADS)
Kenway, Gaetan K. W.
This thesis presents new tools and techniques developed to address the challenging problem of high-fidelity aerostructural optimization with respect to large numbers of design variables. A new mesh-movement scheme is developed that is both computationally efficient and sufficiently robust to accommodate large geometric design changes and aerostructural deformations. A fully coupled Newton-Krylov method is presented that accelerates the convergence of aerostructural systems, provides a 20% performance improvement over the traditional nonlinear block Gauss-Seidel approach, and can handle more flexible structures. A coupled adjoint method is used that efficiently computes derivatives for a gradient-based optimization algorithm. The implementation uses only machine-accurate derivative techniques and is verified to yield fully consistent derivatives by comparison against the complex-step method. The fully coupled large-scale adjoint solution method is shown to have 30% better performance than the segregated approach. The parallel scalability of the coupled adjoint technique is demonstrated on an Euler Computational Fluid Dynamics (CFD) model with more than 80 million state variables coupled to a detailed structural finite-element model of the wing with more than 1 million degrees of freedom. Multi-point high-fidelity aerostructural optimizations of a long-range wide-body, transonic transport aircraft configuration are performed using the developed techniques. The aerostructural analysis employs Euler CFD with a 2 million cell mesh and a structural finite element model with 300,000 DOF. Two design optimization problems are solved: one where takeoff gross weight (TOGW) is minimized, and another where fuel burn is minimized. Each optimization uses a multi-point formulation with 5 cruise conditions and 2 maneuver conditions. The optimization problems have 476 design variables, and optimal results are obtained within 36 hours of wall time using 435 processors. The TOGW minimization yields a 4.2% reduction in TOGW with a 6.6% fuel burn reduction, while the fuel burn optimization yields an 11.2% fuel burn reduction with no change to the takeoff gross weight.
High-speed quantum networking by ship
NASA Astrophysics Data System (ADS)
Devitt, Simon J.; Greentree, Andrew D.; Stephens, Ashley M.; van Meter, Rodney
2016-11-01
Networked entanglement is an essential component for a plethora of quantum computation and communication protocols. Direct transmission of quantum signals over long distances is prevented by fibre attenuation and the no-cloning theorem, motivating the development of quantum repeaters, designed to purify entanglement, extending its range. Quantum repeaters have been demonstrated over short distances, but error-corrected, global repeater networks with high bandwidth require new technology. Here we show that error-corrected quantum memories installed in cargo containers and carried by ship can provide a flexible connection between local networks, enabling low-latency, high-fidelity quantum communication across global distances at higher bandwidths than previously proposed. With demonstrations of technology with sufficient fidelity to enable topological error-correction, implementation of the quantum memories is within reach, and bandwidth increases with improvements in fabrication. Our approach to quantum networking avoids technological restrictions of repeater deployment, providing an alternate path to a worldwide Quantum Internet.
Robust distant-entanglement generation using coherent multiphoton scattering
NASA Astrophysics Data System (ADS)
Chan, Ching-Kit; Sham, L. J.
2013-03-01
The generation and controllability of entanglement between distant quantum states have been the heart of quantum computation and quantum information processing. Existing schemes for solid state qubit entanglement are based on the single-photon spectroscopy that has the merit of a high fidelity entanglement creation, but with a very limited efficiency. This severely restricts the scalability for a qubit network system. Here, we describe a new distant entanglement protocol using coherent multiphoton scattering. The scheme makes use of the postselection of large and distinguishable photon signals, and has both a high success probability and a high entanglement fidelity. Our result shows that the entanglement generation is robust against photon fluctuations, and has an average entanglement duration within the decoherence time in various qubit systems, based on existing experimental parameters. This research was supported by the U.S. Army Research Office MURI award W911NF0910406 and by NSF grant PHY-1104446.
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
High-fidelity plasma codes for burn physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooley, James; Graziani, Frank; Marinak, Marty
Accurate predictions of equation of state (EOS) and ionic and electronic transport properties are of critical importance for high-energy-density (HED) plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged-particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they play in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quon, Eliot; Platt, Andrew; Yu, Yi-Hsiang
Extreme loads are often a key cost driver for wave energy converters (WECs). As an alternative to exhaustive Monte Carlo or long-term simulations, the most likely extreme response (MLER) method allows mid- and high-fidelity simulations to be used more efficiently in evaluating WEC response to events at the edges of the design envelope, and is therefore applicable to system design analysis. The study discussed in this paper applies the MLER method to investigate the maximum heave, pitch, and surge force of a point absorber WEC. Most likely extreme waves were obtained from a set of wave statistics data based on spectral analysis and the response amplitude operators (RAOs) of the floating body; the RAOs were computed from a simple radiation-and-diffraction-theory-based numerical model. A weakly nonlinear numerical method and a computational fluid dynamics (CFD) method were then applied to compute the short-term response to the MLER wave. Effects of nonlinear wave and floating body interaction on the WEC under the anticipated 100-year waves were examined by comparing the results from the linearly superimposed RAOs, the weakly nonlinear model, and CFD simulations. Overall, the MLER method was successfully applied. In particular, when coupled to a high-fidelity CFD analysis, the nonlinear fluid dynamics can be readily captured.
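A heavily simplified sketch of the MLER construction follows, under the assumption that component amplitudes follow the response-weighted spectrum S(ω)·RAO(ω) scaled by the target extreme and phased to reinforce at the focus time; the spectrum and RAO here are invented, not the study's.

```python
import numpy as np

# Toy frequency grid, sea-state spectrum, and heave RAO (all invented).
w = np.linspace(0.2, 3.0, 200)                   # [rad/s]
dw = w[1] - w[0]
S = 0.5 * w**-5 * np.exp(-1.25 * (0.8 / w)**4)   # toy wave spectrum
rao = 1.0 / np.sqrt((1.0 - (w / 1.1)**2)**2 + (0.2 * w)**2)  # toy RAO

target = 3.0                                 # conditioned extreme response
sigma_r2 = np.sum(S * rao**2) * dw           # response variance
amp = target * S * rao * dw / sigma_r2       # MLER component amplitudes

# Components are phased to reinforce the response at the focus time t = 0.
t = np.linspace(-50.0, 50.0, 1001)
eta = (amp[:, None] * np.cos(w[:, None] * t[None, :])).sum(axis=0)
i0 = np.argmin(np.abs(t))
print(f"focused wave elevation at t = 0: {eta[i0]:.2f} m")
```

The resulting short, focused wave episode is what gets handed to the weakly nonlinear or CFD solver, rather than a long irregular sea.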
Evaluation of Airframe Noise Reduction Concepts via Simulations Using a Lattice Boltzmann Approach
NASA Technical Reports Server (NTRS)
Fares, Ehab; Casalino, Damiano; Khorrami, Mehdi R.
2015-01-01
Unsteady computations are presented for a high-fidelity, 18%-scale, semi-span Gulfstream aircraft model in landing configuration, i.e., flap deflected 39 degrees and main landing gear deployed. The simulations employ the lattice Boltzmann solver PowerFLOW® to simultaneously capture the flow physics and acoustics in the near field. Sound propagation to the far field is obtained using a Ffowcs Williams and Hawkings acoustic analogy approach. In addition to the baseline geometry, which was presented previously, various noise reduction concepts for the flap and main landing gear are simulated. In particular, care is taken to fully resolve the complex geometrical details associated with these concepts in order to capture the resulting intricate local flow field, thus enabling accurate prediction of their acoustic behavior. To determine aeroacoustic performance, the far-field noise predicted with the concepts applied is compared to high-fidelity simulations of the untreated baseline configurations. To assess the accuracy of the computed results, the aerodynamic and aeroacoustic impact of the noise reduction concepts is evaluated numerically and compared to experimental results for the same model. The trends and effectiveness of the simulated noise reduction concepts compare well with measured values and demonstrate that the computational approach is capable of capturing the primary effects of the acoustic treatment on a full aircraft model.
NASA Astrophysics Data System (ADS)
Sinsbeck, Michael; Tartakovsky, Daniel
2015-04-01
Infiltration into top soil can be described by alternative models with different degrees of fidelity: Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
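The trade-off described above can be made concrete with a back-of-the-envelope sketch in which the low-fidelity model carries a fixed bias while its cheapness buys down the Monte Carlo sampling error; all numbers are illustrative, not the study's.

```python
# Back-of-the-envelope version of the trade-off, with invented numbers.
budget = 1000.0                      # total CPU budget (arbitrary units)
cost = {"low": 1.0, "high": 100.0}   # cost per model run
model_error = {"low": 0.05, "high": 0.0}   # bias of each model
sigma = 1.0                          # assumed output standard deviation

for name in ("low", "high"):
    n = budget / cost[name]                  # affordable realizations
    sampling_error = sigma / n**0.5          # Monte Carlo error ~ 1/sqrt(n)
    total = (model_error[name]**2 + sampling_error**2) ** 0.5
    print(f"{name}-fidelity model: n = {n:.0f}, total error = {total:.3f}")
```

With these numbers the biased but cheap model wins; shrinking its bias, e.g., by assimilating data as the abstract describes, only widens its advantage.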
Raedeke, Thomas D; Dlugonski, Deirdre
2017-12-01
This study compared low versus high theoretical fidelity pedometer interventions applying social-cognitive theory, measuring effects on step counts and self-efficacy. Fifty-six public university employees participated in a 10-week randomized controlled trial with 2 conditions that varied in theoretical fidelity. Participants in the high theoretical fidelity condition wore a pedometer and participated in a weekly group walk followed by a meeting to discuss cognitive-behavioral strategies targeting self-efficacy. Participants in the low theoretical fidelity condition met for a group walk and also used a pedometer as a motivational tool and to monitor steps. Step counts were assessed throughout the 10-week intervention and after a no-treatment follow-up (20 weeks and 30 weeks). Self-efficacy was measured preintervention and postintervention. Participants in the high theoretical fidelity condition increased daily steps by 2,283 from preintervention to postintervention, whereas participants in the low-fidelity condition demonstrated minimal change during the same time period (p = .002). Individuals attending at least 80% of the sessions in the high theoretical fidelity condition showed an increase of 3,217 daily steps (d = 1.03), whereas low attenders increased by 925 (d = 0.40). Attendance had minimal impact in the low theoretical fidelity condition. Follow-up data revealed that step counts were at least somewhat maintained. For self-efficacy, participants in the high theoretical fidelity condition showed greater improvements than those in the low. Findings highlight the importance of basing activity promotion efforts on theory. The high theoretical fidelity intervention that included cognitive-behavioral strategies targeting self-efficacy was more effective than the low theoretical fidelity intervention, especially for those with high attendance.
An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian
For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
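A minimal sketch of the sequential selection idea follows, using a linear-Gaussian surrogate where the information gain has the closed form 0.5·log(prior variance / posterior variance); the sensitivity model and noise level are invented, not the paper's codes.

```python
import numpy as np

candidates = np.linspace(0.0, 1.0, 21)   # candidate design conditions

def posterior_var(prior_var, x, noise_var=0.01):
    """Linear-Gaussian update with an assumed sensitivity g(x) = x."""
    g = x
    return prior_var - (prior_var * g)**2 / (g**2 * prior_var + noise_var)

prior_var = 1.0
for step in range(3):
    # Expected information gain of each candidate, in nats.
    gains = [0.5 * np.log(prior_var / posterior_var(prior_var, x))
             for x in candidates]
    best = candidates[int(np.argmax(gains))]
    prior_var = posterior_var(prior_var, best)
    print(f"step {step}: run high-fidelity code at x = {best:.2f}, "
          f"posterior variance = {prior_var:.4f}")
```

The greedy loop mirrors the paper's strategy at a cartoon level: each expensive evaluation is spent where it is expected to teach the low-fidelity parameters the most.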
NASA Technical Reports Server (NTRS)
Follen, Gregory; auBuchon, M.
2000-01-01
Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer along with the concept of numerical zooming between zero-dimensional to one-, two-, and three-dimensional component engine codes. In addition, the NPSS is refining the computing and communication technologies necessary to capture complex physical processes in a timely and cost-effective manner. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Of the different technology areas that contribute to the development of the NPSS Environment, the subject of this paper is a discussion on numerical zooming between a NPSS engine simulation and higher fidelity representations of the engine components (fan, compressor, burner, turbines, etc.). What follows is a description of successfully zooming one-dimensional (row-by-row) high-pressure compressor analysis results back to a zero-dimensional NPSS engine simulation and a discussion of the results illustrated using an advanced data visualization tool. This type of high fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the capability of the engine system simulation and increase the level of virtual test conducted prior to committing the design to hardware.
Multidisciplinary design and optimization (MDO) methodology for the aircraft conceptual design
NASA Astrophysics Data System (ADS)
Iqbal, Liaquat Ullah
An integrated design and optimization methodology has been developed for the conceptual design of an aircraft. The methodology brings higher-fidelity Computer Aided Design, Engineering, and Manufacturing (CAD, CAE, and CAM) tools such as CATIA, FLUENT, ANSYS, and SURFCAM into the conceptual design by utilizing Excel as the integrator and controller. The approach is demonstrated to integrate with many of the existing low- to medium-fidelity codes, such as the aerodynamic panel code CMARC and sizing and constraint analysis codes, thus providing multi-fidelity capabilities to the aircraft designer. The higher-fidelity design information from the CAD and CAE tools for the geometry, aerodynamics, structural, and environmental performance is provided for the application of structured design methods such as Quality Function Deployment (QFD) and Pugh's Method. The higher-fidelity tools bring the quantitative aspects of a design, such as precise measurements of weight, volume, surface areas, center of gravity (CG) location, lift-to-drag ratio, and structural weight, as well as the qualitative aspects, such as external geometry definition, internal layout, and coloring scheme, early into the design process. The performance and safety risks involved with new technologies can be reduced by modeling and assessing their impact more accurately on the performance of the aircraft. The methodology also enables the design and evaluation of novel concepts such as the blended wing body (BWB) and the hybrid wing body (HWB). Higher-fidelity computational fluid dynamics (CFD) and finite element analysis (FEA) allow verification of the claims for the performance gains in aerodynamics and ascertain risks of structural failure due to the different pressure distribution in the fuselage as compared with the tube-and-wing design. The higher-fidelity aerodynamics and structural models can also lead to better cost estimates that help reduce financial risk. This helps in achieving better designs with reduced risk in less time and at lower cost. The approach is shown to eliminate the traditional boundary between the conceptual and preliminary design stages, combining the two into one consolidated preliminary design phase. Several examples for the validation and utilization of the Multidisciplinary Design and Optimization (MDO) Tool are presented using missions for Medium and High Altitude Long Range/Endurance Unmanned Aerial Vehicles (UAVs).
Automatic 3D high-fidelity traffic interchange modeling using 2D road GIS data
NASA Astrophysics Data System (ADS)
Wang, Jie; Shen, Yuzhong
2011-03-01
3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating 3D high-fidelity road networks, especially for those existing in the real world. Real road network contains various elements such as road segments, road intersections and traffic interchanges. Among them, traffic interchanges present the most challenges to model due to their complexity and the lack of height information (vertical position) of traffic interchanges in existing road GIS data. This paper proposes a novel approach that can automatically produce 3D high-fidelity road network models, including traffic interchange models, from real 2D road GIS data that mainly contain road centerline information. The proposed method consists of several steps. The raw road GIS data are first preprocessed to extract road network topology, merge redundant links, and classify road types. Then overlapped points in the interchanges are detected and their elevations are determined based on a set of level estimation rules. Parametric representations of the road centerlines are then generated through link segmentation and fitting, and they have the advantages of arbitrary levels of detail with reduced memory usage. Finally a set of civil engineering rules for road design (e.g., cross slope, superelevation) are selected and used to generate realistic road surfaces. In addition to traffic interchange modeling, the proposed method also applies to other more general road elements. Preliminary results show that the proposed method is highly effective and useful in many applications.
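One step of such a pipeline, the parametric centerline fit, might look like the following sketch; the sample points are invented and SciPy's spline routines stand in for whatever representation the authors actually use.

```python
import numpy as np
from scipy.interpolate import splev, splprep

# Invented polyline vertices for one road centerline (map units).
x = np.array([0.0, 10.0, 25.0, 40.0, 60.0, 85.0])
y = np.array([0.0,  2.0,  8.0, 18.0, 24.0, 26.0])

tck, u = splprep([x, y], s=0.0)         # interpolating parametric spline
u_fine = np.linspace(0.0, 1.0, 200)     # resample at any level of detail
cx, cy = splev(u_fine, tck)
print(f"resampled centerline: {len(cx)} points from {len(x)} vertices")
```

Storing only the spline coefficients is what gives the arbitrary-level-of-detail property with reduced memory usage that the abstract highlights.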
Petascale supercomputing to accelerate the design of high-temperature alloys
NASA Astrophysics Data System (ADS)
Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; Haynes, J. Allen
2017-12-01
Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. The approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
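The final, machine-learning step described above could be sketched as follows; the descriptors and energies here are random stand-ins for the DFT data, and the regressor choice is ours rather than the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 34 solutes x 5 elemental descriptors (random stand-ins for real data).
X = rng.normal(size=(34, 5))
# Synthetic 'segregation energies' with a hidden linear trend plus noise.
y = X @ np.array([0.8, -0.3, 0.1, 0.0, 0.5]) + 0.05 * rng.normal(size=34)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Once such a surrogate validates, new candidate solutes can be screened from tabulated descriptors alone, reserving supercell DFT runs for the shortlist.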
NASA Astrophysics Data System (ADS)
Chiaro, B.; Neill, C.; Chen, Z.; Dunsworth, A.; Foxen, B.; Quintana, C.; Wenner, J.; Martinis, J. M.; Google Quantum Hardware Team
Fast, high fidelity two qubit gates are an essential requirement of a quantum processor. In this talk, we discuss how the tunable coupling of the gmon architecture provides a pathway for an improved two qubit controlled-Z gate. The maximum inter-qubit coupling strength gmax = 60 MHz is sufficient for fast adiabatic two qubit gates to be performed as quickly as single qubit gates, reducing dephasing errors. Additionally, the ability to turn the coupling off allows all qubits to idle at low magnetic flux sensitivity, further reducing susceptibility to noise. However, the flexibility that this platform offers comes at the expense of increased control complexity. We describe our strategy for addressing the control challenges of the gmon architecture and show experimental progress toward fast, high fidelity controlled-Z gates with gmon qubits.
A New Real-Time Fault Detection Methodology for Systems Under Test. Phase 1
NASA Technical Reports Server (NTRS)
Johnson, Roger W.; Jayaram, Sanjay; Hull, Richard A.
1998-01-01
The purpose of this research is focused on the identification/demonstration of critical technology innovations that will be applied to various applications, viz., automated machine Health Monitoring (HM), real-time data analysis, and control of Systems Under Test (SUT). This new innovation, using a High Fidelity Dynamic Model-based Simulation (HFDMS) approach, will be used to implement a real-time monitoring, Test and Evaluation (T&E) methodology that includes the transient behavior of the system under test. The unique element of this process control technique is the use of high-fidelity, computer-generated dynamic models to replicate the behavior of actual Systems Under Test (SUT). It will provide a dynamic simulation capability that becomes the reference truth model, from which comparisons are made with the actual raw/conditioned data from the test elements.
Gilmer, Todd P; Stefancic, Ana; Katz, Marian L; Sklar, Marisa; Tsemberis, Sam; Palinkas, Lawrence A
2014-11-01
Permanent supported housing programs are being implemented throughout the United States. This study examined the relationship between fidelity to the Housing First model and residential outcomes among clients of full service partnerships (FSPs) in California. The study had a mixed-methods design. Quantitative administrative and survey data were used to describe FSP practices and to examine the association between fidelity to Housing First and residential outcomes in the year before and after enrollment of 6,584 FSP clients in 86 programs. Focus groups at 20 FSPs provided qualitative data to enhance the understanding of these findings with actual accounts of housing-related experiences in high- and low-fidelity programs. Prior to enrollment, the mean days of homelessness were greater at high- versus low-fidelity FSPs (101 versus 46 days). After adjustment for individual characteristics, the analysis found that days spent homeless after enrollment declined by 87 at high-fidelity programs and by 34 at low-fidelity programs. After adjustment for days spent homeless before enrollment, days spent homeless after enrollment declined by 63 at high-fidelity programs and by 53 at low-fidelity programs. After enrollment, clients at high-fidelity programs spent more than 60 additional days in apartments than clients at low-fidelity programs. Differences were found between high- and low-fidelity FSPs in client choice in housing and in how much clients' goals were considered in housing placement. Programs with greater fidelity to the Housing First model enrolled clients with longer histories of homelessness and placed most of them in apartments.
Medical Robotic and Telesurgical Simulation and Education Research
2014-09-01
versions of the device for sale. [Figure 6: The Computer Aided Design of the Dome (A-B) and the last High Fidelity Prototype (C-D).]
Li, Bo; Li, Sheng-Hao; Zhou, Huan-Qiang
2009-06-01
A systematic analysis is performed for quantum phase transitions in a two-dimensional anisotropic spin-1/2 antiferromagnetic XYX model in an external magnetic field. With the help of an innovative tensor network algorithm, we compute the fidelity per lattice site to demonstrate that the field-induced quantum phase transition is unambiguously characterized by a pinch point on the fidelity surface, marking a continuous phase transition. We also compute an entanglement estimator, defined as a ratio between the one-tangle and the sum of squared concurrences, to identify both the factorizing field and the critical point, resulting in a quantitative agreement with quantum Monte Carlo simulation. In addition, the local order parameter is "derived" from the tensor network representation of the system's ground-state wave functions.
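For reference, the fidelity per lattice site computed here follows the standard definition in this literature (our notation; the control parameter is the external field):

```latex
% Given ground states |\psi(h)\rangle of an N-site lattice at control
% parameters h_1 and h_2, the fidelity and its per-site scaling are
\[
  F(h_1, h_2) = \left| \langle \psi(h_1) \mid \psi(h_2) \rangle \right|,
  \qquad
  \ln d(h_1, h_2) = \lim_{N \to \infty} \frac{\ln F(h_1, h_2)}{N},
\]
% so d is well defined in the thermodynamic limit; a pinch point on the
% surface d(h_1, h_2) marks the continuous transition described above.
```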
Klewicki, J. C.; Chini, G. P.; Gibson, J. F.
2017-01-01
Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585
ERIC Educational Resources Information Center
Lievens, Filip; Patterson, Fiona
2011-01-01
In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of…
Computational Aerodynamic Modeling of Small Quadcopter Vehicles
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Ventura Diaz, Patricia; Boyd, D. Douglas; Chan, William M.; Theodore, Colin R.
2017-01-01
High-fidelity computational simulations have been performed which focus on rotor-fuselage and rotor-rotor aerodynamic interactions of small quad-rotor vehicle systems. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, low Mach number preconditioning, and hybrid turbulence modeling. Computational results for isolated rotors are shown to compare well with available experimental data. Computational results in hover reveal the differences between a conventional configuration where the rotors are mounted above the fuselage and an unconventional configuration where the rotors are mounted below the fuselage. Complex flow physics in forward flight is investigated. The goal of this work is to demonstrate that understanding of interactional aerodynamics can be an important factor in design decisions regarding rotor and fuselage placement for next-generation multi-rotor drones.
Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S
2014-12-01
We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.
Silicon quantum processor with robust long-distance qubit couplings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tosi, Guilherme; Mohiyaddin, Fahd A.; Schmitt, Vivien
Practical quantum computers require a large network of highly coherent qubits, interconnected in a design robust against errors. Donor spins in silicon provide state-of-the-art coherence and quantum gate fidelities, in a platform adapted from industrial semiconductor processing. Here we present a scalable design for a silicon quantum processor that does not require precise donor placement and leaves ample space for the routing of interconnects and readout devices. We introduce the flip-flop qubit, a combination of the electron-nuclear spin states of a phosphorus donor that can be controlled by microwave electric fields. Two-qubit gates exploit a second-order electric dipole-dipole interaction, allowing selective coupling beyond the nearest neighbor, at separations of hundreds of nanometers, while microwave resonators can extend the entanglement to macroscopic distances. We predict gate fidelities within fault-tolerance thresholds using realistic noise models. This design provides a realizable blueprint for scalable spin-based quantum computers in silicon.
The use of high fidelity CAD models as the basis for training on complex systems
NASA Technical Reports Server (NTRS)
Miller, Kellie; Tanner, Steve
1993-01-01
During the design phases of large and complex systems such as NASA's Space Station Freedom (SSF), there are few, if any, physical prototypes built. This is often due to their expense and the realization that the design is likely to change. This poses a problem for training, maintainability, and operations groups who are tasked to lay the foundation of plans for using these systems. The Virtual Reality and Visualization Laboratory at the Boeing Advanced Computing Group's Huntsville facility is supporting the use of the high-fidelity, detailed design models that are generated during the initial design phases for training, maintainability, and operations exercises. This capability was used in its non-immersive form to great effect at the SSF Critical Design Review (CDR) in February 1993. Allowing the user to move about within a CAD design supports many efforts, including training and scenario study. We will demonstrate, via a video of the Maintainability SSF CDR, how this type of approach can be used and why it is so effective in conveying large amounts of information quickly and concisely. We will also demonstrate why high-fidelity models are so important for this type of training system and how its immersive aspects may be exploited as well.
High Fidelity Simulation of Primary Atomization in Diesel Engine Sprays
NASA Astrophysics Data System (ADS)
Ivey, Christopher; Bravo, Luis; Kim, Dokyun
2014-11-01
A high-fidelity numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at ambient conditions has been performed. A full understanding of the primary atomization process in diesel fuel injection has not been achieved, for several reasons including the difficulty of optically accessing the dense spray region. Owing to recent advances in numerical methods and computing resources, high-fidelity simulations of atomizing flows are becoming available to provide new insights into the process. In the present study, an unstructured un-split Volume-of-Fluid (VoF) method coupled to a stochastic Lagrangian spray model is employed to simulate the atomization process. A common-rail fuel injector is simulated using a nozzle geometry available through the Engine Combustion Network. The working conditions correspond to a single-orifice (90 μm) JP-8-fueled injector operating at an injection pressure of 90 bar, with the ambient at 29 bar and 300 K, filled with 100% nitrogen, giving Re_l = 16,071 and We_l = 75,334 and setting the spray in the full atomization mode. The experimental dataset from the Army Research Laboratory is used for validation in terms of spray global parameters and local droplet distributions. The quantitative comparison will be presented and discussed. Supported by Oak Ridge Associated Universities and the Army Research Laboratory.
Growth of equilibrium structures built from a large number of distinct component types.
Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen
2014-09-14
We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.
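The core argument can be illustrated with Boltzmann weights for a single growth site choosing between one designed contact and many competing undesigned ones; all energies and counts below are invented, not the paper's lattice model.

```python
import numpy as np

# At a growth site, the correct component binds with designed energy -E_d,
# while n_u distinct wrong components each bind with undesigned energy -E_u.
# Boltzmann weights give the probability that growth proceeds correctly.
kT = 1.0
n_u = 100          # number of distinct wrong-binding options
E_u = 2.0          # undesigned attraction (fixed)

for E_d in (4.0, 6.0, 8.0, 10.0):   # designed energy scale, in units of kT
    w_good = np.exp(E_d / kT)
    w_bad = n_u * np.exp(E_u / kT)
    print(f"E_d = {E_d:4.1f} kT -> P(correct) = {w_good / (w_good + w_bad):.3f}")
```

The fidelity climbs steeply once E_d clears E_u plus the combinatorial penalty ln(n_u), which is the sense in which the designed energy scale, not the absence of undesigned attractions, controls growth quality.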
BeamDyn: a high-fidelity wind turbine blade solver in the FAST modular framework
Wang, Qi; Sprague, Michael A.; Jonkman, Jason; ...
2017-03-14
Here, this paper presents a numerical implementation of the geometrically exact beam theory based on the Legendre-spectral-finite-element (LSFE) method. The displacement-based geometrically exact beam theory is presented, and the special treatment of three-dimensional rotation parameters is reviewed. An LSFE is a high-order finite element with nodes located at the Gauss-Legendre-Lobatto points. These elements can be an order of magnitude more computationally efficient than low-order finite elements for a given accuracy level. The new module, BeamDyn, is implemented in the FAST modularization framework for dynamic simulation of highly flexible composite-material wind turbine blades within the FAST aeroelastic engineering model. The framework allows for fully interactive simulations of turbine blades in operating conditions. Numerical examples are provided to validate BeamDyn and examine the LSFE performance as well as the coupling algorithm in the FAST modularization framework. BeamDyn can also be used as a stand-alone high-fidelity beam tool.
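As an illustration of the LSFE ingredient named above, Gauss-Legendre-Lobatto nodes are the endpoints of the reference element plus the roots of the derivative of a Legendre polynomial; a small sketch (not BeamDyn's own implementation):

```python
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(order):
    """Gauss-Legendre-Lobatto nodes of a degree-`order` element on [-1, 1]:
    the endpoints plus the roots of the Legendre polynomial's derivative."""
    c = np.zeros(order + 1)
    c[-1] = 1.0   # coefficients of P_order in the Legendre basis
    interior = legendre.legroots(legendre.legder(c))
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

print(gll_nodes(4))   # [-1, -sqrt(3/7), 0, +sqrt(3/7), +1]
```

Clustering nodes toward the element ends in this way is what lets a single high-order element match the accuracy of many low-order ones.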
Towards an Automated Full-Turbofan Engine Numerical Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Turner, Mark G.; Norris, Andrew; Veres, Joseph P.
2003-01-01
The objective of this study was to demonstrate the high-fidelity numerical simulation of a modern high-bypass turbofan engine. The simulation utilizes the Numerical Propulsion System Simulation (NPSS) thermodynamic cycle modeling system coupled to a high-fidelity full-engine model represented by a set of coupled three-dimensional computational fluid dynamic (CFD) component models. Boundary conditions from the balanced, steady-state cycle model are used to define component boundary conditions in the full-engine model. Operating characteristics of the three-dimensional component models are integrated into the cycle model via partial performance maps generated automatically from the CFD flow solutions using one-dimensional meanline turbomachinery programs. This paper reports on the progress made towards the full-engine simulation of the GE90-94B engine, highlighting the generation of the high-pressure compressor partial performance map. The ongoing work will provide a system to evaluate the steady and unsteady aerodynamic and mechanical interactions between engine components at design and off-design operating conditions.
Sorensen, Mads Solvsten; Mosegaard, Jesper; Trier, Peter
2009-06-01
Existing virtual simulators for middle ear surgery are based on 3-dimensional (3D) models from computed tomographic or magnetic resonance imaging data, in which image quality is limited by the lack of detail (maximum approximately 50 voxels/mm3), natural color, and texture of the source material. Virtual training often requires the purchase of a program, a customized computer, and expensive peripherals dedicated exclusively to this purpose. The Visible Ear freeware library of digital images from a fresh-frozen human temporal bone was segmented and volume rendered in real time as a 3D model of high fidelity, true color, and great anatomic detail and realism of the surgically relevant structures. A haptic drilling model was developed for surgical interaction with the 3D model. Realistic visualization in high fidelity (approximately 125 voxels/mm3) and true color, in 2D or optional anaglyph stereoscopic 3D, was achieved on a standard Core 2 Duo personal computer with a GeForce 8800 GTX graphics card, and surgical interaction was provided through a relatively inexpensive (approximately $2,500) Phantom Omni haptic 3D pointing device. This prototype is published for download (approximately 120 MB) as freeware at http://www.alexandra.dk/ves/index.htm. With increasing personal computer performance, future versions may include enhanced resolution (up to 8,000 voxels/mm3) and realistic interaction with deformable soft tissue components such as skin, tympanic membrane, dura, and cholesteatomas, features some of which are not possible with computed tomographic-/magnetic resonance imaging-based systems.
Development and application of theoretical models for Rotating Detonation Engine flowfields
NASA Astrophysics Data System (ADS)
Fievisohn, Robert
As turbine and rocket engine technology matures, performance increases between successive generations of engine development are becoming smaller. One means of accomplishing significant gains in thermodynamic performance and power density is to use detonation-based heat release instead of deflagration. This work is focused on developing and applying theoretical models to aid in the design and understanding of Rotating Detonation Engines (RDEs). In an RDE, a detonation wave travels circumferentially along the bottom of an annular chamber where continuous injection of fresh reactants sustains the detonation wave. RDEs are currently being designed, tested, and studied as a viable option for developing a new generation of turbine and rocket engines that make use of detonation heat release. One of the main challenges in the development of RDEs is to understand the complex flowfield inside the annular chamber. While simplified models are desirable for obtaining timely performance estimates for design analysis, one-dimensional models may not be adequate as they do not provide flow structure information. In this work, a two-dimensional physics-based model is developed, which is capable of modeling the curved oblique shock wave, exit swirl, counter-flow, detonation inclination, and varying pressure along the inflow boundary. This is accomplished by using a combination of shock-expansion theory, Chapman-Jouguet detonation theory, the Method of Characteristics (MOC), and other compressible flow equations to create a shock-fitted numerical algorithm and generate an RDE flowfield. This novel approach yields a numerically efficient model that can provide performance estimates as well as details of the large-scale flow structures in seconds on a personal computer. Results from this model are validated against high-fidelity numerical simulations that may require a high-performance computing framework to provide similar performance estimates. This work provides a designer a new tool to conduct large-scale parametric studies to optimize a design space before conducting computationally intensive, high-fidelity simulations that may be used to examine additional effects. The work presented in this thesis not only bridges the gap between simple one-dimensional models and high-fidelity full numerical simulations, but it also provides an effective tool for understanding and exploring RDE flow processes.
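One ingredient of the model above that reduces to a compact calculation is Chapman-Jouguet detonation theory. The sketch below evaluates the ideal-gas CJ Mach number under the simplifying assumptions of a constant specific-heat ratio and a prescribed heat release; the values of q, T1, and gamma are made up for illustration and are not taken from the dissertation.

```python
# Sketch: ideal-gas Chapman-Jouguet detonation Mach number (constant gamma).
import math

GAMMA, R = 1.29, 287.0          # assumed gas properties

def cj_mach(q, T1):
    """M_CJ = sqrt(1 + a) + sqrt(a), a = (gamma^2 - 1) q / (2 gamma R T1)."""
    a = (GAMMA**2 - 1.0) * q / (2.0 * GAMMA * R * T1)
    return math.sqrt(1.0 + a) + math.sqrt(a)

q, T1 = 3.0e6, 300.0            # heat release (J/kg), fresh-gas temp (K); made up
M = cj_mach(q, T1)
D = M * math.sqrt(GAMMA * R * T1)   # detonation speed, m/s
print(f"M_CJ = {M:.2f}, D_CJ = {D:.0f} m/s")
```

In the strong-heat-release limit this reproduces the familiar estimate D_CJ ≈ sqrt(2(gamma^2 - 1) q).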
Chinnugounder, Sankar; Hippe, Daniel S; Maximin, Suresh; O'Malley, Ryan B; Wang, Carolyn L
2015-01-01
Although subjective and objective benefits of high-fidelity simulation have been reported in medicine, there has been slow adoption in radiology. The purpose of our study was to identify the perceived barriers to the use of high-fidelity hands-on simulation (HFS) for contrast reaction management (CRM) training. An IRB-exempt, 32-question online web survey was sent to 179 non-military radiology residency program directors listed in the Fellowship and Residency Electronic Interactive Database Access system (FREIDA). Survey questions included the type of contrast reaction management training, cost, time commitment of residents and faculty, and the reasons for not using simulation training. Responses from the survey were summarized as count (percentage), mean ± standard deviation (SD), or median (range). 84 (47%) of 179 programs responded, of which 88% offered CRM training. Most (72%) conducted the CRM training annually, while only 4% conducted it more frequently. Didactic lecture was the most frequently used training modality (97%), followed by HFS (30%) and computer-based simulation (CBS) (19%); 5.5% used both HFS and CBS. Of the 51 programs that offer CRM training but do not use HFS, the most common reason reported was insufficient availability (41%). Other reported reasons included cost (33%), no access to simulation centers (33%), lack of trained faculty (27%), and time constraints (27%). Although high-fidelity hands-on simulation training is the best way to reproduce real-life contrast reaction scenarios, many institutions do not provide this training due to constraints such as cost, lack of access or insufficient availability of simulation labs, and lack of trained faculty. As a specialty, radiology needs to better address these barriers at both an institutional and national level.
Leakage of The Quantum Dot Hybrid Qubit in The Strong Driving Regime
NASA Astrophysics Data System (ADS)
Yang, Yuan-Chi; Friesen, Mark; Coppersmith, S. N.
Recent experimental demonstrations of high-fidelity single-qubit gates suggest that the quantum dot hybrid qubit is a promising candidate for large-scale quantum computing. The qubit comprises three electrons in a double quantum dot, and can be protected from charge noise by operating in an extended sweet-spot regime. Gate operations are based on exchange interactions mediated by an excited state. However, strong resonant driving causes unwanted leakage into the excited state. Here, we theoretically analyze leakage caused by strong driving, and explore methods for increasing gate fidelities. This work was supported in part by ARO (W911NF-12-0607), NSF (PHY-1104660), ONR (N00014-15-1-0029), and the University of Wisconsin-Madison.
Continuous-variable controlled-Z gate using an atomic ensemble
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Mingfeng; Jiang Nianquan; Jin Qingli
2011-06-15
The continuous-variable controlled-Z gate is a canonical two-mode gate for universal continuous-variable quantum computation. It is considered one of the most fundamental continuous-variable quantum gates. Here we present a scheme for realizing a continuous-variable controlled-Z gate between two optical beams using an atomic ensemble. The gate is performed by simply sending the two beams propagating in two orthogonal directions twice through a spin-squeezed atomic medium. Its fidelity approaches unity if the input atomic state is infinitely squeezed. Considering the noise effects due to atomic decoherence and light losses, we show that the observed fidelities of the scheme are still quite high within presently available techniques.
Transfer of Training from Simulators to Operational Equipment--Are Simulators Effective?
ERIC Educational Resources Information Center
Thomson, Douglas R.
1989-01-01
Examines the degree of fidelity required of a computer simulation to ensure maximum transfer of training. Simulators used in the military services for training pilots are described; relationships between fidelity, transfer, and cost are explored; and feedback to the student and measures of training effectiveness are discussed. (nine references)…
Fidelity of Majorana-based quantum operations
NASA Astrophysics Data System (ADS)
Tanhayi Ahari, Mostafa; Ortiz, Gerardo; Seradjeh, Babak
2015-03-01
It is well known that the one-dimensional p-wave superconductor, the so-called Kitaev model, has topologically distinct phases that are distinguished by the presence of Majorana fermions. Owing to their topological protection, these Majorana fermions have emerged as candidates for fault-tolerant quantum computation. They furnish the operation of such a computation via processes that produce, braid, and annihilate them in pairs. In this work we study some of these processes from the dynamical perspective. In particular, we determine the fidelity of the Majorana fermions when they are produced or annihilated by tuning the system through the corresponding topological phase transition. For a simple linear protocol, we derive analytical expressions for fidelity and test various perturbative schemes. For more general protocols, we present exact numerics. Our results are relevant for the operation of Majorana-based quantum gates and quantum memories.
Non-unitary probabilistic quantum computing
NASA Technical Reports Server (NTRS)
Gingrich, Robert M.; Williams, Colin P.
2004-01-01
We present a method for designing quantum circuits that perform non-unitary quantum computations on n-qubit states probabilistically, and give analytic expressions for the success probability and fidelity.
Hoekstra, Femke; van Offenbeek, Marjolein A G; Dekker, Rienk; Hettinga, Florentina J; Hoekstra, Trynke; van der Woude, Lucas H V; van der Schans, Cees P
2017-12-01
Although the importance of evaluating implementation fidelity is acknowledged, little is known about heterogeneity in fidelity over time. This study aims to generate insight into the heterogeneity in implementation fidelity trajectories of a health promotion program in multidisciplinary settings and the relationship with changes in patients' health behavior. This study used longitudinal data from the nationwide implementation of an evidence-informed physical activity promotion program in Dutch rehabilitation care. Fidelity scores were calculated based on annual surveys filled in by involved professionals (n ≈ 70). Higher fidelity scores indicate a more complete implementation of the program's core components. A hierarchical cluster analysis was conducted on the implementation fidelity scores of 17 organizations at three different time points. Quantitative and qualitative data were used to explore organizational and professional differences between identified trajectories. Regression analyses were conducted to determine differences in patient outcomes. Three trajectories were identified: 'stable high fidelity' (n = 9), 'moderate and improving fidelity' (n = 6), and 'unstable fidelity' (n = 2). The stable high fidelity organizations were generally smaller, started earlier, and implemented the program in a more structured way compared to moderate and improving fidelity organizations. At the start and end of the implementation period, support from physicians and physiotherapists, professionals' appreciation, and program compatibility were rated more positively by professionals working in stable high fidelity organizations as compared to the moderate and improving fidelity organizations (p < .05). Qualitative data showed that the stable high fidelity organizations often had an explicit vision and strategy about the implementation of the program. Intriguingly, the trajectories were not associated with patients' self-reported physical activity outcomes (adjusted model β = -651.6, t(613) = -1.032, p = .303). Differences in organizational-level implementation fidelity trajectories did not result in outcome differences at patient level. This suggests that an effective implementation fidelity trajectory is contingent on the local organization's conditions. More specifically, achieving stable high implementation fidelity required the management of tensions: realizing a localized change vision, while safeguarding the program's standardized core components and engaging the scarce physicians throughout the process. When scaling up evidence-informed health promotion programs, we propose to tailor the management of implementation tensions to local organizations' starting position, size, and circumstances. The Netherlands National Trial Register NTR3961. Registered 18 April 2013.
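To make the clustering step concrete, the sketch below runs a hierarchical cluster analysis of the kind described, on synthetic fidelity trajectories standing in for the 17 organizations measured at three time points; the data, linkage method, and cluster count are illustrative assumptions, not the study's.

```python
# Sketch: hierarchical clustering of implementation-fidelity trajectories.
# Toy data standing in for 17 organizations x 3 annual fidelity scores.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
stable_high = rng.normal(0.85, 0.05, (9, 3))                      # high throughout
improving = np.cumsum(rng.normal(0.1, 0.03, (6, 3)), axis=1) + 0.4  # rising
unstable = rng.uniform(0.2, 0.9, (2, 3))                          # erratic
scores = np.clip(np.vstack([stable_high, improving, unstable]), 0, 1)

Z = linkage(scores, method="ward")          # Ward linkage on the trajectories
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)                                # cluster assignment per organization
```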
The Need for High Fidelity Lunar Regolith Simulants
NASA Technical Reports Server (NTRS)
Gaier, James R.
2007-01-01
The case is made for the need to have high fidelity lunar regolith simulants to verify the performance of structures and mechanisms to be used on the lunar surface. Minor constituents will in some cases have major consequences. Small amounts of sulfur in the regolith can poison catalysts, and metallic iron on the surface of nano-sized dust particles may cause a dramatic increase in its toxicity. So the definition of a high fidelity simulant is application dependent. For example, in situ resource utilization will require high fidelity in chemistry, meaning careful attention to the minor components and phases; but some other applications, such as the abrasive effects on suit fabrics, might be relatively insensitive to minor component chemistry. The lunar environment itself will change the surface chemistry of the simulant, so a high fidelity simulant must be used in a high fidelity simulated environment to get a high fidelity simulation. Research must be conducted to determine how sensitive technologies will be to minor components and environmental factors before they can be dismissed as unimportant.
High-fidelity readout and control of a nuclear spin qubit in silicon.
Pla, Jarryd J; Tan, Kuan Y; Dehollain, Juan P; Lim, Wee H; Morton, John J L; Zwanenburg, Floris A; Jamieson, David N; Dzurak, Andrew S; Morello, Andrea
2013-04-18
Detection of nuclear spin precession is critical for a wide range of scientific techniques that have applications in diverse fields including analytical chemistry, materials science, medicine and biology. Fundamentally, it is possible because of the extreme isolation of nuclear spins from their environment. This isolation also makes single nuclear spins desirable for quantum-information processing, as shown by pioneering studies on nitrogen-vacancy centres in diamond. The nuclear spin of a (31)P donor in silicon is very promising as a quantum bit: bulk measurements indicate that it has excellent coherence times and silicon is the dominant material in the microelectronics industry. Here we demonstrate electrical detection and coherent manipulation of a single (31)P nuclear spin qubit with sufficiently high fidelities for fault-tolerant quantum computing. By integrating single-shot readout of the electron spin with on-chip electron spin resonance, we demonstrate quantum non-demolition and electrical single-shot readout of the nuclear spin with a readout fidelity higher than 99.8 percent, the highest so far reported for any solid-state qubit. The single nuclear spin is then operated as a qubit by applying coherent radio-frequency pulses. For an ionized (31)P donor, we find a nuclear spin coherence time of 60 milliseconds and a one-qubit gate control fidelity exceeding 98 percent. These results demonstrate that the dominant technology of modern electronics can be adapted to host a complete electrical measurement and control platform for nuclear-spin-based quantum-information processing.
Validation of NASCAP-2K Spacecraft-Environment Interactions Calculations
NASA Technical Reports Server (NTRS)
Davis, V. A.; Mandell, M. J.; Gardner, B. M.; Mikellides, I. G.; Neergaard, L. F.; Cooke, D. L.; Minor, J.
2004-01-01
The recently released Nascap-2k, version 2.0, three-dimensional computer code models interactions between spacecraft surfaces and low-earth-orbit, geosynchronous, auroral, and interplanetary plasma environments. It replaces the earlier three-dimensional spacecraft interactions codes NASCAP/GEO, NASCAP/LEO, POLAR, and DynaPAC. Nascap-2k has improved numeric techniques, a modern user interface, and a simple, interactive satellite surface definition module (Object ToolKit). We establish the accuracy of Nascap-2k both by comparing computed currents and potentials with analytic results and by comparing Nascap-2k results with published calculations using the earlier codes. Nascap-2k predicts Langmuir-Blodgett or Parker-Murphy current collection for a nearly spherical (100 surfaces) satellite in a short Debye length plasma depending on the absence or presence of a magnetic field. A low fidelity (in geometry and time) Nascap-2k geosynchronous charging calculation gives the same results as the corresponding low fidelity NASCAP/GEO calculation. A high fidelity calculation (using the Nascap-2k improved geometry and time stepping capabilities) gives higher potentials, which are more consistent with typical observations. Nascap-2k predicts the same current as a function of applied potential as was observed and calculated by NASCAP/LEO for the SPEAR I rocket with a bipolar sheath. A Nascap-2k DMSP charging calculation gives results similar to those obtained using POLAR and consistent with observation.
IoGET: Internet of Geophysical and Environmental Things
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudunuru, Maruti Kumar
The objective of this project is to provide novel and fast reduced-order models for onboard computation at sensor nodes for real-time analysis. The approach will require that LANL perform high-fidelity numerical simulations, construct simple reduced-order models (ROMs) using machine learning and signal processing algorithms, and use real-time data analysis for ROMs and compressive sensing at sensor nodes.
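The abstract does not specify which reduced-order modeling technique is used; one common choice for field data is proper orthogonal decomposition (POD). The sketch below, with synthetic snapshots, shows how a POD basis compresses a high-fidelity field into a few coefficients that a sensor node could evaluate cheaply.

```python
# Sketch: a POD-based reduced-order model of the kind a sensor node could
# evaluate onboard. Snapshots from high-fidelity runs are assumed given.
import numpy as np

def pod_basis(snapshots, r):
    """Leading r POD modes of a (n_dof x n_snapshots) snapshot matrix."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

# Toy snapshot matrix: 1000-DOF field sampled from 50 simulations (made up).
X = np.random.default_rng(1).normal(size=(1000, 50))
Phi, s = pod_basis(X, r=5)
coeffs = Phi.T @ X[:, 0]      # compress one field to 5 numbers
recon = Phi @ coeffs          # cheap onboard reconstruction from the ROM
print(np.linalg.norm(X[:, 0] - recon) / np.linalg.norm(X[:, 0]))
```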
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-09-01
Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times by a statistical learning technique, multi-level Gaussian process regression, on the fly; this has been demonstrated in previous work [1]. Based on the previous work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, that detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
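A minimal sketch of the fill-in idea follows: a Gaussian process regression, trained on surviving points with the coarse auxiliary solution as an extra input, reconstructs the field lost in a failed subdomain. The 1D synthetic fields and the single-level GP are simplifying assumptions; the paper's approach is multi-level.

```python
# Sketch: filling in data lost to a processor failure with GP regression,
# using a cheap coarse solution as an auxiliary input. Data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

x = np.linspace(0, 1, 200)
fine = np.sin(8 * x) + 0.2 * np.sin(40 * x)   # "exact" field (synthetic)
coarse = np.sin(8 * x)                        # cheap auxiliary simulator

alive = np.ones_like(x, dtype=bool)
alive[80:120] = False                         # a failed subdomain

# Regress the fine solution on (x, coarse value) using surviving points.
X_train = np.column_stack([x[alive], coarse[alive]])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.1, 1.0]))
gp.fit(X_train, fine[alive])
filled = gp.predict(np.column_stack([x[~alive], coarse[~alive]]))
print(np.max(np.abs(filled - fine[~alive])))  # worst-case fill-in error
```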
NASA Astrophysics Data System (ADS)
Pierson, Kyle D.; Hochhalter, Jacob D.; Spear, Ashley D.
2018-05-01
Systematic correlation analysis was performed between simulated micromechanical fields in an uncracked polycrystal and the known path of an eventual fatigue-crack surface based on experimental observation. Concurrent multiscale finite-element simulation of cyclic loading was performed using a high-fidelity representation of grain structure obtained from near-field high-energy x-ray diffraction microscopy measurements. An algorithm was developed to parameterize and systematically correlate the three-dimensional (3D) micromechanical fields from simulation with the 3D fatigue-failure surface from experiment. For comparison, correlation coefficients were also computed between the micromechanical fields and hypothetical, alternative surfaces. The correlation of the fields with hypothetical surfaces was found to be consistently weaker than that with the known crack surface, suggesting that the micromechanical fields of the cyclically loaded, uncracked microstructure might provide some degree of predictiveness for microstructurally small fatigue-crack paths, although the extent of such predictiveness remains to be tested. In general, gradients of the field variables exhibit stronger correlations with crack path than the field variables themselves. Results from the data-driven approach implemented here can be leveraged in future model development for prediction of fatigue-failure surfaces (for example, to facilitate univariate feature selection required by convolution-based models).
Implementation fidelity of a self-management course for epilepsy: method and assessment.
Wojewodka, G; Hurley, S; Taylor, S J C; Noble, A J; Ridsdale, L; Goldstein, L H
2017-07-11
Complex interventions such as self-management courses are difficult to evaluate due to the many interacting components. The way complex interventions are delivered can influence the effect they have for patients, and can impact the interpretation of outcomes of clinical trials. Implementation fidelity evaluates whether complex interventions are delivered according to protocol. Such assessments have been used for one-to-one psychological interventions; however, the science is still developing for group interventions. We developed and tested an instrument to measure implementation fidelity of a two-day self-management course for people with epilepsy, SMILE(UK). Using audio recordings, we looked at adherence and competence of course facilitators. Adherence was assessed by checklists. Competence was measured by scoring group interaction, an overall impression score, and facilitator "didacticism". To measure "didacticism", we developed a novel way to calculate facilitator speech using computer software. Using this new instrument, implementation fidelity of SMILE(UK) was assessed on three modules of the course, for 28% of all courses delivered. Using the instrument for adherence, scores from two independent raters showed substantial agreement, with a weighted kappa of 0.67 and a high percent agreement of 81.2%. For didacticism, the results from both raters were highly correlated, with an intraclass correlation coefficient of 0.97 (p < 0.0001). We found that the courses were delivered with a good level of adherence (> 50% of scored items received the maximum of 2 points) and high competence. Groups were interactive (mean score: 1.9-2.0 out of 2) and the overall impression was on average assessed as "good". Didacticism varied from 42% to 93% of total module time and was not associated with the other competence scores. The instrument devised to measure implementation fidelity was reproducible and easy to use. The courses for the SMILE(UK) study were delivered with a good level of adherence to protocol while not compromising facilitator competence. ISRCTN57937389.
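The study's software for quantifying facilitator speech is not described in detail; the sketch below shows one plausible reading of the "didacticism" measure, as the percentage of module time occupied by facilitator speech, computed from hypothetical labeled segments of the audio recording.

```python
# Sketch (hypothetical segment format): didacticism as the percentage of
# module time occupied by facilitator speech.
def didacticism(segments, module_seconds):
    """segments: list of (start_s, end_s, speaker) tuples from the recording."""
    facilitator = sum(e - s for s, e, who in segments if who == "facilitator")
    return 100.0 * facilitator / module_seconds

segs = [(0, 300, "facilitator"), (300, 420, "group"), (420, 900, "facilitator")]
print(f"{didacticism(segs, 900):.0f}% of module time")   # prints 87%
```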
Silicon quantum processor with robust long-distance qubit couplings.
Tosi, Guilherme; Mohiyaddin, Fahd A; Schmitt, Vivien; Tenberg, Stefanie; Rahman, Rajib; Klimeck, Gerhard; Morello, Andrea
2017-09-06
Practical quantum computers require a large network of highly coherent qubits, interconnected in a design robust against errors. Donor spins in silicon provide state-of-the-art coherence and quantum gate fidelities, in a platform adapted from industrial semiconductor processing. Here we present a scalable design for a silicon quantum processor that does not require precise donor placement and leaves ample space for the routing of interconnects and readout devices. We introduce the flip-flop qubit, a combination of the electron-nuclear spin states of a phosphorus donor that can be controlled by microwave electric fields. Two-qubit gates exploit a second-order electric dipole-dipole interaction, allowing selective coupling beyond the nearest-neighbor, at separations of hundreds of nanometers, while microwave resonators can extend the entanglement to macroscopic distances. We predict gate fidelities within fault-tolerance thresholds using realistic noise models. This design provides a realizable blueprint for scalable spin-based quantum computers in silicon.
Multilevel UQ strategies for large-scale multiphysics applications: PSAAP II solar receiver
NASA Astrophysics Data System (ADS)
Jofre, Lluis; Geraci, Gianluca; Iaccarino, Gianluca
2017-06-01
Uncertainty quantification (UQ) plays a fundamental part in building confidence in predictive science. Of particular interest is the case of modeling and simulating engineering applications where, due to the inherent complexity, many uncertainties naturally arise, e.g. domain geometry, operating conditions, errors induced by modeling assumptions, etc. In this regard, one of the pacing items, especially in high-fidelity computational fluid dynamics (CFD) simulations, is the large amount of computing resources typically required to propagate incertitude through the models. Upcoming exascale supercomputers will significantly increase the available computational power. However, UQ approaches cannot rely solely on brute-force Monte Carlo (MC) sampling; the large number of uncertainty sources and the presence of nonlinearities in the solution will make straightforward MC analysis unaffordable. Therefore, this work explores the multilevel MC strategy, and its extension to multi-fidelity and time convergence, to accelerate the estimation of the effect of uncertainties. The approach is described in detail, and its performance demonstrated on a radiated turbulent particle-laden flow case relevant to solar energy receivers (PSAAP II: Particle-laden turbulence in a radiation environment). Investigation funded by DoE's NNSA under PSAAP II.
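A two-level version of the multilevel MC idea fits in a few lines. In the sketch below, run_lf and run_hf are hypothetical stand-ins for cheap and expensive models; many low-fidelity samples estimate the bulk of the mean, and a few paired samples estimate the correction, following the standard telescoping decomposition.

```python
# Sketch: two-level Monte Carlo estimator of E[Q]. Models are toy stand-ins.
import numpy as np

rng = np.random.default_rng(2)
run_hf = lambda xi: np.sin(xi) + 0.05 * xi**2   # expensive model (toy)
run_lf = lambda xi: np.sin(xi)                   # cheap model (toy)

# Level 0: many cheap samples of the low-fidelity quantity of interest.
xi0 = rng.normal(size=20000)
level0 = np.mean(run_lf(xi0))

# Level 1: few expensive samples of the correction Q_hf - Q_lf.
xi1 = rng.normal(size=200)
level1 = np.mean(run_hf(xi1) - run_lf(xi1))

print("MLMC estimate of E[Q]:", level0 + level1)
```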
Advanced Usage of Vehicle Sketch Pad for CFD-Based Conceptual Design
NASA Technical Reports Server (NTRS)
Ordaz, Irian; Li, Wu
2013-01-01
Conceptual design is the most fluid phase of aircraft design. It is important to be able to perform large scale design space exploration of candidate concepts that can achieve the design intent to avoid more costly configuration changes in later stages of design. This also means that conceptual design is highly dependent on the disciplinary analysis tools to capture the underlying physics accurately. The required level of analysis fidelity can vary greatly depending on the application. Vehicle Sketch Pad (VSP) allows the designer to easily construct aircraft concepts and make changes as the design matures. More recent development efforts have enabled VSP to bridge the gap to high-fidelity analysis disciplines such as computational fluid dynamics and structural modeling for finite element analysis. This paper focuses on the current state-of-the-art geometry modeling for the automated process of analysis and design of low-boom supersonic concepts using VSP and several capability-enhancing design tools.
Numerical exploration of dissimilar supersonic coaxial jets mixing
NASA Astrophysics Data System (ADS)
Dharavath, Malsur; Manna, P.; Chakraborty, Debasis
2015-06-01
Mixing of two coaxial supersonic dissimilar gases in a free-jet environment is numerically explored. Three-dimensional RANS equations with a k-ε turbulence model are solved using commercial CFD software. Two important experimental cases (RELIEF experiments) representing compressible mixing flow phenomena under scramjet operating conditions, for which detailed profiles of thermochemical variables are available, are taken as validation cases. Two different convective Mach numbers, 0.16 and 0.70, are considered for the simulations. The computed growth rate, pitot pressure, and mass fraction profiles for both cases match extremely well with experimental values and with other high-fidelity numerical results, in both the far-field and near-field regions. For the higher convective Mach number, the predicted growth rate matches the empirical Dimotakis curve nicely, whereas for the lower convective Mach number the predicted growth rate is higher. It is shown that a well-resolved RANS calculation can capture the mixing of two supersonic dissimilar gases better than high-fidelity LES calculations.
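The convective Mach number used to characterize the two cases has a simple closed form for streams with comparable specific-heat ratios, M_c = (U1 - U2)/(a1 + a2). A small sketch, with made-up stream velocities and sound speeds, follows.

```python
# Sketch: convective Mach number of a compressible shear layer,
# M_c = (U1 - U2) / (a1 + a2). Stream values below are made up.
def convective_mach(U1, a1, U2, a2):
    return (U1 - U2) / (a1 + a2)

# Hypothetical coaxial streams (m/s): fast inner jet, slower outer stream.
print(round(convective_mach(1800.0, 1300.0, 1100.0, 340.0), 2))  # 0.43
```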
FY17 Status Report on NEAMS Neutronics Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Jung, Y. S.; Smith, M. A.
2017-09-30
Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house meshing generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.
The human factors of workstation telepresence
NASA Technical Reports Server (NTRS)
Smith, Thomas J.; Smith, Karl U.
1990-01-01
The term workstation telepresence has been introduced to describe human-telerobot compliance, which enables the human operator to effectively project his/her body image and behavioral skills to control of the telerobot itself. Major human-factors considerations for establishing high fidelity workstation telepresence during human-telerobot operation are discussed. Telerobot workstation telepresence is defined by the proficiency and skill with which the operator is able to control sensory feedback from direct interaction with the workstation itself, and from workstation-mediated interaction with the telerobot. Numerous conditions influencing such control have been identified. This raises the question as to what specific factors most critically influence the realization of high fidelity workstation telepresence. The thesis advanced here is that perturbations in sensory feedback represent a major source of variability in human performance during interactive telerobot operation. Perturbed sensory feedback research over the past three decades has established that spatial transformations or temporal delays in sensory feedback engender substantial decrements in interactive task performance, which training does not completely overcome. A recently developed social cybernetic model of human-computer interaction can be used to guide this approach, based on computer-mediated tracking and control of sensory feedback. How the social cybernetic model can be employed for evaluating the various modes, patterns, and integrations of interpersonal, team, and human-computer interactions which play a central role in workstation telepresence is discussed.
A CFD/CSD Interaction Methodology for Aircraft Wings
NASA Technical Reports Server (NTRS)
Bhardwaj, Manoj K.
1997-01-01
With advanced subsonic transports and military aircraft operating in the transonic regime, it is becoming important to determine the effects of the coupling between aerodynamic loads and elastic forces. Since aeroelastic effects can contribute significantly to the design of these aircraft, there is a strong need in the aerospace industry to predict these aero-structure interactions computationally. To perform static aeroelastic analysis in the transonic regime, high fidelity computational fluid dynamics (CFD) analysis tools must be used in conjunction with high fidelity computational structural dynamics (CSD) analysis tools due to the nonlinear behavior of the aerodynamics in the transonic regime. There is also a need to be able to use a wide variety of CFD and CSD tools to predict these aeroelastic effects in the transonic regime. Because source codes are not always available, it is necessary to couple the CFD and CSD codes without alteration of the source codes. In this study, an aeroelastic coupling procedure is developed which will perform static aeroelastic analysis using any CFD and CSD code with little code integration. The aeroelastic coupling procedure is demonstrated on an F/A-18 Stabilator using NASTD (an in-house McDonnell Douglas CFD code) and NASTRAN. In addition, the Aeroelastic Research Wing (ARW-2) is used for demonstration of the aeroelastic coupling procedure by using ENSAERO (NASA Ames Research Center CFD code) and a finite element wing-box code (developed as part of this research).
Probabilistic Prognosis of Non-Planar Fatigue Crack Growth
NASA Technical Reports Server (NTRS)
Leser, Patrick E.; Newman, John A.; Warner, James E.; Leser, William P.; Hochhalter, Jacob D.; Yuan, Fuh-Gwo
2016-01-01
Quantifying the uncertainty in model parameters for the purpose of damage prognosis can be accomplished utilizing Bayesian inference and damage diagnosis data from sources such as non-destructive evaluation or structural health monitoring. The number of samples required to solve the Bayesian inverse problem through common sampling techniques (e.g., Markov chain Monte Carlo) renders high-fidelity finite element-based damage growth models unusable due to prohibitive computation times. However, these types of models are often the only option when attempting to model complex damage growth in real-world structures. Here, a recently developed high-fidelity crack growth model is used which, when compared to finite element-based modeling, has demonstrated reductions in computation times of three orders of magnitude through the use of surrogate models and machine learning. The model is flexible in that only the expensive computation of the crack driving forces is replaced by the surrogate models, leaving the remaining parameters accessible for uncertainty quantification. A probabilistic prognosis framework incorporating this model is developed and demonstrated for non-planar crack growth in a modified, edge-notched, aluminum tensile specimen. Predictions of remaining useful life are made over time for five updates of the damage diagnosis data, and prognostic metrics are utilized to evaluate the performance of the prognostic framework. Challenges specific to the probabilistic prognosis of non-planar fatigue crack growth are highlighted and discussed in the context of the experimental results.
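As a toy version of the Bayesian updating described above, the sketch below runs a Metropolis sampler over Paris-law crack-growth parameters given noisy crack-length diagnoses. The Paris-law forward model is a deliberately cheap stand-in for the paper's surrogate-accelerated crack driving forces, and all numbers are synthetic.

```python
# Sketch: Metropolis sampling of Paris-law parameters from noisy crack-length
# diagnoses. Synthetic stand-in for the surrogate-based prognosis framework.
import numpy as np

rng = np.random.default_rng(3)

def grow(logC, m, a0=1.0e-3, n_blocks=20):
    """Integrate da/dN = C * dK(a)^m over blocks of 1000 cycles (toy dK)."""
    a, path = a0, []
    for _ in range(n_blocks):
        dK = 10.0 * np.sqrt(a / 1.0e-3)          # toy stress-intensity range
        a += 1000.0 * (10.0 ** logC) * dK ** m
        path.append(a)
    return np.array(path)

truth = grow(-11.0, 3.0)
obs = truth * (1 + 0.02 * rng.normal(size=truth.size))   # noisy diagnoses

def log_post(theta):
    pred = grow(*theta)
    return -0.5 * np.sum(((obs - pred) / (0.02 * pred)) ** 2)

theta = np.array([-10.9, 2.9])
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(scale=[0.05, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
print("posterior mean (logC, m):", np.mean(samples[1000:], axis=0))
```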
Experimental demonstration of four-photon entanglement and high-fidelity teleportation.
Pan, J W; Daniell, M; Gasparoni, S; Weihs, G; Zeilinger, A
2001-05-14
We experimentally demonstrate observation of highly pure four-photon GHZ entanglement produced by parametric down-conversion and a projective measurement. At the same time this also demonstrates teleportation of entanglement with very high purity. Not only does the achieved high visibility enable various novel tests of quantum nonlocality, it also opens the possibility to experimentally investigate various quantum computation and communication schemes with linear optics. Our technique can, in principle, be used to produce entanglement of arbitrarily high order or, equivalently, teleportation and entanglement swapping over multiple stages.
NASA Technical Reports Server (NTRS)
Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris
2011-01-01
A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variety of experimental data sets, such as UH-60A data, DNW test data and HART II test data.
Resource quality of a symmetry-protected topologically ordered phase for quantum computation.
Miller, Jacob; Miyake, Akimasa
2015-03-27
We investigate entanglement naturally present in the 1D topologically ordered phase protected with the on-site symmetry group of an octahedron as a potential resource for teleportation-based quantum computation. We show that, as long as certain characteristic lengths are finite, all its ground states have the capability to implement any unit-fidelity one-qubit gate operation asymptotically as a key computational building block. This feature is intrinsic to the entire phase, in that perfect gate fidelity coincides with perfect string order parameters under a state-insensitive renormalization procedure. Our approach may pave the way toward a novel program to classify quantum many-body systems based on their operational use for quantum information processing.
HSCT4.0 Application: Software Requirements Specification
NASA Technical Reports Server (NTRS)
Salas, A. O.; Walsh, J. L.; Mason, B. H.; Weston, R. P.; Townsend, J. C.; Samareh, J. A.; Green, L. L.
2001-01-01
The software requirements for the High Performance Computing and Communication Program High Speed Civil Transport application project, referred to as HSCT4.0, are described. The objective of the HSCT4.0 application project is to demonstrate the application of high-performance computing techniques to the problem of multidisciplinary design optimization of a supersonic transport configuration, using high-fidelity analysis simulations. Descriptions of the various functions (and the relationships among them) that make up the multidisciplinary application as well as the constraints on the software design are provided. This document serves to establish an agreement between the suppliers and the customer as to what the HSCT4.0 application should do and provides to the software developers the information necessary to design and implement the system.
Discontinuous Galerkin Methods and High-Speed Turbulent Flows
NASA Astrophysics Data System (ADS)
Atak, Muhammed; Larsson, Johan; Munz, Claus-Dieter
2014-11-01
Discontinuous Galerkin methods are gaining importance within the CFD community as they combine arbitrarily high order of accuracy in complex geometries with parallel efficiency. In particular, the discontinuous Galerkin spectral element method (DGSEM) is a promising candidate for both the direct numerical simulation (DNS) and large eddy simulation (LES) of turbulent flows due to its excellent scaling attributes. In this talk, we present a DNS of a compressible turbulent boundary layer along a flat plate at a free-stream Mach number of M = 2.67 and assess the computational efficiency of the DGSEM at performing high-fidelity simulations of both transitional and turbulent boundary layers. We compare the accuracy of the results as well as the computational performance to results using a high-order finite difference method.
Rijsdijk, Liesbeth E; Bos, Arjan E R; Lie, Rico; Leerlooijer, Joanne N; Eiling, Ellen; Atema, Vera; Gebhardt, Winifred A; Ruiter, Robert A C
2014-04-01
This article presents a process evaluation of the implementation of the sex education programme the World Starts With Me (WSWM) for secondary school students in Uganda. The purpose of this mixed-methods study was to examine factors associated with dose delivered (number of lessons implemented) and fidelity of implementation (implementation according to the manual), as well as to identify the main barriers and facilitators of implementation. Teachers' confidence in teaching WSWM was negatively associated with dose delivered. Confidence in educating and discussing sexuality issues in class was positively associated with fidelity of implementation, whereas the importance teachers attached to open sex education showed a negative association with fidelity. Main barriers for implementing WSWM were lack of time, unavailability of computers, lack of student manuals and lack of financial support and rewards. Other barriers for successful implementation were related to high turnover of staff and insufficient training and guidance of teachers. Teachers' beliefs/attitudes towards sexuality of adolescents, condom use and sex education were found to be important socio-cognitive factors intervening with full fidelity of implementation. These findings can be used to improve the intervention implementation and to better plan for large-scale dissemination of school-based sex education programmes in sub-Saharan Africa.
Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow
NASA Astrophysics Data System (ADS)
Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca
2017-11-01
The performance characterization of complex engineering systems often relies on accurate, but computationally intensive numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite the great improvement in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
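A compact example of the correlation-leveraging estimators described above is the multi-fidelity control-variate form, sketched below with toy models standing in for the low- and high-fidelity solvers; the sample counts and model forms are illustrative assumptions.

```python
# Sketch: multi-fidelity control-variate estimator of E[Q_HF]. The LF mean is
# computed from many cheap samples; a few paired HF/LF runs correct the bias.
import numpy as np

rng = np.random.default_rng(4)
hf = lambda xi: np.exp(0.1 * xi) + 0.1 * np.sin(5 * xi)   # high fidelity (toy)
lf = lambda xi: np.exp(0.1 * xi)                           # low fidelity (toy)

xi_pair = rng.normal(size=100)        # paired HF/LF evaluations
xi_lf = rng.normal(size=50000)        # LF-only evaluations

q_hf, q_lf = hf(xi_pair), lf(xi_pair)
alpha = np.cov(q_hf, q_lf)[0, 1] / np.var(q_lf)   # control-variate coefficient
est = np.mean(q_hf) + alpha * (np.mean(lf(xi_lf)) - np.mean(q_lf))
print("multi-fidelity estimate of E[Q_HF]:", est)
```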
NASA Technical Reports Server (NTRS)
Kindall, S. M.
1980-01-01
The computer code for the trajectory processor (#TRAJ) of the high fidelity relative motion program is described. The #TRAJ processor is a 12-degree-of-freedom trajectory integrator (6 degrees of freedom for each of two vehicles) which can be used to generate digital and graphical data describing the relative motion of the Space Shuttle Orbiter and a free-flying cylindrical payload. A listing of the code, coding standards and conventions, detailed flow charts, and discussions of the computational logic are included.
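For context, the simplest relative-motion model in this setting is the linearized Clohessy-Wiltshire solution, sketched below as a 3-degree-of-freedom stand-in; the actual #TRAJ processor integrates full 12-degree-of-freedom dynamics, including attitude, for both vehicles.

```python
# Sketch: closed-form Clohessy-Wiltshire relative motion (radial x,
# along-track y, cross-track z); a 3-DOF stand-in, not the #TRAJ code.
import numpy as np

def cw_state(x0, v0, n, t):
    """Propagate relative position; n = orbital rate (rad/s), t in seconds."""
    s, c = np.sin(n * t), np.cos(n * t)
    x, y, z = x0
    vx, vy, vz = v0
    xt = (4 - 3 * c) * x + s / n * vx + 2 * (1 - c) / n * vy
    yt = (6 * (s - n * t) * x + y - 2 * (1 - c) / n * vx
          + (4 * s - 3 * n * t) / n * vy)
    zt = c * z + s / n * vz
    return np.array([xt, yt, zt])

n = 0.00113                        # roughly a low-Earth-orbit rate, rad/s
print(cw_state([100.0, 0, 0], [0, -0.1, 0], n, 600.0))  # drift after 10 min
```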
Clark, Susan M; Fu, Kai-Mei C; Ladd, Thaddeus D; Yamamoto, Yoshihisa
2007-07-27
We describe a fast quantum computer based on optically controlled electron spins in charged quantum dots that are coupled to microcavities. This scheme uses broadband optical pulses to rotate electron spins and provide the clock signal to the system. Nonlocal two-qubit gates are performed by phase shifts induced by electron spins on laser pulses propagating along a shared waveguide. Numerical simulations of this scheme demonstrate high-fidelity single-qubit and two-qubit gates with operation times comparable to the inverse Zeeman frequency.
Automated Parameter Studies Using a Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosimis, Michael J.; Nemec, Marian
2004-01-01
Computational Fluid Dynamics (CFD) is now routinely used to analyze isolated points in a design space by performing steady-state computations at fixed flight conditions (Mach number, angle of attack, sideslip), for a fixed geometric configuration of interest. This "point analysis" provides detailed information about the flowfield, which aids an engineer in understanding, or correcting, a design. A point analysis is typically performed using high fidelity methods at a handful of critical design points, e.g. a cruise or landing configuration, or a sample of points along a flight trajectory.
Multiphase Fluid Dynamics for Spacecraft Applications
NASA Astrophysics Data System (ADS)
Shyy, W.; Sim, J.
2011-09-01
Multiphase flows involving moving interfaces between different fluids/phases are observed in nature as well as in a wide range of engineering applications. With the recent development of high fidelity computational techniques, a number of challenging multiphase flow problems can now be computed. We introduce the basic notions of the main categories of multiphase flow computation: Lagrangian, Eulerian, and Eulerian-Lagrangian techniques to represent and follow the interface, and sharp- and continuous-interface methods to model interfacial dynamics. The marker-based adaptive Eulerian-Lagrangian method, which is one of the most popular methods, is highlighted with microgravity and space applications including droplet collision and spacecraft liquid fuel tank surface stability.
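The Lagrangian half of a marker-based method can be illustrated in a few lines: interface markers advected through an Eulerian velocity field. In the sketch below a prescribed solid-body vortex stands in for a flow solver, and the marker count and time step are arbitrary.

```python
# Sketch: Lagrangian interface markers advected through a prescribed
# Eulerian velocity field with an RK2 (midpoint) step.
import numpy as np

def velocity(p):
    """Solid-body vortex about the origin: u = (-y, x)."""
    return np.column_stack([-p[:, 1], p[:, 0]])

# Markers seeded on a circular interface (e.g., a droplet surface).
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
markers = np.column_stack([1.0 + 0.3 * np.cos(theta), 0.3 * np.sin(theta)])

dt = 0.01
for _ in range(314):                                 # about half a revolution
    mid = markers + 0.5 * dt * velocity(markers)     # RK2 midpoint stage
    markers = markers + dt * velocity(mid)
print(markers[0])                                    # markers follow the vortex
```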
Experimental Investigation of Project Orion Crew Exploration Vehicle Aeroheating in AEDC Tunnel 9
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Horvath, Thomas J.; Berger, Karen T.; Lillard, Randolph P.; Kirk, Benjamin S.; Coblish, Joseph J.; Norris, Joseph D.
2008-01-01
An investigation of the aeroheating environment of the Project Orion Crew Entry Vehicle has been performed in the Arnold Engineering Development Center Tunnel 9. The goals of this test were to measure turbulent heating augmentation levels on the heat shield and to obtain high-fidelity heating data for assessment of computational fluid dynamics methods. Laminar and turbulent predictions were generated for all wind tunnel test conditions and comparisons were performed with the data for the purpose of helping to define uncertainty margins for the computational method. Data from both the wind tunnel test and the computational study are presented herein.
One-way quantum computing in superconducting circuits
NASA Astrophysics Data System (ADS)
Albarrán-Arriagada, F.; Alvarado Barrios, G.; Sanz, M.; Romero, G.; Lamata, L.; Retamal, J. C.; Solano, E.
2018-03-01
We propose a method for the implementation of one-way quantum computing in superconducting circuits. Measurement-based quantum computing is a universal quantum computation paradigm in which an initial cluster state provides the quantum resource, while the iteration of sequential measurements and local rotations encodes the quantum algorithm. Up to now, technical constraints have limited a scalable approach to this quantum computing alternative. The initial cluster state can be generated with available controlled-phase gates, while the quantum algorithm makes use of high-fidelity readout and coherent feedforward. With current technology, we estimate that quantum algorithms with above 20 qubits may be implemented in the path toward quantum supremacy. Moreover, we propose an alternative initial state with properties of maximal persistence and maximal connectedness, reducing the required resources of one-way quantum computing protocols.
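The resource-state step described above can be checked with plain linear algebra. The sketch below builds a 3-qubit linear cluster state by applying controlled-Z gates to |+++>; it is a numerical illustration only, with no connection to superconducting hardware specifics.

```python
# Sketch: a 3-qubit linear cluster state from CZ gates applied to |+++>.
import numpy as np

def cz(n, a, b):
    """Controlled-Z between qubits a and b of an n-qubit register (qubit 0 = MSB)."""
    d = 2 ** n
    U = np.eye(d)
    for i in range(d):
        if (i >> (n - 1 - a)) & 1 and (i >> (n - 1 - b)) & 1:
            U[i, i] = -1.0
    return U

plus = np.ones(2) / np.sqrt(2)
state = np.kron(np.kron(plus, plus), plus)     # |+++>
state = cz(3, 1, 2) @ cz(3, 0, 1) @ state      # linear cluster state
# Amplitudes are +-1/sqrt(8) with sign (-1)^(x0*x1 + x1*x2):
print(np.round(state * np.sqrt(8)).astype(int))  # [ 1  1  1 -1  1  1 -1  1]
```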
DNS of Flow in a Low-Pressure Turbine Cascade Using a Discontinuous-Galerkin Spectral-Element Method
NASA Technical Reports Server (NTRS)
Garai, Anirban; Diosady, Laslo Tibor; Murman, Scott; Madavan, Nateri
2015-01-01
A new computational capability under development for accurate and efficient high-fidelity direct numerical simulation (DNS) and large eddy simulation (LES) of turbomachinery is described. This capability is based on an entropy-stable Discontinuous-Galerkin spectral-element approach that extends to arbitrarily high orders of spatial and temporal accuracy and is implemented in a computationally efficient manner on a modern high performance computer architecture. A validation study using this method to perform DNS of flow in a low-pressure turbine airfoil cascade are presented. Preliminary results indicate that the method captures the main features of the flow. Discrepancies between the predicted results and the experiments are likely due to the effects of freestream turbulence not being included in the simulation and will be addressed in the final paper.
Mixed-Fidelity Approach for Design of Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Li, Wu; Shields, Elwood; Geiselhart, Karl
2011-01-01
This paper documents a mixed-fidelity approach for the design of low-boom supersonic aircraft with a focus on fuselage shaping.A low-boom configuration that is based on low-fidelity analysis is used as the baseline. The fuselage shape is modified iteratively to obtain a configuration with an equivalent-area distribution derived from computational fluid dynamics analysis that attempts to match a predetermined low-boom target area distribution and also yields a low-boom ground signature. The ground signature of the final configuration is calculated by using a state-of-the-art computational-fluid-dynamics-based boom analysis method that generates accurate midfield pressure distributions for propagation to the ground with ray tracing. The ground signature that is propagated from a midfield pressure distribution has a shaped ramp front, which is similar to the ground signature that is propagated from the computational fluid dynamics equivalent-area distribution. This result supports the validity of low-boom supersonic configuration design by matching a low-boom equivalent-area target, which is easier to accomplish than matching a low-boom midfield pressure target.
Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems
NASA Technical Reports Server (NTRS)
Choudhari, Meelan M.; Yamamoto, Kazuomi
2012-01-01
Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high Reynolds number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling a systematic progress in the understanding and high-fidelity predictions of airframe noise via collaborative investigations that integrate state of the art computational fluid dynamics, computational aeroacoustics, and in depth, holistic, and multifacility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selective outcomes thus far.
High Resolution Aerospace Applications using the NASA Columbia Supercomputer
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha
2005-01-01
This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages. are industrial-level codes designed for complex geometry and incorpor.ats. CuStomized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAIink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.
Fidelity decay of the two-level bosonic embedded ensembles of random matrices
NASA Astrophysics Data System (ADS)
Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.
2010-12-01
We study the fidelity decay of the k-body embedded ensembles of random matrices for bosons distributed over two single-particle states. Fidelity is defined in terms of a reference Hamiltonian, which is a purely diagonal matrix consisting of a fixed one-body term and includes the diagonal of the perturbing k-body embedded ensemble matrix, and the perturbed Hamiltonian which includes the residual off-diagonal elements of the k-body interaction. This choice mimics the typical mean-field basis used in many calculations. We study separately the cases k = 2 and 3. We compute the ensemble-averaged fidelity decay as well as the fidelity of typical members with respect to an initial random state. Average fidelity displays a revival at the Heisenberg time, t = tH = 1, and a freeze in the fidelity decay, during which periodic revivals of period tH are observed. We obtain the relevant scaling properties with respect to the number of bosons and the strength of the perturbation. For certain members of the ensemble, we find that the period of the revivals during the freeze of fidelity occurs at fractional times of tH. These fractional periodic revivals are related to the dominance of specific k-body terms in the perturbation.
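The fidelity definition above translates directly into a small numerical experiment: evolve a random state with the diagonal reference Hamiltonian and with the perturbed one, and take the squared overlap. The sketch below uses a plain GOE-like matrix rather than an embedded ensemble, and the size and coupling strength are arbitrary.

```python
# Sketch: fidelity decay F(t) = |<psi| exp(iH0 t) exp(-iHt) |psi>|^2 for a
# GOE-like toy model (not an embedded ensemble); N and eps are arbitrary.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
N, eps = 60, 0.05
A = rng.normal(size=(N, N))
V = (A + A.T) / np.sqrt(2 * N)            # random symmetric perturbation
E = np.diag(np.sort(rng.normal(size=N)))  # fixed one-body diagonal term
H0 = E + eps * np.diag(np.diag(V))        # reference: diagonal part only
H = E + eps * V                           # adds the residual off-diagonal part

psi = rng.normal(size=N)
psi /= np.linalg.norm(psi)
for t in (0.5, 1.0, 2.0):
    f = psi @ expm(1j * H0 * t) @ expm(-1j * H * t) @ psi
    print(f"t = {t}: F = {abs(f)**2:.4f}")
```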
Optimal quantum control of multimode couplings between trapped ion qubits for scalable entanglement.
Choi, T; Debnath, S; Manning, T A; Figgatt, C; Gong, Z-X; Duan, L-M; Monroe, C
2014-05-16
We demonstrate entangling quantum gates within a chain of five trapped ion qubits by optimally shaping optical fields that couple to multiple collective modes of motion. We individually address qubits with segmented optical pulses to construct multipartite entangled states in a programmable way. This approach enables high-fidelity gates that can be scaled to larger qubit registers for quantum computation and simulation.
Eastern Renewable Generation Integration Study: Redefining What’s Possible for Renewable Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bloom, Aaron
NREL project manager Aaron Bloom introduces NREL’s Eastern Renewable Generation Integration Study (ERGIS) and high-performance computing capabilities and new methodologies that allowed NREL to model operations of the Eastern Interconnection at unprecedented fidelity. ERGIS shows that the Eastern Interconnection can balance the variability and uncertainty of wind and solar photovoltaics at a 5-minute level, for one simulated year.
Curran, Vernon; Fleet, Lisa; White, Susan; Bessell, Clare; Deshpandey, Akhil; Drover, Anne; Hayward, Mark; Valcour, James
2015-03-01
The neonatal resuscitation program (NRP) has been developed to educate physicians and other health care providers about newborn resuscitation and has been shown to improve neonatal resuscitation skills. Simulation-based training is recommended as an effective modality for teaching neonatal resuscitation, and both low- and high-fidelity manikin simulators are used. There is limited research comparing the effect of low- and high-fidelity manikin simulators on NRP learning outcomes, and more specifically on teamwork performance and confidence. The purpose of this study was to examine the effect of using low- versus high-fidelity manikin simulators in NRP instruction. A randomized posttest-only control group study design was conducted. Third-year undergraduate medical students participated in NRP instruction and were assigned to an experimental group (high-fidelity manikin simulator) or control group (low-fidelity manikin simulator). Integrated skills station (megacode) performance, participant satisfaction, confidence, and teamwork behaviour scores were compared between the study groups. Participants in the high-fidelity manikin simulator instructional group reported significantly higher total scores in overall satisfaction (p = 0.001) and confidence (p = 0.001). There were no significant differences in teamwork behaviour scores, as observed by two independent raters, nor differences on mandatory integrated skills station performance items at the p < 0.05 level. Medical students reported greater satisfaction and confidence with high-fidelity manikin simulators, but did not demonstrate significantly improved overall teamwork or integrated skills station performance. Low- and high-fidelity manikin simulators facilitate similar levels of objectively measured NRP outcomes for integrated skills station and teamwork performance.
NASA Astrophysics Data System (ADS)
Sayre, George Anthony
The purpose of this dissertation was to develop the C++ program Emergency Dose to calculate transport of radionuclides through indoor spaces using intermediate-fidelity physics that provides improved spatial heterogeneity over well-mixed models such as MELCOR and much lower computation times than CFD codes such as FLUENT. Modified potential flow theory (MPFT), an original formulation of potential flow theory with additions of turbulent jet and natural convection approximations, calculates spatially heterogeneous velocity fields that well-mixed models cannot predict. Other original contributions of MPFT are: (1) generation of high-fidelity boundary conditions relative to well-mixed-CFD coupling methods (conflation), (2) broadening of potential flow applications to arbitrary indoor spaces, previously restricted to specific applications such as exhaust hood studies, and (3) great reduction of computation time relative to CFD codes without total loss of heterogeneity. Additionally, the Lagrangian transport module, which is discussed in Sections 1.3 and 2.4, showcases an ensemble-based formulation thought to be original to interior studies. Velocity and concentration transport benchmarks against analogous formulations in COMSOL produced favorable results, with discrepancies resulting from the tetrahedral meshing used in COMSOL outperforming the Cartesian method used by Emergency Dose. A performance comparison of the concentration transport modules against MELCOR showed that Emergency Dose held advantages over the well-mixed model, especially in scenarios with many interior partitions and varied source positions. A performance comparison of the velocity module against FLUENT showed that viscous drag provided the largest error between Emergency Dose and CFD velocity calculations, but that Emergency Dose's turbulent jets well approximated the corresponding CFD jets. Overall, Emergency Dose was found to provide a viable intermediate solution method for concentration transport with relatively low computation times.
System Risk Assessment and Allocation in Conceptual Design
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Smith, Natasha L.; Zang, Thomas A. (Technical Monitor)
2003-01-01
As aerospace systems continue to evolve in addressing newer challenges in air and space transportation, there exists a heightened priority for significant improvement in system performance, cost effectiveness, reliability, and safety. Tools, which synthesize multidisciplinary integration, probabilistic analysis, and optimization, are needed to facilitate design decisions allowing trade-offs between cost and reliability. This study investigates tools for probabilistic analysis and probabilistic optimization in the multidisciplinary design of aerospace systems. A probabilistic optimization methodology is demonstrated for the low-fidelity design of a reusable launch vehicle at two levels, a global geometry design and a local tank design. Probabilistic analysis is performed on a high fidelity analysis of a Navy missile system. Furthermore, decoupling strategies are introduced to reduce the computational effort required for multidisciplinary systems with feedback coupling.
Tunable, Flexible, and Efficient Optimization of Control Pulses for Practical Qubits
NASA Astrophysics Data System (ADS)
Machnes, Shai; Assémat, Elie; Tannor, David; Wilhelm, Frank K.
2018-04-01
Quantum computation places very stringent demands on gate fidelities, and experimental implementations require both the controls and the resultant dynamics to conform to hardware-specific constraints. Superconducting qubits present the additional requirement that pulses must have simple parameterizations, so they can be further calibrated in the experiment to compensate for uncertainties in system parameters. Other quantum technologies, such as sensing, require extremely high fidelities. We present a novel, conceptually simple and easy-to-implement gradient-based optimal control technique named gradient optimization of analytic controls (GOAT), which satisfies all the above requirements, unlike previous approaches. To demonstrate GOAT's capabilities, with emphasis on flexibility and ease of subsequent calibration, we optimize fast coherence-limited pulses for two leading superconducting qubit architectures—flux-tunable transmons and fixed-frequency transmons with tunable couplers.
High Fidelity System Simulation of Multiple Components in Support of the UEET Program
NASA Technical Reports Server (NTRS)
Plybon, Ronald C.; VanDeWall, Allan; Sampath, Rajiv; Balasubramaniam, Mahadevan; Mallina, Ramakrishna; Irani, Rohinton
2006-01-01
The High Fidelity System Simulation effort has addressed various important objectives to enable additional capability within the NPSS framework. The scope emphasized the High Pressure Turbine and High Pressure Compressor components. Initial effort was directed at developing and validating an intermediate-fidelity NPSS model using PD geometry, and was extended to a high-fidelity NPSS model by overlaying detailed geometry to validate CFD against rig data. Both "feed-forward" and "feedback" approaches to analysis zooming were employed to enable system simulation capability in NPSS. These approaches have distinct benefits and applicability: "feedback" zooming allows the flow-up of information from high-fidelity analysis to update the NPSS model results by forcing the NPSS solver to converge to high-fidelity analysis predictions. This approach is effective in improving the accuracy of the NPSS model; however, it can only be used in circumstances where there is a clear physics-based strategy to flow up the high-fidelity analysis results to update the NPSS system model. The "feed-forward" zooming approach is more broadly useful in enabling detailed analysis at early stages of design for a specified set of critical operating points and using those analysis results to drive design decisions early in the development process.
Adiabatic Quantum Computing via the Rydberg Blockade
NASA Astrophysics Data System (ADS)
Keating, Tyler; Goyal, Krittika; Deutsch, Ivan
2012-06-01
We study an architecture for implementing adiabatic quantum computation with trapped neutral atoms. Ground state atoms are dressed by laser fields in a manner conditional on the Rydberg blockade mechanism, thereby providing the requisite entangling interactions. As a benchmark we study the performance of a Quadratic Unconstrained Binary Optimization (QUBO) problem whose solution is found in the ground state spin configuration of an Ising-like model. We model a realistic architecture, including the effects of magnetic level structure, with qubits encoded into the clock states of ^133Cs, effective B-fields implemented through microwaves and light shifts, and atom-atom coupling achieved by excitation to a high-lying Rydberg level. Including the fundamental effects of photon scattering we find a high fidelity for the two-qubit implementation.
Improving the gate fidelity of capacitively coupled spin qubits
NASA Astrophysics Data System (ADS)
Wang, Xin; Barnes, Edwin
2015-03-01
Precise execution of quantum gates acting on two or multiple qubits is essential to quantum computation. For semiconductor spin qubits coupled via capacitive interaction, the best fidelity for a two-qubit gate demonstrated so far is around 70%, insufficient for fault-tolerant quantum computation. In this talk we present control protocols that may substantially improve the robustness of two-qubit gates against both nuclear noise and charge noise. Our pulse sequences incorporate simultaneous dynamical decoupling protocols and are simple enough for immediate experimental realization. Together with existing control protocols for single-qubit gates, our results constitute an important step toward scalable quantum computation using spin qubits. This work is done in collaboration with Sankar Das Sarma and supported by LPS-NSA-CMTC and IARPA-MQCO.
Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.
Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2017-05-01
Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by use of SGD.
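The gradient/proximal decomposition described above can be illustrated on a toy problem. The sketch below uses plain proximal gradient descent with an l1 penalty and soft-thresholding, standing in for the more elaborate regularized dual averaging update; the matrix A plays the role of the wave-equation forward operator, and all dimensions and names are illustrative:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||x||_1; the penalty is never differentiated."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, y, lam=0.1, step=None, iters=200):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by alternating a gradient step
    on the smooth data-fidelity term with a proximal step on the regularizer."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the quadratic term
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                 # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 80))                    # toy forward operator
x_true = np.zeros(80); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
y = A @ x_true + 0.01 * rng.normal(size=40)      # noisy measurements
print(np.nonzero(np.abs(proximal_gradient(A, y)) > 0.1)[0])  # recovered support
```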
NASA Technical Reports Server (NTRS)
Hewett, Marle D.; Tartt, David M.; Duke, Eugene L.; Antoniewicz, Robert F.; Brumbaugh, Randal W.
1988-01-01
The development of an automated flight test management system (ATMS) as a component of a rapid-prototyping flight research facility for AI-based flight systems concepts is described. The rapid-prototyping facility includes real-time high-fidelity simulators, numeric and symbolic processors, and high-performance research aircraft modified to accept commands from a ground-based remotely augmented vehicle facility. The flight system configuration of the ATMS includes three computers: the TI Explorer LX and two GOULD SEL 32/27s.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2016-10-01
Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.
NASA Technical Reports Server (NTRS)
Mizukami, M.; Saunders, J. D.
1995-01-01
The supersonic diffuser of a Mach 2.68 bifurcated, rectangular, mixed-compression inlet was analyzed using a two-dimensional (2D) Navier-Stokes flow solver. Parametric studies were performed on turbulence models, computational grids, and bleed models. The computed flowfield was substantially different from the original inviscid design, due to interactions of shocks, boundary layers, and bleed. Good agreement with experimental data was obtained in many aspects. Many of the discrepancies were thought to originate primarily from 3D effects. Therefore, a balance should be struck between expending resources on a high-fidelity 2D simulation and the inherent limitations of 2D analysis. The solutions were fairly insensitive to turbulence models, grids, and bleed models. Overall, the k-ε turbulence model and the bleed models based on unchoked bleed hole discharge coefficients or uniform velocity are recommended. The 2D Navier-Stokes methods appear to be a useful tool for the design and analysis of supersonic inlets, by providing a higher fidelity simulation of the inlet flowfield than inviscid methods, in a reasonable turnaround time.
The Need for High Fidelity Lunar Regolith Simulants
NASA Technical Reports Server (NTRS)
Gaier, James R.
2008-01-01
The case is made for the need to have high fidelity lunar regolith simulants to verify the performance of structures, mechanisms, and processes to be used on the lunar surface. Minor constituents will in some cases have major consequences. Small amounts of sulfur in the regolith can poison catalysts, and metallic iron on the surface of nano-sized dust particles may cause a dramatic increase in its toxicity. So the definition of a high fidelity simulant is application-dependent. For example, in situ resource utilization will require high fidelity in chemistry, meaning careful attention to the minor components and phases; but some other applications, such as the abrasive effects on suit fabrics, might be relatively insensitive to minor component chemistry while abrasion of some metal components may be highly dependent on trace components. The lunar environment itself will change the surface chemistry of the simulant, so to have a high fidelity simulant it must be used in a high fidelity simulated environment to get an accurate simulation. Research must be conducted to determine how sensitive technologies will be to minor components and environmental factors before they can be dismissed as unimportant.
Ultrascale Visualization of Climate Data
NASA Technical Reports Server (NTRS)
Williams, Dean N.; Bremer, Timo; Doutriaux, Charles; Patchett, John; Williams, Sean; Shipman, Galen; Miller, Ross; Pugmire, David R.; Smith, Brian; Steed, Chad;
2013-01-01
Fueled by exponential increases in the computational and storage capabilities of high-performance computing platforms, climate simulations are evolving toward higher numerical fidelity, complexity, volume, and dimensionality. These technological breakthroughs are coming at a time of exponential growth in climate data, with estimates of hundreds of exabytes by 2020. To meet the challenges and exploit the opportunities that such explosive growth affords, a consortium of four national laboratories, two universities, a government agency, and two private companies formed to explore the next wave in climate science. Working in close collaboration with domain experts, the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) project aims to provide high-level solutions to a variety of climate data analysis and visualization problems.
Implementation of quantum logic gates using polar molecules in pendular states.
Zhu, Jing; Kais, Sabre; Wei, Qi; Herschbach, Dudley; Friedrich, Bretislav
2013-01-14
We present a systematic approach to implementation of basic quantum logic gates operating on polar molecules in pendular states as qubits for a quantum computer. A static electric field prevents quenching of the dipole moments by rotation, thereby creating the pendular states; also, the field gradient enables distinguishing among qubit sites. Multi-target optimal control theory is used as a means of optimizing the initial-to-target transition probability via a laser field. We give detailed calculations for the SrO molecule, a favorite candidate for proposed quantum computers. Our simulation results indicate that NOT, Hadamard and CNOT gates can be realized with high fidelity, as high as 0.985, for such pendular qubit states.
Gravity Modeling for Variable Fidelity Environments
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2006-01-01
Aerospace simulations can model worlds, such as the Earth, with differing levels of fidelity. The simulation may represent the world as a plane, a sphere, an ellipsoid, or a high-order closed surface. The world may or may not rotate. The user may select lower fidelity models based on computational limits, a need for simplified analysis, or comparison to other data. However, the user will also wish to retain a close semblance of behavior to the real world. The effects of gravity on objects are an important component of modeling real-world behavior. Engineers generally equate the term gravity with the observed free-fall acceleration. However, free-fall acceleration is not the same for all observers. To observers on the surface of a rotating world, free-fall acceleration is the sum of gravitational attraction and the centrifugal acceleration due to the world's rotation. On the other hand, free-fall acceleration equals gravitational attraction to an observer in inertial space. Surface-observed simulations (e.g., aircraft), which use non-rotating world models, may choose to model observed free-fall acceleration as the gravity term; such a model actually combines gravitational attraction with centrifugal acceleration due to the Earth's rotation. However, this modeling choice invites confusion as one evolves the simulation to higher fidelity world models or adds inertial observers. Care must be taken to model gravity in concert with the world model to avoid degrading the fidelity of modeling observed free fall. The paper will go into greater depth on gravity modeling and the physical disparities and synergies that arise when coupling specific gravity models with world models.
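As a concrete illustration of the distinction drawn above, here is a minimal sketch (assuming a simple spherical, rotating Earth; the constants and function names are illustrative, not the paper's models) of surface-observed free-fall acceleration as gravitational attraction minus the centripetal term ω × (ω × r):

```python
import numpy as np

OMEGA_E = 7.2921159e-5            # Earth rotation rate, rad/s
MU_E = 3.986004418e14             # GM of Earth, m^3/s^2
R_E = 6378137.0                   # equatorial radius, m (spherical model here)

def observed_free_fall(lat_deg):
    """Surface free-fall acceleration = gravitational attraction minus the
    centripetal term omega x (omega x r), for a spherical, rotating Earth."""
    lat = np.radians(lat_deg)
    r = R_E * np.array([np.cos(lat), 0.0, np.sin(lat)])    # position on the sphere
    omega = np.array([0.0, 0.0, OMEGA_E])                  # spin axis along z
    g_grav = -MU_E / np.dot(r, r) * r / np.linalg.norm(r)  # Newtonian attraction
    a_cent = np.cross(omega, np.cross(omega, r))           # centripetal acceleration
    return np.linalg.norm(g_grav - a_cent)

for lat in (0.0, 45.0, 90.0):
    # Observed g is smallest at the equator, where the centrifugal effect peaks
    print(f"lat {lat:4.1f} deg: g_obs = {observed_free_fall(lat):.5f} m/s^2")
```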
A ``Cyber Wind Facility'' for HPC Wind Turbine Field Experiments
NASA Astrophysics Data System (ADS)
Brasseur, James; Paterson, Eric; Schmitz, Sven; Campbell, Robert; Vijayakumar, Ganesh; Lavely, Adam; Jayaraman, Balaji; Nandi, Tarak; Jha, Pankaj; Dunbar, Alex; Motta-Mena, Javier; Craven, Brent; Haupt, Sue
2013-03-01
The Penn State ``Cyber Wind Facility'' (CWF) is a high-fidelity multi-scale high performance computing (HPC) environment in which ``cyber field experiments'' are designed and ``cyber data'' collected from wind turbines operating within the atmospheric boundary layer (ABL) environment. Conceptually the ``facility'' is akin to a high-tech wind tunnel with a controlled physical environment, but unlike a wind tunnel it replicates commercial-scale wind turbines operating in the field and forced by true atmospheric turbulence with controlled stability state. The CWF is built from state-of-the-art, high-accuracy geometry and grid design and numerical methods, with high-resolution simulation strategies that blend unsteady RANS near the surface with high-fidelity large-eddy simulation (LES) in separated boundary layer, blade, and rotor wake regions, embedded within high-resolution LES of the ABL. CWF experiments complement physical field facility experiments, which can capture wider ranges of meteorological events but with minimal control over the environment and with very small numbers of sensors at low spatial resolution. I shall report on the first CWF experiments aimed at dynamical interactions between ABL turbulence and space-time wind turbine loadings. Supported by DOE and NSF.
Aeroelastic optimization methodology for viscous and turbulent flows
NASA Astrophysics Data System (ADS)
Barcelos Junior, Manuel Nascimento Dias
2007-12-01
In recent years, the development of faster computers and parallel processing has allowed the application of high-fidelity analysis methods to the aeroelastic design of aircraft. However, these methods are restricted to final design verification, mainly due to the computational cost involved in iterative design processes. Therefore, this work is concerned with the creation of a robust and efficient aeroelastic optimization methodology for inviscid, viscous, and turbulent flows by using high-fidelity analysis and sensitivity analysis techniques. Most of the research in aeroelastic optimization, for practical reasons, treats the aeroelastic system as a quasi-static inviscid problem. In this work, as a first step toward the creation of a more complete aeroelastic optimization methodology for realistic problems, an analytical sensitivity computation technique was developed and tested for quasi-static aeroelastic viscous and turbulent flow configurations. Viscous and turbulent effects are included by using an averaged discretization of the Navier-Stokes equations, coupled with an eddy viscosity turbulence model. For quasi-static aeroelastic problems, the traditional staggered solution strategy has unsatisfactory performance when applied to cases where there is a strong fluid-structure coupling. Consequently, this work also proposes a solution methodology for aeroelastic and sensitivity analyses of quasi-static problems, which is based on the fixed point of an iterative nonlinear block Gauss-Seidel scheme. The methodology can also be interpreted as the solution of the Schur complement of the aeroelastic and sensitivity analyses' linearized systems of equations. The methodologies developed in this work are tested and verified using realistic aeroelastic systems.
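The fixed-point staggered iteration mentioned above can be sketched on a scalar toy problem: alternate a "fluid" solve (load from deflection) with a "structure" solve (deflection from load) until the coupled state converges. This is only a schematic of the nonlinear block Gauss-Seidel idea, not the dissertation's Navier-Stokes/structure solver; the toy functions are invented for illustration:

```python
import numpy as np

def solve_fluid(u_struct):
    """Toy 'fluid' solve: aerodynamic load as a nonlinear function of deflection.
    Stands in for a full flow solve at a fixed structural shape."""
    return 1.0 + 0.3 * np.tanh(u_struct)

def solve_structure(load):
    """Toy 'structure' solve: deflection from load via a linear compliance."""
    return 0.8 * load

def gauss_seidel_aeroelastic(tol=1e-12, max_iter=100):
    """Fixed point of the staggered (block Gauss-Seidel) iteration:
    repeat fluid-then-structure solves until the coupled state stops changing."""
    u = 0.0
    for it in range(max_iter):
        f = solve_fluid(u)            # fluid solve with current structural state
        u_new = solve_structure(f)    # structure solve with updated fluid load
        if abs(u_new - u) < tol:
            return u_new, f, it + 1
        u = u_new
    raise RuntimeError("staggered iteration did not converge")

u, f, iters = gauss_seidel_aeroelastic()
print(f"converged in {iters} iterations: deflection={u:.6f}, load={f:.6f}")
```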
Automated error correction in IBM quantum computer and explicit generalization
NASA Astrophysics Data System (ADS)
Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.
2018-06-01
Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states in the IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain experimental results with high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, which can both detect and automatically correct any arbitrary phase-change error, phase-flip error, bit-flip error, or a combination of these errors.
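The paper's generalized code targets n-qudit states; as a much smaller illustration of syndrome-based detection and automatic correction, here is a numpy state-vector sketch of the textbook three-qubit bit-flip code. The encoding, stabilizers, and lookup table below are standard material, not the authors' circuit:

```python
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.array([[1, 0], [0, -1]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encode a|0> + b|1>  ->  a|000> + b|111>
a, b = 0.6, 0.8
state = np.zeros(8, dtype=complex)
state[0b000], state[0b111] = a, b

# Introduce a bit-flip error on a random qubit
rng = np.random.default_rng(7)
errored_qubit = int(rng.integers(3))
state = kron(*[X if q == errored_qubit else I for q in range(3)]) @ state

# Syndrome = eigenvalues of the stabilizers Z1Z2 and Z2Z3 on the errored state
s1 = np.real(state.conj() @ kron(Z, Z, I) @ state)
s2 = np.real(state.conj() @ kron(I, Z, Z) @ state)
syndrome_to_qubit = {(-1, 1): 0, (-1, -1): 1, (1, -1): 2}   # (+1, +1) = no error
flip = syndrome_to_qubit.get((round(s1), round(s2)))

# Correct by applying X to the flagged qubit
if flip is not None:
    state = kron(*[X if q == flip else I for q in range(3)]) @ state
print("error on qubit", errored_qubit, "-> corrected:",
      np.allclose(abs(state[0b000]), a) and np.allclose(abs(state[0b111]), b))
```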
Eastern Renewable Generation Integration Study: Redefining What's Possible for Renewable Energy
Bloom, Aaron
2018-01-16
NREL project manager Aaron Bloom introduces NREL's Eastern Renewable Generation Integration Study (ERGIS) and high-performance computing capabilities and new methodologies that allowed NREL to model operations of the Eastern Interconnection at unprecedented fidelity. ERGIS shows that the Eastern Interconnection can balance the variability and uncertainty of wind and solar photovoltaics at a 5-minute level, for one simulated year.
R&D100: Lightweight Distributed Metric Service
Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike
2018-06-12
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
R&D100: Lightweight Distributed Metric Service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentile, Ann; Brandt, Jim; Tucker, Tom
2015-11-19
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
NASA Astrophysics Data System (ADS)
Yu, Long-Bao; Zhang, Wen-Hai; Ye, Liu
2007-09-01
We propose a simple scheme to realize a 1→M economical phase-covariant quantum cloning machine (EPQCM) with superconducting quantum interference device (SQUID) qubits. In our scheme, multiple SQUIDs are fixed in a microwave cavity and manipulated via adiabatic passage. Based on this model, we can realize the EPQCM with high fidelity via adiabatic quantum computation.
What Was Learned in Predicting Slender Airframe Aerodynamics with the F-16XL Aircraft
NASA Technical Reports Server (NTRS)
Rizzi, Arthur; Luckring, James M.
2016-01-01
The second Cranked-Arrow Wing Aerodynamics Project International coordinated effort has been underway to improve high-fidelity computational-fluid-dynamics predictions of slender airframe aerodynamics. The work is focused on two flow conditions and leverages a unique flight data set obtained with the F-16XL aircraft for comparison and validation. These conditions, a low-speed high-angle-of-attack case and a transonic low-angle-of-attack case, were selected from a prior prediction campaign wherein the computational fluid dynamics failed to provide acceptable results. In revisiting these two cases, approaches for improved results include better, denser grids using more grid adaptation to local flow features as well as unsteady higher-fidelity physical modeling like hybrid Reynolds-averaged Navier-Stokes/unsteady Reynolds-averaged Navier-Stokes/large-eddy simulation methods. The work embodies predictions from multiple numerical formulations contributed by multiple organizations, where some authors investigate other possible factors that could explain the discrepancies in agreement (e.g., effects due to deflected control surfaces during the flight tests as well as static aeroelastic deflection of the outer wing). This paper presents the synthesis of all the results and findings and draws some conclusions that lead to an improved understanding of the underlying flow physics, finally making the connections between the physics and aircraft features.
Scalable quantum computation scheme based on quantum-actuated nuclear-spin decoherence-free qubits
NASA Astrophysics Data System (ADS)
Dong, Lihong; Rong, Xing; Geng, Jianpei; Shi, Fazhan; Li, Zhaokai; Duan, Changkui; Du, Jiangfeng
2017-11-01
We propose a novel theoretical scheme of quantum computation. Nuclear spin pairs are utilized to encode decoherence-free (DF) qubits. A nitrogen-vacancy center serves as a quantum actuator to initialize, read out, and coherently control the DF qubits. The realization of CNOT gates between two DF qubits is also presented. Numerical simulations show high fidelities for all these processes. Additionally, we discuss the potential for scalability. Our scheme reduces the challenge of classical interfaces from controlling and observing complex quantum systems down to a simple quantum actuator. It also provides a novel way to handle complex quantum systems.
Advanced computer techniques for inverse modeling of electric current in cardiac tissue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.A.; Romero, L.A.; Diegert, C.F.
1996-08-01
For many years, ECGs and vector cardiograms have been the tools of choice for non-invasive diagnosis of cardiac conduction problems, such as found in reentrant tachycardia or Wolff-Parkinson-White (WPW) syndrome. Through skillful analysis of these skin-surface measurements of cardiac generated electric currents, a physician can deduce the general location of heart conduction irregularities. Using a combination of high-fidelity geometry modeling, advanced mathematical algorithms and massively parallel computing, Sandia's approach would provide much more accurate information and thus allow the physician to pinpoint the source of an arrhythmia or abnormal conduction pathway.
A Planar Quasi-Static Constraint Mode Tire Model
Ma, Rui; Ferris, John B.
2015-07-10
This model strikes a balance between heuristic tire models, such as a linear point-follower, that lack the fidelity to make accurate chassis load predictions and computationally intensive models. (UNCLASSIFIED: Distribution Statement A; cleared for public release.)
Real-Time Simulation of Ares I Launch Vehicle
NASA Technical Reports Server (NTRS)
Tobbe, Patrick; Matras, Alex; Wilson, Heath; Alday, Nathan; Walker, David; Betts, Kevin; Hughes, Ryan; Turbe, Michael
2009-01-01
The Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) has been developed for use by the Ares I launch vehicle System Integration Laboratory (SIL) at the Marshall Space Flight Center (MSFC). The primary purpose of the Ares SIL is to test the vehicle avionics hardware and software in a hardware-in-the-loop (HWIL) environment to certify that the integrated system is prepared for flight. ARTEMIS has been designed to be the real-time software backbone to stimulate all required Ares components through high-fidelity simulation. ARTEMIS has been designed to take full advantage of the advances in underlying computational power now available to support HWIL testing. A modular real-time design relying on a fully distributed computing architecture has been achieved. Two fundamental requirements drove ARTEMIS to pursue the use of high-fidelity simulation models in a real-time environment. First, ARTEMIS must be used to test a man-rated integrated avionics hardware and software system, thus requiring a wide variety of nominal and off-nominal simulation capabilities to certify system robustness. The second driving requirement - derived from a nationwide review of current state-of-the-art HWIL facilities - was that preserving digital model fidelity significantly reduced overall vehicle lifecycle cost by reducing testing time for certification runs and increasing flight tempo through an expanded operational envelope. These two driving requirements necessitated the use of high-fidelity models throughout the ARTEMIS simulation. The nature of the Ares mission profile imposed a variety of additional requirements on the ARTEMIS simulation. The Ares I vehicle is composed of multiple elements, including the First Stage Solid Rocket Booster (SRB), the Upper Stage powered by the J-2X engine, the Orion Crew Exploration Vehicle (CEV) which houses the crew, the Launch Abort System (LAS), and various secondary elements that separate from the vehicle. At launch, the integrated vehicle stack is composed of these stages, and throughout the mission, various elements separate from the integrated stack and tumble back towards the Earth. ARTEMIS must be capable of simulating the integrated stack through the flight as well as propagating each individual element after separation. In addition, abort sequences can lead to other unique configurations of the integrated stack as the timing and sequence of the stage separations are altered.
NASA Astrophysics Data System (ADS)
Margheri, Luca; Sagaut, Pierre
2016-11-01
To significantly increase the contribution of numerical computational fluid dynamics (CFD) simulation to risk assessment and decision making, it is important to quantitatively measure the impact of uncertainties to assess the reliability and robustness of the results. As unsteady high-fidelity CFD simulations are becoming the standard for industrial applications, reducing the number of samples required to perform sensitivity analysis (SA) and uncertainty quantification (UQ) is a real engineering challenge. The novel approach presented in this paper is based on an efficient hybridization between the anchored-ANOVA and POD/Kriging methods, which have already been used in realistic CFD-UQ applications, and the definition of best practices to achieve global accuracy. The anchored-ANOVA method is used to efficiently reduce the UQ dimension space, while POD/Kriging is used to smooth and interpolate each anchored-ANOVA term. The main advantages of the proposed method are illustrated through four applications of increasing complexity, most of them based on large-eddy simulation as a high-fidelity CFD tool: the turbulent channel flow, the flow around an isolated bluff body, a pedestrian wind comfort study in a full-scale urban area, and an application to toxic gas dispersion in a full-scale city area. The proposed c-APK method (anchored-ANOVA-POD/Kriging) inherits the advantages of each key element: interpolation through POD/Kriging precludes the use of quadrature schemes, thereby allowing for a more flexible sampling strategy, while the ANOVA decomposition allows for better domain exploration. A comparison of the three methods is given for each application. In addition, the importance of adding flexibility to the control parameters and the choice of the quantity of interest (QoI) are discussed. As a result, global accuracy can be achieved with a reasonable number of samples, allowing computationally expensive CFD-UQ analysis.
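The dimension-reduction step can be made concrete with a first-order anchored-ANOVA sketch: the quantity of interest is approximated by its value at an anchor point plus one-dimensional corrections along each input, cutting an N-dimensional UQ problem into N one-dimensional ones. This toy sketch omits the POD/Kriging interpolation of each term, and the test function and sizes are illustrative:

```python
import numpy as np

def anchored_anova_1st(f, anchor, X):
    """First-order anchored-ANOVA surrogate around an anchor point c:
    f(x) ~ f(c) + sum_i [ f(c with i-th coordinate set to x_i) - f(c) ]."""
    f0 = f(anchor)
    preds = np.full(len(X), f0)
    for i in range(len(anchor)):
        xi = np.tile(anchor, (len(X), 1))
        xi[:, i] = X[:, i]                       # vary only dimension i
        preds += np.apply_along_axis(f, 1, xi) - f0
    return preds

# Toy QoI that is nearly additive in its inputs (where first-order ANOVA excels)
f = lambda x: np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[2]
rng = np.random.default_rng(3)
anchor = np.zeros(3)
X = rng.uniform(-1, 1, size=(5, 3))
exact = np.apply_along_axis(f, 1, X)
print(np.c_[exact, anchored_anova_1st(f, anchor, X)])   # exact vs surrogate
```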
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high-fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
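The brute-force baseline mentioned above is easy to sketch: a central-difference gradient costs one pair of full analyses per design variable, which is exactly what makes finite differencing expensive when each evaluation is a high-fidelity aerodynamic solve. The toy "lift coefficient" below is illustrative, not the paper's model:

```python
import numpy as np

def central_diff_grad(f, x, h=1e-6):
    """Brute-force central-difference gradient: two full analyses per design
    variable, so the cost grows linearly with the number of parameters."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Hypothetical 'force coefficient' as a function of two design parameters
CL = lambda x: 0.1 + 2 * np.pi * x[0] + 0.05 * x[0] * x[1]
x0 = np.array([0.05, 0.3])
print(central_diff_grad(CL, x0))   # ~ [2*pi + 0.05*0.3, 0.05*0.05]
```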
NASA Technical Reports Server (NTRS)
Fijany, A.; Roberts, J. A.; Jain, A.; Man, G. K.
1993-01-01
Part 1 of this paper presented the requirements for the real-time simulation of the Cassini spacecraft along with some discussion of the DARTS algorithm. Here, in Part 2, we discuss the development and implementation of the parallel/vectorized DARTS algorithm and architecture for real-time simulation. Development of fast algorithms and architectures for real-time hardware-in-the-loop simulation of spacecraft dynamics is motivated by the fact that it represents a hard real-time problem, in the sense that the correctness of the simulation depends on both the numerical accuracy and the exact timing of the computation. For a given model fidelity, the computation must complete within a predefined time period. Further reduction in computation time allows increasing the fidelity of the model (i.e., inclusion of more flexible modes) and of the integration routine.
Creating NDA working standards through high-fidelity spent fuel modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skutnik, Steven E; Gauld, Ian C; Romano, Catherine E
2012-01-01
The Next Generation Safeguards Initiative (NGSI) is developing advanced non-destructive assay (NDA) techniques for spent nuclear fuel assemblies to advance the state-of-the-art in safeguards measurements. These measurements aim beyond the capabilities of existing methods to include the evaluation of plutonium and fissile material inventory, independent of operator declarations. Testing and evaluation of advanced NDA performance will require reference assemblies with well-characterized compositions to serve as working standards against which the NDA methods can be benchmarked and for uncertainty quantification. To support the development of standards for the NGSI spent fuel NDA project, high-fidelity modeling of irradiated fuel assemblies is being performed to characterize fuel compositions and radiation emission data. The assembly depletion simulations apply detailed operating history information and core simulation data as it is available to perform high fidelity axial and pin-by-pin fuel characterization for more than 1600 nuclides. The resulting pin-by-pin isotopic inventories are used to optimize the NDA measurements and provide information necessary to unfold and interpret the measurement data, e.g., passive gamma emitters, neutron emitters, neutron absorbers, and fissile content. A key requirement of this study is the analysis of uncertainties associated with the calculated compositions and signatures for the standard assemblies; uncertainties introduced by the calculation methods, nuclear data, and operating information. An integral part of this assessment involves the application of experimental data from destructive radiochemical assay to assess the uncertainty and bias in computed inventories, the impact of parameters such as assembly burnup gradients and burnable poisons, and the influence of neighboring assemblies on periphery rods. This paper will present the results of high fidelity assembly depletion modeling and uncertainty analysis from independent calculations performed using SCALE and MCNP. This work is supported by the Next Generation Safeguards Initiative, Office of Nuclear Safeguards and Security, National Nuclear Security Administration.
NASA Astrophysics Data System (ADS)
Beterov, I. I.; Hamzina, G. N.; Yakshina, E. A.; Tretyakov, D. B.; Entin, V. M.; Ryabtsev, I. I.
2018-03-01
High-fidelity entangled Bell states are of great interest in quantum physics. Entanglement of ultracold neutral atoms in two spatially separated optical dipole traps is promising for the implementation of quantum computing and quantum simulation and for the investigation of Bell states of material objects. We propose a method to entangle two atoms via long-range Rydberg-Rydberg interaction. As an alternative to previous approaches based on Rydberg blockade, we consider radio-frequency-assisted Stark-tuned Förster resonances in Rb Rydberg atoms. To reduce the sensitivity of the fidelity of Bell states to fluctuations of the interatomic distance, we propose to use a double adiabatic passage across the radio-frequency-assisted Stark-tuned Förster resonances, which results in a deterministic phase shift of the collective two-atom state.
Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition
NASA Astrophysics Data System (ADS)
Ilbeigi, Shahab; Chelidze, David
2017-11-01
Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
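Of the two decompositions compared above, POD is the simpler to sketch: the reduced basis comes from the singular value decomposition of a snapshot matrix, keeping enough modes to capture a target fraction of the snapshot energy. The minimal sketch below shows POD only; SOD, the paper's preferred method, additionally uses velocity snapshots and a generalized eigenproblem, which this omits. All sizes are illustrative:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition of a snapshot matrix (states x times):
    left singular vectors form the reduced basis; keep enough modes to capture
    the requested fraction of 'energy' (cumulative squared singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]

# Snapshots of a toy two-frequency traveling solution on a 200-point grid
x = np.linspace(0, 2 * np.pi, 200)
S = np.column_stack([np.sin(x - 0.1 * t) + 0.2 * np.sin(3 * (x + 0.05 * t))
                     for t in range(50)])
Phi = pod_basis(S)
print("reduced dimension:", Phi.shape[1])        # small despite 200 states
a = Phi.T @ S                                    # reduced coordinates
print("reconstruction error:", np.linalg.norm(S - Phi @ a) / np.linalg.norm(S))
```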
Simulation in a dynamic prototyping environment: Petri nets or rules?
NASA Technical Reports Server (NTRS)
Moore, Loretta A.; Price, Shannon W.; Hale, Joseph P.
1994-01-01
An evaluation of a prototyped user interface is best supported by a simulation of the system. A simulation allows for dynamic evaluation of the interface rather than just a static evaluation of the screen's appearance. This allows potential users to evaluate both the look (in terms of the screen layout, color, objects, etc.) and feel (in terms of operations and actions which need to be performed) of a system's interface. Because of the need to provide dynamic evaluation of an interface, there must be support for producing active simulations. The high-fidelity training simulators are normally delivered too late to be effectively used in prototyping the displays. Therefore, it is important to build a low fidelity simulator, so that the iterative cycle of refining the human-computer interface based upon a user's interactions can proceed early in software development.
Time-optimal control with finite bandwidth
NASA Astrophysics Data System (ADS)
Hirose, M.; Cappellaro, P.
2018-04-01
Time-optimal control theory provides recipes to achieve quantum operations with high fidelity and speed, as required in quantum technologies such as quantum sensing and computation. While technical advances have achieved the ultrastrong driving regime in many physical systems, these capabilities have yet to be fully exploited for the precise control of quantum systems, as other limitations, such as the generation of higher harmonics or the finite response time of the control apparatus, prevent the implementation of theoretical time-optimal control. Here we present a method to achieve time-optimal control of qubit systems that can take advantage of fast driving beyond the rotating wave approximation. We exploit results from time-optimal control theory to design driving protocols that can be implemented with realistic, finite-bandwidth control fields, and we find a relationship between bandwidth limitations and achievable control fidelity.
Design and characterization of integrated components for SiN photonic quantum circuits.
Poot, Menno; Schuck, Carsten; Ma, Xiao-Song; Guo, Xiang; Tang, Hong X
2016-04-04
The design, fabrication, and detailed calibration of essential building blocks towards fully integrated linear-optics quantum computation are discussed. Photonic devices are made from silicon nitride rib waveguides, where measurements on ring resonators show small propagation losses. Directional couplers are designed to be insensitive to fabrication variations. Their offset and coupling lengths are measured, as well as the phase difference between the transmitted and reflected light. With careful calibrations, the insertion loss of the directional couplers is found to be small. Finally, an integrated controlled-NOT circuit is characterized by measuring the transmission through different combinations of inputs and outputs. The gate fidelity for the CNOT operation with this circuit is estimated to be 99.81% after post selection. This high fidelity is due to our robust design, good fabrication reproducibility, and extensive characterizations.
Simulation in a dynamic prototyping environment: Petri nets or rules?
NASA Technical Reports Server (NTRS)
Moore, Loretta A.; Price, Shannon; Hale, Joseph P.
1994-01-01
An evaluation of a prototyped user interface is best supported by a simulation of the system. A simulation allows for dynamic evaluation of the interface rather than just a static evaluation of the screen's appearance. This allows potential users to evaluate both the look (in terms of the screen layout, color, objects, etc.) and feel (in terms of operations and actions which need to be performed) of a system's interface. Because of the need to provide dynamic evaluation of an interface, there must be support for producing active simulations. The high-fidelity training simulators are delivered too late to be effectively used in prototyping the displays. Therefore, it is important to build a low fidelity simulator, so that the iterative cycle of refining the human-computer interface based upon a user's interactions can proceed early in software development.
High-fidelity simulation capability for virtual testing of seismic and acoustic sensors
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.
2005-05-01
This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
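The propagation component described above rests on standard FDTD time stepping. The following is a deliberately minimal 2-D scalar wave-equation sketch (point source, uniform medium, simple p = 0 outer boundary), orders of magnitude simpler than the heterogeneous, massively parallel 3-D solvers in the paper; every parameter is illustrative:

```python
import numpy as np

def fdtd_acoustic_2d(n=200, steps=400, c=343.0, dx=1.0):
    """Minimal 2-D finite-difference time-domain solve of the scalar wave
    equation p_tt = c^2 (p_xx + p_yy) with a Gaussian point-source pulse."""
    dt = 0.5 * dx / (c * np.sqrt(2))               # CFL-stable time step
    p_old = np.zeros((n, n)); p = np.zeros((n, n))
    lap = np.zeros((n, n))
    r2 = (c * dt / dx) ** 2
    for k in range(steps):
        # Five-point Laplacian on the interior; edges stay at p = 0
        lap[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:]
                           + p[1:-1, :-2] - 4 * p[1:-1, 1:-1])
        p_new = 2 * p - p_old + r2 * lap           # leapfrog update in time
        p_new[n // 2, n // 2] += np.exp(-((k * dt - 0.05) / 0.01) ** 2)  # source
        p_old, p = p, p_new
    return p

field = fdtd_acoustic_2d()
print("peak |p| on final step:", np.abs(field).max())
```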
Concept Maps: A Tool to Prepare for High Fidelity Simulation in Nursing
ERIC Educational Resources Information Center
Daley, Barbara J.; Beman, Sarah Black; Morgan, Sarah; Kennedy, Linda; Sheriff, Mandy
2017-01-01
In this study, the use of concept mapping as a method to prepare for high fidelity simulated learning experiences was investigated. Fourth year baccalaureate nursing students were taught how to use concept maps as a way to prepare for high fidelity simulated nursing experiences. Students prepared concept maps for two simulated experiences…
Evaluation of Drogue Parachute Damping Effects Utilizing the Apollo Legacy Parachute Model
NASA Technical Reports Server (NTRS)
Currin, Kelly M.; Gamble, Joe D.; Matz, Daniel A.; Bretz, David R.
2011-01-01
Drogue parachute damping is required to dampen the Orion Multi-Purpose Crew Vehicle (MPCV) crew module (CM) oscillations prior to deployment of the main parachutes. During the Apollo program, drogue parachute damping was modeled on the premise that the drogue parachute force vector aligns with the resultant velocity of the parachute attach point on the CM. Equivalent Cm_q and Cm_alpha equations for drogue parachute damping resulting from the Apollo legacy parachute damping model premise have recently been developed. The MPCV computer simulations ANTARES and Osiris have implemented high-fidelity two-body parachute damping models. However, high-fidelity model-based damping motion predictions do not match the damping observed during wind tunnel and full-scale free-flight oscillatory motion. This paper will present the methodology for comparing and contrasting the Apollo legacy parachute damping model with full-scale free-flight oscillatory motion. The analysis shows agreement between the Apollo legacy parachute damping model and full-scale free-flight oscillatory motion.
Self-stabilized narrow-bandwidth and high-fidelity entangled photons generated from cold atoms
NASA Astrophysics Data System (ADS)
Yu, Y. C.; Ding, D. S.; Dong, M. X.; Shi, S.; Zhang, W.; Shi, B. S.
2018-04-01
Entangled photon pairs are critically important in fundamental quantum mechanics research as well as in many areas within the field of quantum information, such as quantum communication, quantum computation, and quantum cryptography. Previous demonstrations of entangled photons based on atomic ensembles were achieved by using a reference laser to stabilize the phase of two spontaneous four-wave mixing paths. Here, we demonstrate a convenient and efficient scheme to generate polarization-entangled photons with a narrow bandwidth of 57.2 ± 1.6 MHz and a high fidelity of 96.3 ± 0.8% by using a phase self-stabilized multiplexing system formed by two beam displacers and two half-wave plates, where the relative phase between the different signal paths can be eliminated completely. It is possible to stabilize an entangled photon pair for a long time with this system and produce all four Bell states, making this a vital step forward in the field of quantum information.
Analyzing Dynamics of Cooperating Spacecraft
NASA Technical Reports Server (NTRS)
Hughes, Stephen P.; Folta, David C.; Conway, Darrel J.
2004-01-01
A software library has been developed to enable high-fidelity computational simulation of the dynamics of multiple spacecraft distributed over a region of outer space and acting with a common purpose. All of the modeling capabilities afforded by this software are available independently in other, separate software systems, but have not previously been brought together in a single system. A user can choose among several dynamical models, many high-fidelity environment models, and several numerical-integration schemes. The user can select whether to use models that assume weak coupling between spacecraft, or strong coupling in the case of feedback control or tethering of spacecraft to each other. For weak coupling, spacecraft orbits are propagated independently and are synchronized in time by controlling the step size of the integration. For strong coupling, the orbits are integrated simultaneously. Among the integration schemes that the user can choose are Runge-Kutta-Verner, Prince-Dormand, Adams-Bashforth-Moulton, and Bulirsch-Stoer. Comparisons of performance are included for both the weak- and strong-coupling dynamical models for all of the numerical integrators.
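The weak-coupling mode described above can be sketched as independent propagation with a shared, synchronized step size. The toy example below (point-mass gravity only, a hand-rolled RK4 step, illustrative initial states) advances two spacecraft in lockstep; it is a sketch of the idea, not the library's API:

```python
import numpy as np

MU = 3.986004418e14   # Earth GM, m^3/s^2

def two_body(state):
    """Point-mass gravity; stands in for the library's high-fidelity force models."""
    r, v = state[:3], state[3:]
    return np.concatenate([v, -MU * r / np.linalg.norm(r) ** 3])

def rk4_step(state, dt):
    k1 = two_body(state)
    k2 = two_body(state + 0.5 * dt * k1)
    k3 = two_body(state + 0.5 * dt * k2)
    k4 = two_body(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def propagate_formation(states, t_end, dt=10.0):
    """Weak-coupling mode: each spacecraft is propagated independently, and the
    integrators stay synchronized by sharing a common step size."""
    t = 0.0
    while t < t_end:
        h = min(dt, t_end - t)                   # common, synchronized step
        states = [rk4_step(s, h) for s in states]
        t += h
    return states

leader = np.array([7000e3, 0, 0, 0, 7546.0, 0])           # near-circular LEO
follower = leader + np.array([0, 1000.0, 0, 0, 0, 0])     # 1 km offset
sats = propagate_formation([leader, follower], t_end=3600.0)
print("separation after 1 h: %.1f m" % np.linalg.norm(sats[0][:3] - sats[1][:3]))
```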
Optimized pulse shaping for trapped ion quantum computing
NASA Astrophysics Data System (ADS)
Manning, T.; Debnath, Shantanu; Choi, Taeyoung; Figgatt, Caroline; Monroe, Chris
2013-05-01
We perform entangling phase gates between pairs of qubits in a chain of trapped atomic ytterbium ions. Beat notes between frequency comb lines of a pulsed laser coherently drive Raman transitions that couple the hyperfine qubits to multiple collective transverse modes of motion. By optimizing the phase and amplitude of segmented laser pulses, we demonstrate a five-segment scheme to entangle two qubits with high fidelity over a range of detunings. We compare this special case of full control of spin-motion entanglement to a traditional single-segment gate. We extend this scheme to selectively entangle pairs of qubits in larger chains using individual optical addressing, where we couple to all the motional modes. We show how these robust gates can achieve high fidelities for practical gate times in an approach that scales realistically to much larger numbers of qubits. This work is supported by grants from the U.S. Army Research Office with funding from the DARPA OLE program, IARPA, and the MURI program; and the NSF Physics Frontier Center at JQI.
Maneval, Rhonda; Fowler, Kimberly A; Kays, John A; Boyd, Tiffany M; Shuey, Jennifer; Harne-Britner, Sarah; Mastrine, Cynthia
2012-03-01
This study was conducted to determine whether the addition of high-fidelity patient simulation to new nurse orientation enhanced critical thinking and clinical decision-making skills. A pretest-posttest design was used to assess critical thinking and clinical decision-making skills in two groups of graduate nurses. Compared with the control group, the high-fidelity patient simulation group did not show significant improvement in mean critical thinking or clinical decision-making scores. When mean scores were analyzed, both groups showed an increase in critical thinking scores from pretest to posttest, with the high-fidelity patient simulation group showing greater gains in overall scores. However, neither group showed a statistically significant increase in mean test scores. The effect of high-fidelity patient simulation on critical thinking and clinical decision-making skills remains unclear. Copyright 2012, SLACK Incorporated.
High-Fidelity Simulation for Neonatal Nursing Education: An Integrative Review of the Literature.
Cooper, Allyson
2015-01-01
The lack of safe avenues to develop neonatal nursing competencies using human subjects leads to the notion that simulation education for neonatal nurses might be an ideal form of education. This integrative literature review compares traditional, teacher-centered education with high-fidelity simulation education for neonatal nurses. It examines the theoretical frameworks used in neonatal nursing education and outlines the advantages of this type of training, including improving communication and teamwork; providing an innovative pedagogical approach; and aiding in skill acquisition, confidence, and participant satisfaction. The importance of debriefing is also examined. High-fidelity simulation is not without disadvantages, including its significant cost, the time associated with training, the need for very complex technical equipment, and increased faculty resource requirements. Innovative uses of high-fidelity simulation in neonatal nursing education are suggested. High-fidelity simulation has great potential but requires additional research to fully prove its efficacy.
NASA Technical Reports Server (NTRS)
Goodrich, Kenneth H.
1993-01-01
A batch air combat simulation environment, the tactical maneuvering simulator (TMS), is presented. The TMS is a tool for developing and evaluating tactical maneuvering logics, but it can also be used to evaluate the tactical implications of perturbations to aircraft performance or supporting systems. The TMS can simulate air combat between any number of engagement participants, with practical limits imposed by computer memory and processing power. Aircraft are modeled using equations of motion, control laws, aerodynamics, and propulsive characteristics equivalent to those used in high-fidelity piloted simulations. Data bases representative of a modern high-performance aircraft with and without thrust-vectoring capability are included. To simplify the task of developing and implementing maneuvering logics in the TMS, an outer-loop control system, the tactical autopilot (TA), is implemented in the aircraft simulation model. The TA converts guidance commands by computerized maneuvering logics from desired angle of attack and wind-axis bank-angle inputs to the inner loop control augmentation system of the aircraft. The capabilities and operation of the TMS and the TA are described.
Effects of channel tap spacing on delay-lock tracking
NASA Astrophysics Data System (ADS)
Dana, Roger A.; Milner, Brian R.; Bogusch, Robert L.
1995-12-01
High fidelity simulations of communication links operating through frequency selective fading channels require both accurate channel models and faithful reproduction of the received signal. In modern radio receivers, processing beyond the analog-to-digital converter (A/D) is done digitally, so a high fidelity simulation is actually an emulation of this digital signal processing. The 'simulation' occurs in constructing the output of the A/D. One approach to constructing the A/D output is to convolve the channel impulse response function with the combined impulse response of the transmitted modulation and the A/D. For both link simulations and hardware channel simulators, the channel impulse response function is then generated with a finite number of samples per chip, and the convolution is implemented in a tapped delay line. In this paper we discuss the effects of the channel model tap spacing on the performance of delay locked loops (DLLs) in both direct sequence and frequency hopped spread spectrum systems. A frequency selective fading channel is considered, and the channel impulse response function is constructed with an integer number of taps per modulation symbol or chip. The tracking loop time delay is computed theoretically for this tapped delay line channel model and is compared to the results of high fidelity simulations of actual DLLs. A surprising result is obtained: the performance of the DLL depends strongly on the number of taps per chip, and as this number increases, the DLL delay approaches the theoretical limit.
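The tapped-delay-line construction and early-late tracking described above are straightforward to prototype. Below is a minimal Python sketch under invented parameters: a random PN code, a three-tap Rayleigh channel, and a simple early-late gate standing in for a full DLL; none of the waveforms or constants come from the paper.

```python
# Sketch: toy direct-sequence link with a tapped-delay-line channel sampled
# at `taps_per_chip` taps per chip. All tap values, code length, and gate
# spacing are invented for illustration.
import numpy as np

def tapped_delay_line(signal, tap_gains):
    """Convolve the sampled signal with a discrete channel impulse
    response realized as a tapped delay line."""
    return np.convolve(signal, tap_gains)[: len(signal)]

def early_late_discriminator(rx, ref, half_gate):
    """Classic early-late gate: correlate against the reference code
    advanced and delayed by half a gate, in samples."""
    early = np.dot(rx[half_gate:], ref[: len(rx) - half_gate])
    late = np.dot(rx[: len(rx) - half_gate], ref[half_gate:])
    return early - late  # near zero when the code is perfectly tracked

rng = np.random.default_rng(0)
taps_per_chip = 4                      # channel samples per chip
chips = rng.choice([-1.0, 1.0], 256)   # PN spreading code
sig = np.repeat(chips, taps_per_chip)  # rectangular chip waveform

# Frequency-selective channel: a few complex Rayleigh taps, one per sample.
h = (rng.normal(size=3) + 1j * rng.normal(size=3)) / np.sqrt(2)
rx = tapped_delay_line(sig.astype(complex), h)

print(early_late_discriminator(rx.real, sig, taps_per_chip // 2))
```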
Unsupervised Learning Through Randomized Algorithms for High-Volume High-Velocity Data (ULTRA-HV).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinar, Ali; Kolda, Tamara G.; Carlberg, Kevin Thomas
Through long-term investments in computing, algorithms, facilities, and instrumentation, DOE is an established leader in massive-scale, high-fidelity simulations, as well as science-leading experimentation. In both cases, DOE is generating more data than it can analyze, and the problem is intensifying quickly. The need for advanced algorithms that can automatically convert the abundance of data into a wealth of useful information by discovering hidden structures is well recognized. Such efforts, however, are hindered by the massive volume of the data and its high velocity. Here, the challenge is developing unsupervised learning methods to discover hidden structure in high-volume, high-velocity data.
Chen, Yue; Fang, Zhao-Xiang; Ren, Yu-Xuan; Gong, Lei; Lu, Rong-De
2015-09-20
Optical vortices are associated with a spatial phase singularity. Such a beam with a vortex is valuable in optical microscopy, hyper-entanglement, and optical levitation. In these applications, vortex beams with a perfect circular shape and a large topological charge are highly desirable, but the generation of perfect vortices with high topological charges is challenging. We present a novel method to create perfect vortex beams with large topological charges using a digital micromirror device (DMD) through binary amplitude modulation and a narrow Gaussian approximation. The DMD with binary holograms encoding both the spatial amplitude and the phase can generate fast-switchable, reconfigurable optical vortex beams with high quality and fidelity. With either the binary Lee hologram or the superpixel binary encoding technique, we were able to generate the corresponding hologram with high fidelity and create a perfect vortex with a topological charge as large as 90. The physical properties of the perfect vortex beam produced were characterized through measurements of propagation dynamics and the focusing fields. The measurements show good consistency with the theoretical simulation. The perfect vortex beams produced meet the demanding requirements of applications in optical manipulation and control, momentum transfer, quantum computing, and biophotonics.
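As a rough illustration of the binary-amplitude encoding involved, here is a minimal Python sketch of a Lee-type binary hologram for a ring-shaped vortex target. The duty-cycle mapping duty = arcsin(A)/pi (first-order amplitude varying as sin(pi*duty)) is one standard Lee encoding, not necessarily the exact variant used in the paper, and the grating period, charge, and ring geometry are arbitrary choices.

```python
# Sketch: binary Lee hologram encoding a "perfect" vortex field on a DMD.
# All geometry and the carrier period are invented for illustration.
import numpy as np

N = 1024                                     # DMD pixels (square region)
xx, yy = np.meshgrid(np.arange(N) - N / 2, np.arange(N) - N / 2)
r, theta = np.hypot(xx, yy), np.arctan2(yy, xx)

ell = 90                                     # topological charge
r0, w = 300.0, 12.0                          # ring radius and width (pixels)
amp = np.exp(-((r - r0) ** 2) / (2 * w**2))  # narrow Gaussian ring, in [0,1]
phase = ell * theta                          # vortex phase, ell * theta

period = 8.0                                 # carrier grating period (pixels)
duty = np.arcsin(amp) / np.pi                # local duty cycle in [0, 0.5]
mirrors = (np.cos(2 * np.pi * xx / period + phase)
           >= np.cos(np.pi * duty)).astype(np.uint8)
# `mirrors` is the on/off map; its first diffraction order approximates
# amp * exp(1j * phase): a ring-shaped beam carrying an ell-charged vortex.
```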
PHYSICS OF ECLIPSING BINARIES. II. TOWARD THE INCREASED MODEL FIDELITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prša, A.; Conroy, K. E.; Horvat, M.
The precision of photometric and spectroscopic observations has been systematically improved in the last decade, mostly thanks to space-borne photometric missions and ground-based spectrographs dedicated to finding exoplanets. The field of eclipsing binary stars strongly benefited from this development. Eclipsing binaries serve as critical tools for determining fundamental stellar properties (masses, radii, temperatures, and luminosities), yet the models are not capable of reproducing observed data well, either because of missing physics or because of insufficient precision. This led to a predicament where radiative and dynamical effects, hitherto buried in noise, started showing up routinely in the data but were not accounted for in the models. PHOEBE (PHysics Of Eclipsing BinariEs; http://phoebe-project.org) is an open source modeling code for computing theoretical light and radial velocity curves that addresses both problems by incorporating missing physics and by increasing the computational fidelity. In particular, we discuss triangulation as a superior surface discretization algorithm, meshing of rotating single stars, light travel time effects, advanced phase computation, volume conservation in eccentric orbits, and improved computation of local intensity across the stellar surfaces that includes the photon-weighted mode, the enhanced limb darkening treatment, the better reflection treatment, and Doppler boosting. Here we present the concepts on which PHOEBE is built and proofs of concept that demonstrate the increased model fidelity.
Li, Chen; Wang, Haiwei; Yuan, Tiangang; Woodman, Andrew; Yang, Decheng; Zhou, Guohui; Cameron, Craig E; Yu, Li
2018-05-01
Previous studies have shown that the FMDV Asia1/YS/CHA/05 high-fidelity mutagen-resistant variants are attenuated (Zeng et al., 2014). Here, we introduced the same single or multiple-amino-acid substitutions responsible for increased 3D pol fidelity of type Asia1 FMDV into the type O FMDV O/YS/CHA/05 infectious clone. The rescued viruses O-DA and O-DAMM are lower replication fidelity mutants and showed an attenuated phenotype. These results demonstrated that the same amino acid substitution of 3D pol in different serotypes of FMDV strains had different effects on viral fidelity. In addition, nucleoside analogues were used to select high-fidelity mutagen-resistant type O FMDV variants. The rescued mutagen-resistant type O FMDV high-fidelity variants exhibited significantly attenuated fitness and a reduced virulence phenotype. These results have important implications for understanding the molecular mechanism of FMDV evolution and pathogenicity, especially in developing a safer modified live-attenuated vaccine against FMDV. Copyright © 2018 Elsevier Inc. All rights reserved.
Modeling the Space Debris Environment with MASTER-2009 and ORDEM2010
NASA Technical Reports Server (NTRS)
Flegel, S.; Gelhaus, J.; Wiedemann, C.; Mockel, M.; Vorsmann, P.; Krisko, P.; Xu, Y. -L.; Horstman, M. F.; Opiela, J. N.; Matney, M.;
2010-01-01
Spacecraft analysis using ORDEM2010 relies on a high-fidelity population model to compute risk to on-orbit assets. The ORDEM2010 GUI allows visualization of spacecraft flux in 2-D and 1-D. The population was produced using a Bayesian statistical approach with measured and modeled environment data. Validation for sizes < 1 mm was performed using Shuttle window and radiator impact measurements; validation for sizes > 1 mm is ongoing.
Experiments in Quantum Coherence and Computation With Single Cooper-Pair Electronics
2006-01-22
...through the cavity. In the absence of damping, exact diagonalization of the Jaynes-Cummings Hamiltonian yields the excited eigenstates (dressed states)... neglecting rapidly oscillating terms and omitting damping for the moment, Eq. (16) reduces to the Jaynes-Cummings Hamiltonian (1) with V = EJ/ħ and cou... ...is therefore little entanglement between the field and qubit in this situation and the rotation fidelity is high. To model the effect of the drive on...
NASA's Pleiades Supercomputer Crunches Data For Groundbreaking Analysis and Visualizations
2016-11-23
The Pleiades supercomputer at NASA's Ames Research Center, recently named the 13th fastest computer in the world, provides scientists and researchers high-fidelity numerical modeling of complex systems and processes. By using detailed analyses and visualizations of large-scale data, Pleiades is helping to advance human knowledge and technology, from designing the next generation of aircraft and spacecraft to understanding the Earth's climate and the mysteries of our galaxy.
Overview of MSFC's Applied Fluid Dynamics Analysis Group Activities
NASA Technical Reports Server (NTRS)
Garcia, Roberto; Griffin, Lisa; Williams, Robert
2002-01-01
This viewgraph report presents an overview of activities and accomplishments of NASA's Marshall Space Flight Center's Applied Fluid Dynamics Analysis Group. Expertise in this group focuses on high-fidelity fluids design and analysis with application to space shuttle propulsion and next generation launch technologies. Topics covered include: computational fluid dynamics research and goals, turbomachinery research and activities, nozzle research and activities, combustion devices, engine systems, MDA development and CFD process improvements.
Load Balancing Strategies for Multi-Block Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)
2002-01-01
The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high-performance computation of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
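As context for what such strategies trade off, a minimal sketch of one plausible baseline follows: a greedy "largest grid first" assignment to the least-loaded processor. This is not one of the paper's three strategies, and the grid sizes are hypothetical.

```python
# Sketch: greedy LPT-style load balancing of overset grids across
# processors, using hypothetical cell counts.
import heapq

def balance(grid_sizes, n_procs):
    """Assign each grid to the currently least-loaded processor,
    visiting grids in decreasing size order."""
    heap = [(0, p) for p in range(n_procs)]   # (load, processor id)
    heapq.heapify(heap)
    assignment = {}
    for gid, size in sorted(enumerate(grid_sizes),
                            key=lambda kv: kv[1], reverse=True):
        load, p = heapq.heappop(heap)
        assignment[gid] = p
        heapq.heappush(heap, (load + size, p))
    return assignment

grids = [1_200_000, 250_000, 800_000, 75_000, 430_000]   # cells per grid
print(balance(grids, n_procs=3))
```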
NASA Astrophysics Data System (ADS)
Amiraux, Mathieu
Rotorcraft Blade-Vortex Interaction (BVI) remains one of the most challenging flow phenomena to simulate numerically. Over the past decade, the HART-II rotor test and its extensive experimental dataset have been a major database for validation of CFD codes. Its strong BVI signature, with high levels of intrusive noise and vibrations, makes it a difficult test for computational methods. The main challenge is to accurately capture and preserve the vortices which interact with the rotor, while predicting correct blade deformations and loading. This doctoral dissertation presents the application of a coupled CFD/CSD methodology to the problem of helicopter BVI and compares three levels of fidelity for aerodynamic modeling: a hybrid lifting-line/free-wake (wake coupling) method with a modified compressible unsteady model; a hybrid URANS/free-wake method; and a URANS-based wake-capturing method, using multiple overset meshes to capture the entire flow field. To further increase numerical correlation, three helicopter fuselage models are implemented in the framework. The first is a high-resolution 3D GPU panel code; the second is an immersed-boundary-based method with 3D elliptic grid adaption; the last uses a body-fitted, curvilinear fuselage mesh. The main contribution of this work is the implementation and systematic comparison of multiple numerical methods to perform BVI modeling. The trade-offs between solution accuracy and computational cost are highlighted for the different approaches. Various improvements have been made to each code to enhance physical fidelity, while advanced technologies, such as GPU computing, have been employed to increase efficiency. The resulting numerical setup covers all aspects of the simulation, creating a truly multi-fidelity and multi-physics framework. Overall, the wake-capturing approach showed the best BVI phasing correlation and good blade deflection predictions, with slightly under-predicted aerodynamic loading magnitudes. However, it proved to be much more expensive than the other two methods. Wake coupling with the RANS solver had very good loading magnitude predictions, and therefore good acoustic intensities, with acceptable computational cost. The lifting-line-based technique often over-predicted aerodynamic levels, owing to the degree of empiricism in the model, but its very short run times, thanks to GPU technology, make it a very attractive approach.
Injury representation against ballistic threats using three novel numerical models.
Breeze, Johno; Fryer, R; Pope, D; Clasper, J
2017-06-01
Injury modelling of ballistic threats is a valuable tool for informing policy on personal protective equipment and other injury mitigation methods. Currently, the Ministry of Defence (MoD) and Centre for the Protection of National Infrastructure (CPNI) are focusing on the development of three interlinking numerical models, each of a different fidelity, to answer specific questions on current threats. High-fidelity models simulate the physical events most realistically and will be used in the future to test the medical effectiveness of personal armour systems. They are, however, generally computationally intensive and slow running, and much of the experimental data on which to base their algorithms does not yet exist. Medium-fidelity models, such as the personnel vulnerability simulation (PVS), generally use algorithms based on physical or engineering estimations of interaction. This enables a reasonable representation of reality and greatly speeds up runtime, allowing full assessments of the entire body area to be undertaken. Low-fidelity models such as the human injury predictor (HIP) tool generally use simplistic algorithms to make injury predictions. Individual scenarios can be run very quickly and hence enable statistical casualty assessments of large groups, where significant uncertainty concerning the threat and affected population exists. HIP is used to simulate the blast and penetrative fragmentation effects of a terrorist detonation of an improvised explosive device within crowds of people in metropolitan environments. This paper describes the collaboration between the MoD and CPNI using examples of all three fidelities of injury model and highlights future areas of research that are required. Published by the BMJ Publishing Group Limited.
See one, do one, teach one: advanced technology in medical education.
Vozenilek, John; Huff, J Stephen; Reznek, Martin; Gordon, James A
2004-11-01
The concept of "learning by doing" has become less acceptable, particularly when invasive procedures and high-risk care are required. Restrictions on medical educators have prompted them to seek alternative methods to teach medical knowledge and gain procedural experience. Fortunately, the last decade has seen an explosion in the number of tools available to enhance medical education: web-based education, virtual reality, and high-fidelity patient simulation. This paper presents some of the consensus statements regarding these tools agreed upon by members of the Educational Technology Section of the 2004 AEM Consensus Conference for Informatics and Technology in Emergency Department Health Care, held in Orlando, Florida. Web-based teaching: 1) Every ED should have access to medical educational materials via the Internet, computer-based training, and other effective education methods for point-of-service information, continuing medical education, and training. 2) Real-time automated tools should be integrated into Emergency Department Information Systems [EDIS] for contemporaneous education. Virtual reality [VR]: 1) Emergency physicians and emergency medicine societies should become more involved in VR development and assessment. 2) Nationally accepted protocols for the proper assessment of VR applications should be adopted, and large multi-center groups should be formed to perform these studies. High-fidelity simulation: Emergency medicine residency programs should consider the use of high-fidelity patient simulators to enhance the teaching and evaluation of core competencies among trainees. Across specialties, patient simulation, virtual reality, and the Web will soon enable medical students and residents to... see one, simulate many, do one competently, and teach everyone.
Efficient Numerical Simulation of Aerothermoelastic Hypersonic Vehicles
NASA Astrophysics Data System (ADS)
Klock, Ryan J.
Hypersonic vehicles operate in a high-energy flight environment characterized by high dynamic pressures, high thermal loads, and non-equilibrium flow dynamics. This environment induces strong fluid, thermal, and structural dynamics interactions that are unique to this flight regime. If these vehicles are to be effectively designed and controlled, then a robust and intuitive understanding of each of these disciplines must be developed not only in isolation, but also when coupled. Limitations on scaling and the availability of adequate test facilities mean that physical investigation is infeasible. Ever growing computational power offers the ability to perform elaborate numerical simulations, but also has its own limitations. The state of the art in numerical simulation is either to create ever more high-fidelity physics models that do not couple well and require too much processing power to consider more than a few seconds of flight, or to use low-fidelity analytical models that can be tightly coupled and processed quickly, but do not represent realistic systems due to their simplifying assumptions. Reduced-order models offer a middle ground by distilling the dominant trends of high-fidelity training solutions into a form that can be quickly processed and more tightly coupled. This thesis presents a variably coupled, variable-fidelity, aerothermoelastic framework for the simulation and analysis of high-speed vehicle systems using analytical, reduced-order, and surrogate modeling techniques. Full launch-to-landing flights of complete vehicles are considered and used to define flight envelopes with aeroelastic, aerothermal, and thermoelastic limits, tune in-the-loop flight controllers, and inform future design considerations. A partitioned approach to vehicle simulation is considered in which regions dominated by particular combinations of processes are made separate from the overall solution and simulated by a specialized set of models to improve overall processing speed and overall solution fidelity. A number of enhancements to this framework are made through 1. the implementation of a publish-subscribe code architecture for rapid prototyping of physics and process models. 2. the implementation of a selection of linearization and model identification methods including high-order pseudo-time forward difference, complex-step, and direct identification from ordinary differential equation inspection. 3. improvements to the aeroheating and thermal models with non-equilibrium gas dynamics and generalized temperature dependent material thermal properties. A variety of model reduction and surrogate model techniques are applied to a representative hypersonic vehicle on a terminal trajectory to enable complete aerothermoelastic flight simulations. Multiple terminal trajectories of various starting altitudes and Mach numbers are optimized to maximize final kinetic energy of the vehicle upon reaching the surface. Surrogate models are compared to represent the variation of material thermal properties with temperature. A new method is developed and shown to be both accurate and computationally efficient. While the numerically efficient simulation of high-speed vehicles is developed within the presented framework, the goal of real time simulation is hampered by the necessity of multiple nested convergence loops. An alternative all-in-one surrogate model method is developed based on singular-value decomposition and regression that is near real time. 
Finally, the aeroelastic stability of pressurized cylindrical shells is investigated in the context of a maneuvering axisymmetric high-speed vehicle. Moderate internal pressurization is numerically shown to decrease stability, an effect observed experimentally in the literature but not well reproduced analytically. Insights are drawn from time-simulation results and used to inform approaches for future vehicle model development.
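Among the linearization methods listed above, the complex-step derivative is simple enough to sketch in a few lines. A minimal Python illustration with a toy function (not from the thesis): for real-analytic f, f'(x) is approximately Im(f(x + ih))/h, with no subtractive cancellation, so h can be made extremely small.

```python
# Sketch: complex-step differentiation on a toy function.
import numpy as np

def complex_step(f, x, h=1e-30):
    """First derivative via the complex-step formula."""
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)               # toy function
exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0)) # analytic derivative at 1
print(complex_step(f, 1.0), exact)                # agree to machine precision
```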
An entangled-light-emitting diode.
Salter, C L; Stevenson, R M; Farrer, I; Nicoll, C A; Ritchie, D A; Shields, A J
2010-06-03
An optical quantum computer, powerful enough to solve problems so far intractable using conventional digital logic, requires a large number of entangled photons. At present, entangled-light sources are optically driven with lasers, which are impractical for quantum computing owing to the bulk and complexity of the optics required for large-scale applications. Parametric down-conversion is the most widely used source of entangled light, and has been used to implement non-destructive quantum logic gates. However, these sources are Poissonian and probabilistically emit zero or multiple entangled photon pairs in most cycles, fundamentally limiting the success probability of quantum computational operations. These complications can be overcome by using an electrically driven on-demand source of entangled photon pairs, but so far such a source has not been produced. Here we report the realization of an electrically driven source of entangled photon pairs, consisting of a quantum dot embedded in a semiconductor light-emitting diode (LED) structure. We show that the device emits entangled photon pairs under d.c. and a.c. injection, the latter achieving an entanglement fidelity of up to 0.82. Entangled light with such high fidelity is sufficient for application in quantum relays, in core components of quantum computing such as teleportation, and in entanglement swapping. The a.c. operation of the entangled-light-emitting diode (ELED) indicates its potential function as an on-demand source without the need for a complicated laser driving system; consequently, the ELED is at present the best source on which to base future scalable quantum information applications.
Cultured High-Fidelity Three-Dimensional Human Urogenital Tract Carcinomas and Process
NASA Technical Reports Server (NTRS)
Goodwin, Thomas J. (Inventor); Prewett, Tacey L. (Inventor); Spaulding, Glenn F. (Inventor); Wolf, David A. (Inventor)
1998-01-01
Artificial high-fidelity three-dimensional human urogenital tract carcinomas are propagated under in vitro-microgravity conditions from carcinoma cells. Artificial high-fidelity three-dimensional human urogenital tract carcinomas are also propagated from a coculture of normal urogenital tract cells inoculated with carcinoma cells. The microgravity culture conditions may be microgravity or simulated microgravity created in a horizontal rotating wall culture vessel.
Visualizing staggered fields and analyzing electromagnetic data with PerceptEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shasharina, Svetlana
This project resulted in VSimSP: software for simulating large photonic devices on high-performance computers. It includes: GUI for Photonics Simulations; High-Performance Meshing Algorithm; 2nd-Order Multimaterials Algorithm; Mode Solver for Waveguides; 2nd-Order Material Dispersion Algorithm; S-Parameters Calculation; High-Performance Workflow at NERSC; and Large Photonic Devices Simulation Setups. We believe we became the only company in the world that can simulate large photonic devices in 3D on modern supercomputers without the need to split them into subparts or resort to low-fidelity modeling. We have started commercial engagement with a manufacturing company.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J; Gao, H
2016-06-15
Purpose: Different from conventional computed tomography (CT), spectral CT based on energy-resolved photon-counting detectors is able to provide unprecedented material composition. However, an important missing piece for accurate spectral CT is to incorporate the detector response function (DRF), which is distorted by factors such as pulse pileup and charge sharing. In this work, we propose material reconstruction methods for spectral CT with the DRF. Methods: The polyenergetic X-ray forward model takes the DRF into account for accurate material reconstruction. Two image reconstruction methods are proposed: a direct method based on the nonlinear data fidelity from the DRF-based forward model, and a linear-data-fidelity-based method that relies on spectral rebinning so that the corresponding DRF matrix is invertible. The image reconstruction problem is then regularized with an isotropic TV term and solved by the alternating direction method of multipliers. Results: The simulation results suggest that the proposed methods provided more accurate material compositions than the standard method without the DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality compared with the proposed method with nonlinear data fidelity. Conclusion: We have proposed material reconstruction methods for spectral CT with the DRF, which provided more accurate material compositions than the standard methods without the DRF. Jiulong Liu and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
Modeling and Simulation of Explosively Driven Electromechanical Devices
NASA Astrophysics Data System (ADS)
Demmie, Paul N.
2002-07-01
Components that store electrical energy in ferroelectric materials and produce currents when their permittivity is explosively reduced are used in a variety of applications. The modeling and simulation of such devices is a challenging problem since one has to represent the coupled physics of detonation, shock propagation, and electromagnetic field generation. The high fidelity modeling and simulation of complicated electromechanical devices was not feasible prior to having the Accelerated Strategic Computing Initiative (ASCI) computers and the ASCI developed codes at Sandia National Laboratories (SNL). The EMMA computer code is used to model such devices and simulate their operation. In this paper, I discuss the capabilities of the EMMA code for the modeling and simulation of one such electromechanical device, a slim-loop ferroelectric (SFE) firing set.
Direct Synthesis of Microwave Waveforms for Quantum Computing
NASA Astrophysics Data System (ADS)
Raftery, James; Vrajitoarea, Andrei; Zhang, Gengyan; Leng, Zhaoqi; Srinivasan, Srikanth; Houck, Andrew
Current state-of-the-art quantum computing experiments in the microwave regime use control pulses generated by modulating microwave tones with baseband signals generated by an arbitrary waveform generator (AWG). Recent advances in digital-to-analog conversion technology have made it possible to directly synthesize arbitrary microwave pulses with sampling rates of 65 gigasamples per second (GSa/s) or higher. These new ultra-wide-bandwidth AWGs could dramatically simplify the classical control chain for quantum computing experiments, presenting potential cost savings and reducing the number of components that need to be carefully calibrated. Here we use a Keysight M8195A AWG to study the viability of such a simplified scheme, demonstrating randomized benchmarking of a superconducting qubit with high fidelity.
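A minimal sketch of what "direct synthesis" means in practice: computing the raw DAC sample stream for a Gaussian-envelope pulse at the carrier frequency itself, with no analog mixing. The qubit frequency, pulse length, and envelope below are invented, not the experiment's values.

```python
# Sketch: sample stream for a directly synthesized microwave control pulse
# at 65 GSa/s. All frequencies and durations are illustrative.
import numpy as np

fs = 65e9                                # AWG sample rate, 65 GSa/s
f_qubit = 5.1e9                          # hypothetical qubit frequency (Hz)
duration = 20e-9                         # 20 ns pulse
t = np.arange(int(duration * fs)) / fs

sigma = duration / 6                     # Gaussian envelope width
envelope = np.exp(-0.5 * ((t - duration / 2) / sigma) ** 2)
samples = envelope * np.cos(2 * np.pi * f_qubit * t)
# The carrier itself is produced by the DAC; no mixer is needed.
print(f"{samples.size} samples at {fs / 1e9:.0f} GSa/s")
```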
The effect of model fidelity on prediction of char burnout for single-particle coal combustion
McConnell, Josh; Sutherland, James C.
2016-07-09
Practical simulation of industrial-scale coal combustion relies on the ability to accurately capture the dynamics of coal subprocesses while also ensuring the computational cost remains reasonable. The majority of the residence time occurs post-devolatilization, so it is of great importance that a balance between the computational efficiency and accuracy of char combustion models is carefully considered. In this work, we consider the importance of model fidelity during char combustion by comparing combinations of simple and complex gas- and particle-phase chemistry models. Detailed kinetics based on the GRI 3.0 mechanism and infinitely fast chemistry are considered in the gas phase. The Char Conversion Kinetics model and nth-order Langmuir-Hinshelwood model are considered for char consumption. For devolatilization, the Chemical Percolation Devolatilization and Kobayashi-Sarofim models are employed. The relative importance of gasification versus oxidation reactions in air and oxyfuel environments is also examined for various coal types. Results are compared to previously published experimental data collected under laminar, single-particle conditions. Calculated particle temperature histories are strongly dependent on the choice of gas-phase and char chemistry models, but only weakly dependent on the chosen devolatilization model. Particle mass calculations were found to be very sensitive to the choice of devolatilization model, but only somewhat sensitive to the choice of gas chemistry and char chemistry models. High-fidelity models for devolatilization generally resulted in particle temperature and mass calculations that were closer to experimentally observed values.
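For a feel for the particle-phase side, here is a minimal char-burnout integrator using a generic Langmuir-Hinshelwood surface rate of the form r = k1*p/(1 + k2*p); the paper's nth-order formulation differs, and every constant below is invented for illustration.

```python
# Sketch: shrinking-sphere char burnout with an invented LH surface rate.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314                                     # J/(mol K)

def lh_rate(p_O2, T):
    """Surface rate, kg/(m^2 s): generic Langmuir-Hinshelwood form."""
    k1 = 1.0 * np.exp(-8.0e4 / (R * T))       # kg/(m^2 s Pa), invented
    k2 = 5.0e-4 * np.exp(2.0e4 / (R * T))     # 1/Pa, invented
    return k1 * p_O2 / (1.0 + k2 * p_O2)

def dmdt(t, m, T=1600.0, p_O2=2.1e4, rho=800.0):
    m_now = max(m[0], 0.0)                    # guard against overshoot
    d = (6.0 * m_now / (np.pi * rho)) ** (1.0 / 3.0)   # particle diameter
    return [-lh_rate(p_O2, T) * np.pi * d**2]          # rate * surface area

m0 = 800.0 * np.pi / 6.0 * (100e-6) ** 3      # 100-micron char particle, kg
sol = solve_ivp(dmdt, (0.0, 0.1), [m0], max_step=1e-3)
print("burnout fraction:", 1.0 - max(sol.y[0][-1], 0.0) / m0)
```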
Muhonen, J T; Laucht, A; Simmons, S; Dehollain, J P; Kalra, R; Hudson, F E; Freer, S; Itoh, K M; Jamieson, D N; McCallum, J C; Dzurak, A S; Morello, A
2015-04-22
Building upon the demonstration of coherent control and single-shot readout of the electron and nuclear spins of individual (31)P atoms in silicon, we present here a systematic experimental estimate of quantum gate fidelities using randomized benchmarking of 1-qubit gates in the Clifford group. We apply this analysis to the electron and the ionized (31)P nucleus of a single P donor in isotopically purified (28)Si. We find average gate fidelities of 99.95% for the electron and 99.99% for the nuclear spin. These values are above certain error correction thresholds and demonstrate the potential of donor-based quantum computing in silicon. By studying the influence of the shape and power of the control pulses, we find evidence that the present limitation to the gate fidelity is mostly related to the external hardware and not the intrinsic behaviour of the qubit.
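A minimal sketch of how such numbers are extracted from randomized-benchmarking data: fit the standard decay F(m) = A p^m + B over sequence length m and convert p to an average error per gate. The synthetic data and fit settings below are illustrative only.

```python
# Sketch: fitting a randomized-benchmarking decay and reporting the
# average Clifford fidelity. Data are synthetic, not from the experiment.
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p, B):
    return A * p**m + B

rng = np.random.default_rng(1)
m = np.arange(1, 200, 5)                        # sequence lengths
truth = decay(m, A=0.5, p=0.999, B=0.5)         # ~99.95%-fidelity qubit
data = truth + rng.normal(0, 0.002, m.size)     # measurement noise

(A, p, B), _ = curve_fit(decay, m, data, p0=[0.5, 0.99, 0.5])
r = (1 - p) / 2                                 # error per gate, d = 2
print(f"average Clifford fidelity ~ {1 - r:.5f}")
```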
Imperfect construction of microclusters
NASA Astrophysics Data System (ADS)
Schneider, E.; Zhou, K.; Gilbert, G.; Weinstein, Y. S.
2014-01-01
Microclusters are the basic building blocks used to construct cluster states capable of supporting fault-tolerant quantum computation. In this paper, we explore the consequences of errors on microcluster construction using two error models. To quantify the effect of the errors we calculate the fidelity of the constructed microclusters and the fidelity with which two such microclusters can be fused together. Such simulations are vital for gauging the capability of an experimental system to achieve fault tolerance.
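A minimal sketch of the kind of fidelity computation involved, F = <psi|rho|psi> against the ideal state, using an invented two-qubit example (a Bell state mixed with white noise) rather than the paper's error models:

```python
# Sketch: pure-target state fidelity for a toy noisy two-qubit state.
import numpy as np

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)       # ideal |Phi+> Bell state
ideal = np.outer(psi, psi.conj())
noise = np.eye(4) / 4                           # maximally mixed state
rho = 0.9 * ideal + 0.1 * noise                 # invented error model

fidelity = np.real(psi.conj() @ rho @ psi)
print(fidelity)                                 # 0.9*1 + 0.1*0.25 = 0.925
```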
Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing
NASA Astrophysics Data System (ADS)
Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey
Recent advances in cloud computing have made it possible to access large-scale computational resources completely on-demand in a rapid and efficient manner. When combined with high-fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that, compared to the traditional capital-intensive approach of investing in on-premises hardware, cloud computing is agile and cost-effective, yet scalable and delivers similar performance.
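The per-compound stability screen reduces to a formation-energy bookkeeping step, sketched below with invented reference energies and candidate totals (not Exabyte results):

```python
# Sketch: formation energy per atom relative to elemental references.
def formation_energy_per_atom(e_total, composition, e_ref):
    """composition: {element: count}; e_ref: energy/atom of each element."""
    n_atoms = sum(composition.values())
    e_form = e_total - sum(n * e_ref[el] for el, n in composition.items())
    return e_form / n_atoms

e_ref = {"Li": -1.90, "Mg": -1.51, "Al": -3.74}          # eV/atom, made up
candidates = {
    "LiMgAl2": (-11.95, {"Li": 1, "Mg": 1, "Al": 2}),    # (E_total, comp.)
    "Li2MgAl": (-8.90, {"Li": 2, "Mg": 1, "Al": 1}),
}
for name, (e_tot, comp) in candidates.items():
    ef = formation_energy_per_atom(e_tot, comp, e_ref)
    print(name, f"{ef:+.3f} eV/atom", "(stable)" if ef < 0 else "(unstable)")
```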
Pulse sequences for suppressing leakage in single-qubit gate operations
NASA Astrophysics Data System (ADS)
Ghosh, Joydip; Coppersmith, S. N.; Friesen, Mark
2017-06-01
Many realizations of solid-state qubits involve couplings to leakage states lying outside the computational subspace, posing a threat to high-fidelity quantum gate operations. Mitigating leakage errors is especially challenging when the coupling strength is unknown, e.g., when it is caused by noise. Here we show that simple pulse sequences can be used to strongly suppress leakage errors for a qubit embedded in a three-level system. As an example, we apply our scheme to the recently proposed charge quadrupole (CQ) qubit for quantum dots. These results provide a solution to a key challenge for fault-tolerant quantum computing with solid-state elements.
Particle tracking acceleration via signed distance fields in direct-accelerated geometry Monte Carlo
Shriwise, Patrick C.; Davis, Andrew; Jacobson, Lucas J.; ...
2017-08-26
Computer-aided design (CAD)-based Monte Carlo radiation transport is of value to the nuclear engineering community for its ability to conduct transport on high-fidelity models of nuclear systems, but it is more computationally expensive than native geometry representations. This work describes the adaptation of a rendering data structure, the signed distance field, as a geometric query tool for accelerating CAD-based transport in the direct-accelerated geometry Monte Carlo toolkit. Demonstrations of its effectiveness are shown for several problems. The beginnings of a predictive model for the data structure's utilization based on various problem parameters are also introduced.
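A minimal sketch of why a signed distance field accelerates tracking: |sdf(p)| bounds the distance to the nearest surface, so a particle can take a step of that length without a crossing check (sphere tracing). An analytic sphere stands in for the precomputed field described in the paper:

```python
# Sketch: sphere-tracing a particle with a signed distance query.
import numpy as np

def sdf_sphere(p, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - center) - radius

def track(origin, direction, max_steps=128, eps=1e-6):
    """March a particle along `direction` until it reaches the surface."""
    p = np.asarray(origin, float)
    d = np.asarray(direction, float) / np.linalg.norm(direction)
    for _ in range(max_steps):
        dist = sdf_sphere(p)
        if dist < eps:                # surface hit: hand off to the exact
            return p                  # geometry intersection routine here
        p = p + dist * d              # safe step, no surface crossing
    return None                       # left the region of interest

print(track(origin=[-3.0, 0.2, 0.0], direction=[1.0, 0.0, 0.0]))
```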
High-speed linear optics quantum computing using active feed-forward.
Prevedel, Robert; Walther, Philip; Tiefenbacher, Felix; Böhi, Pascal; Kaltenbaek, Rainer; Jennewein, Thomas; Zeilinger, Anton
2007-01-04
As information carriers in quantum computing, photonic qubits have the advantage of undergoing negligible decoherence. However, the absence of any significant photon-photon interaction is problematic for the realization of non-trivial two-qubit gates. One solution is to introduce an effective nonlinearity by measurements resulting in probabilistic gate operations. In one-way quantum computation, the random quantum measurement error can be overcome by applying a feed-forward technique, such that the future measurement basis depends on earlier measurement results. This technique is crucial for achieving deterministic quantum computation once a cluster state (the highly entangled multiparticle state on which one-way quantum computation is based) is prepared. Here we realize a concatenated scheme of measurement and active feed-forward in a one-way quantum computing experiment. We demonstrate that, for a perfect cluster state and no photon loss, our quantum computation scheme would operate with good fidelity and that our feed-forward components function with very high speed and low error for detected photons. With present technology, the individual computational step (in our case the individual feed-forward cycle) can be operated in less than 150 ns using electro-optical modulators. This is an important result for the future development of one-way quantum computers, whose large-scale implementation will depend on advances in the production and detection of the required highly entangled cluster states.
NASA Astrophysics Data System (ADS)
Modgil, Girish A.
Gas turbine engines for aerospace applications have evolved dramatically over the last 50 years through the constant pursuit of better specific fuel consumption, higher thrust-to-weight ratio, and lower noise and emissions, all while maintaining reliability and affordability. An important step in enabling these improvements is a forced-response aeromechanics analysis involving the structural dynamics and aerodynamics of the turbine. It is well documented that forced-response vibration is a very critical problem in aircraft engine design, causing High Cycle Fatigue (HCF). Pushing the envelope on engine design has led to increased forced-response problems and subsequently an increased risk of HCF failure. Forced-response analysis is used to assess the design feasibility of turbine blades for HCF using a material limit boundary set by the Goodman Diagram envelope, which combines the effects of steady and vibratory stresses. Forced-response analysis is computationally expensive, time consuming, and requires multi-domain experts to finalize a result. As a consequence, high-fidelity aeromechanics analysis is performed deterministically and is usually done at the end of the blade design process, when it is very costly to make significant changes to the geometry or aerodynamic design. To address uncertainties in the system (engine operating point, temperature distribution, mistuning, etc.) and variability in material properties, designers apply conservative safety factors in the traditional deterministic approach, which leads to bulky designs. Moreover, a deterministic approach does not provide a calculated risk of HCF failure. This thesis describes a process that begins with the optimal aerodynamic design of a turbomachinery blade developed using surrogate models of high-fidelity analyses. The resulting optimal blade undergoes probabilistic evaluation to generate aeromechanics results that provide a calculated likelihood of failure from HCF. An existing Rolls-Royce High Work Single Stage (HWSS) turbine blisk provides a baseline to demonstrate the process. The generalized polynomial chaos (gPC) toolbox developed here includes sampling methods and constructs polynomial approximations. The toolbox provides not only the means for uncertainty quantification of the final blade design, but also facilitates construction of the surrogate models used for the blade optimization. This work shows that gPC, with a small number of samples, achieves very fast rates of convergence and high accuracy in describing probability distributions without loss of detail in the tails. First, an optimization problem maximizes stage efficiency using turbine aerodynamic design rules as constraints; the function evaluations for this optimization are surrogate models from detailed 3D steady Computational Fluid Dynamics (CFD) analyses. The resulting optimal shape provides a starting point for the 3D high-fidelity aeromechanics (unsteady CFD and 3D Finite Element Analysis (FEA)) UQ study, assuming three uncertain input parameters. This investigation seeks to find the steady and vibratory stresses associated with the first torsion mode for the HWSS turbine blisk near the maximum operating speed of the engine. Using gPC to provide uncertainty estimates of the steady and vibratory stresses enables the creation of a Probabilistic Goodman Diagram, which, to the authors' best knowledge, is the first of its kind using high-fidelity aeromechanics for turbomachinery blades.
The Probabilistic Goodman Diagram enables turbine blade designers to make more informed design decisions, and it allows the aeromechanics expert to quantitatively assess the risk associated with HCF for any mode crossing based on high-fidelity simulations.
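A minimal sketch of the gPC machinery in one dimension: a Hermite chaos surrogate built by spectral projection with Gauss-Hermite quadrature, where a toy function stands in for the expensive aeromechanics model and the order and sample counts are arbitrary:

```python
# Sketch: 1-D generalized polynomial chaos surrogate via spectral
# projection, using probabilists' Hermite polynomials for a standard
# normal input. g() is a toy stand-in for an expensive solver.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

g = lambda xi: np.exp(0.3 * xi) * np.sin(xi + 1.0)   # toy "expensive" model

order, nquad = 8, 20
x, w = He.hermegauss(nquad)          # nodes/weights, weight exp(-x^2/2)
gx = g(x)                            # nquad "high-fidelity" evaluations

# c_k = E[g(xi) He_k(xi)] / k!  (orthogonality: E[He_k^2] = k!)
coef = np.array([(w * gx * He.hermeval(x, np.eye(order + 1)[k])).sum()
                 / (sqrt(2 * pi) * factorial(k)) for k in range(order + 1)])

# Cheap surrogate: evaluate the expansion instead of the model.
xi = np.random.default_rng(3).normal(size=100_000)
samples = He.hermeval(xi, coef)
print("sampled mean ~", samples.mean(), "; PCE mean = coef[0] =", coef[0])
```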
WEC3: Wave Energy Converter Code Comparison Project: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien
This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases: Phase I consists of code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency-domain modelling tools were not included in the WEC3 project.
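To make "mid-fidelity time-domain" concrete, here is a minimal single-degree-of-freedom heave model with constant hydrodynamic coefficients standing in for the frequency-dependent (or convolution-based) terms a real WEC code would carry; all numbers are invented:

```python
# Sketch: time-domain heave response of a toy wave energy converter.
import numpy as np
from scipy.integrate import solve_ivp

m, A33 = 2.0e5, 1.0e5        # body mass, added mass (kg)
B33, C33 = 5.0e4, 8.0e5      # radiation damping (N s/m), hydrostatic (N/m)
Fexc, omega = 2.0e5, 0.8     # excitation amplitude (N), frequency (rad/s)

def rhs(t, y):
    z, zdot = y
    zddot = (Fexc * np.cos(omega * t) - B33 * zdot - C33 * z) / (m + A33)
    return [zdot, zddot]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], max_step=0.05)
print("steady-state heave amplitude ~",
      np.abs(sol.y[0][-400:]).max(), "m")     # transient has decayed
```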
User's Guide for ENSAERO_FE Parallel Finite Element Solver
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.; Guruswamy, Guru P.
1999-01-01
A high-fidelity parallel static structural analysis capability is created and interfaced to the multidisciplinary analysis package ENSAERO-MPI of Ames Research Center. This new module replaces ENSAERO's lower-fidelity simple finite element and modal modules. Full aircraft structures may be more accurately modeled using the new finite element capability. Parallel computation is performed by breaking the full structure into multiple substructures. This approach is conceptually similar to ENSAERO's multizonal fluid analysis capability. The new substructure code is used to solve the structural finite element equations for each substructure in parallel. NASTRAN/COSMIC is utilized as a front end for this code. Its full library of elements can be used to create an accurate and realistic aircraft model. It is used to create the stiffness matrices for each substructure. The new parallel code then uses an iterative preconditioned conjugate gradient method to solve the global structural equations for the substructure boundary nodes.
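A minimal sketch of the PCG iteration at the heart of the substructure solve, with a Jacobi (diagonal) preconditioner standing in for whatever preconditioner the production code uses, on a toy SPD system:

```python
# Sketch: preconditioned conjugate gradient on a toy SPD system.
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv @ r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # conjugate search direction
        rz = rz_new
    return x

rng = np.random.default_rng(2)
Q = rng.normal(size=(50, 50))
A = Q @ Q.T + 50 * np.eye(50)         # SPD "boundary stiffness" stand-in
b = rng.normal(size=50)
M_inv = np.diag(1.0 / np.diag(A))     # Jacobi preconditioner
print(np.linalg.norm(A @ pcg(A, b, M_inv) - b))
```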
Effects of VR system fidelity on analyzing isosurface visualization of volume datasets.
Laha, Bireswar; Bowman, Doug A; Socha, John J
2014-04-01
Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.
Spadaro, Savino; Karbing, Dan Stieper; Fogagnolo, Alberto; Ragazzi, Riccardo; Mojoli, Francesco; Astolfi, Luca; Gioia, Antonio; Marangoni, Elisabetta; Rees, Stephen Edward; Volta, Carlo Alberto
2017-12-01
Advances in knowledge regarding mechanical ventilation (MV), in particular lung-protective ventilation strategies, have been shown to reduce mortality. However, the translation of these advances into better therapeutic performance in real-life clinical settings continues to lag. High-fidelity simulation with a mannequin allows students to interact in lifelike situations; this may be a valuable addition to traditional didactic teaching. The purpose of this study is to compare computer-based and mannequin-based approaches for training residents on MV. This prospective randomized single-blind trial involved 50 residents. All participants attended the same didactic lecture on respiratory pathophysiology and were subsequently randomized into two groups: the mannequin group (n = 25) and the computer screen-based simulator group (n = 25). One week later, each underwent a training assessment using five different scenarios of acute respiratory failure of different etiologies. Later, both groups underwent further testing of patient management, using in situ high-fidelity simulation of a patient with acute respiratory distress syndrome. Baseline knowledge was not significantly different between the two groups (P = 0.72). Regarding the training assessment, no significant differences were detected between the groups. In the final assessment, only the mannequin group's scores improved significantly between the training and final sessions, in terms of both the global rating score [3.0 (2.5-4.0) vs. 2.0 (2.0-3.0), P = 0.005] and the percentage of key score (82% vs. 71%, P = 0.001). Mannequin-based simulation has the potential to improve skills in managing MV.
NASA Technical Reports Server (NTRS)
Arnold, Steven M; Bednarcyk, Brett; Aboudi, Jacob
2004-01-01
The High-Fidelity Generalized Method of Cells (HFGMC) micromechanics model has recently been reformulated by Bansal and Pindera (in the context of elastic phases with perfect bonding) to maximize its computational efficiency. This reformulated version of HFGMC has now been extended to include both inelastic phases and imperfect fiber-matrix bonding. The present paper presents an overview of the HFGMC theory in both its original and reformulated forms and a comparison of the results of the two implementations. The objective is to establish the correlation between the two HFGMC formulations and document the improved efficiency offered by the reformulation. The results compare the macro and micro scale predictions of the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) versions of both formulations into the inelastic regime, and, in the case of the discontinuous reinforcement version, with both perfect and weak interfacial bonding. The results demonstrate that identical predictions are obtained using either the original or reformulated implementations of HFGMC aside from small numerical differences in the inelastic regime due to the different implementation schemes used for the inelastic terms present in the two formulations. Finally, a direct comparison of execution times is presented for the original formulation and reformulation code implementations. It is shown that as the discretization employed in representing the composite repeating unit cell becomes increasingly refined (requiring a larger number of sub-volumes), the reformulated implementation becomes significantly (approximately an order of magnitude at best) more computationally efficient in both the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) cases.
NASA Astrophysics Data System (ADS)
van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis
2014-11-01
In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second order accurate, un-split, conservative, three-dimensional VOF scheme providing second order density fluxes and capable of robust and accurate high density ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.
Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro; ...
2017-11-06
The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. In order to resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. Here, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.
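The Schwarz alternating coupling is easy to demonstrate in one dimension. Below, -u'' = 1 on (0, 1) is split into two overlapping subdomains that exchange Dirichlet values until the overlap agrees; the geometry and mesh are invented, but the coupling pattern matches the description above:

```python
# Sketch: Schwarz alternating method for -u'' = 1, u(0) = u(1) = 0.
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.zeros(n)
iL, iR = 60, 40          # subdomain 1 = [0, x[iL]], subdomain 2 = [x[iR], 1]

def solve_dirichlet(a, b, ua, ub):
    """Solve -u'' = 1 on nodes a..b with fixed end values (tridiagonal)."""
    m = b - a - 1                      # number of interior unknowns
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = np.ones(m)
    rhs[0] += ua / h**2                # Dirichlet data enters the RHS
    rhs[-1] += ub / h**2
    return np.linalg.solve(A, rhs)

for _ in range(20):                    # alternate until the overlap agrees
    u[1:iL] = solve_dirichlet(0, iL, 0.0, u[iL])
    u[iR + 1:n - 1] = solve_dirichlet(iR, n - 1, u[iR], 0.0)

print(np.max(np.abs(u - 0.5 * x * (1 - x))))   # exact solution check
```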
NASA Astrophysics Data System (ADS)
Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael
2015-06-01
Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first-principles models and solving large systems of equations on highly resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing, and shock interactions are captured across the spectrum of relevant time and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in the dispersion, mixing, ignition, and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establishing a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.
NASA Astrophysics Data System (ADS)
Klewicki, J. C.; Chini, G. P.; Gibson, J. F.
2017-03-01
Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted.
A high-quality high-fidelity visualization of the September 11 attack on the World Trade Center.
Rosen, Paul; Popescu, Voicu; Hoffmann, Christoph; Irfanoglu, Ayhan
2008-01-01
In this application paper, we describe the efforts of a multidisciplinary team towards producing a visualization of the September 11 Attack on the North Tower of New York's World Trade Center. The visualization was designed to meet two requirements. First, the visualization had to depict the impact with high fidelity, by closely following the laws of physics. Second, the visualization had to be eloquent to a nonexpert user. This was achieved by first designing and computing a finite-element analysis (FEA) simulation of the impact between the aircraft and the top 20 stories of the building, and then by visualizing the FEA results with a state-of-the-art commercial animation system. The visualization was enabled by an automatic translator that converts the simulation data into an animation system 3D scene. We built upon a previously developed translator. The translator was substantially extended to enable and control visualization of fire and of disintegrating elements, to better scale with the number of nodes and number of states, to handle beam elements with complex profiles, and to handle smoothed particle hydrodynamics liquid representation. The resulting translator is a powerful automatic and scalable tool for high-quality visualization of FEA results.
Enabling parallel simulation of large-scale HPC network systems
Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...
2016-04-07
Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
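For readers unfamiliar with flit-level discrete-event simulation, the toy sequential event loop below illustrates only the event-queue mechanics; it makes no attempt at the optimistic parallel scheduling that ROSS provides, and all names and parameters are illustrative.

```python
import heapq

def simulate_flits(num_flits, hops, link_latency=10, serialization=2):
    """Toy sequential discrete-event simulation of flits traversing a
    linear chain of routers (illustrative only; frameworks such as
    CODES/ROSS use optimistic parallel event scheduling and model
    contention, which this sketch ignores)."""
    events = []  # priority queue of (time, flit_id, hop_index)
    for f in range(num_flits):
        heapq.heappush(events, (f * serialization, f, 0))
    finish_times = {}
    while events:
        t, f, hop = heapq.heappop(events)
        if hop == hops:
            finish_times[f] = t   # flit delivered
        else:
            # schedule the flit's arrival at the next router
            heapq.heappush(events, (t + link_latency, f, hop + 1))
    return finish_times

print(simulate_flits(num_flits=4, hops=3))
```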
ERIC Educational Resources Information Center
Raedeke, Thomas D.; Dlugonski, Deirdre
2017-01-01
Purpose: This study was designed to compare a low versus high theoretical fidelity pedometer intervention applying social-cognitive theory on step counts and self-efficacy. Method: Fifty-six public university employees participated in a 10-week randomized controlled trial with 2 conditions that varied in theoretical fidelity. Participants in the…
A programmable two-qubit quantum processor in silicon
NASA Astrophysics Data System (ADS)
Watson, T. F.; Philips, S. G. J.; Kawakami, E.; Ward, D. R.; Scarlino, P.; Veldhorst, M.; Savage, D. E.; Lagally, M. G.; Friesen, Mark; Coppersmith, S. N.; Eriksson, M. A.; Vandersypen, L. M. K.
2018-03-01
Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations. However, as seen with small-scale demonstrations of quantum computers using other types of qubit, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch–Jozsa algorithm and the Grover search algorithm—canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85–89 per cent and concurrences of 73–82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.
A programmable two-qubit quantum processor in silicon.
Watson, T F; Philips, S G J; Kawakami, E; Ward, D R; Scarlino, P; Veldhorst, M; Savage, D E; Lagally, M G; Friesen, Mark; Coppersmith, S N; Eriksson, M A; Vandersypen, L M K
2018-03-29
Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations. However, as seen with small-scale demonstrations of quantum computers using other types of qubit, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch-Jozsa algorithm and the Grover search algorithm, canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85-89 per cent and concurrences of 73-82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Arakere, G.; Hariharan, A.; Pandurangan, B.
2012-06-01
The introduction of newer joining technologies like the so-called friction-stir welding (FSW) into automotive engineering requires knowledge of the joint-material microstructure and properties. Since the development of vehicles (including military vehicles capable of surviving blast and ballistic impacts) nowadays involves extensive use of computational engineering analyses (CEA), robust high-fidelity material models are needed for the FSW joints. A two-level material-homogenization procedure is proposed and utilized in this study to help manage computational cost and computer storage requirements for such CEAs. The method utilizes experimental (microstructure, microhardness, tensile testing, and x-ray diffraction) data to construct: (a) the material model for each weld zone and (b) the material model for the entire weld. The procedure is validated by comparing its predictions with the predictions of more detailed but more costly computational analyses.
Guidelines for developing distributed virtual environment applications
NASA Astrophysics Data System (ADS)
Stytz, Martin R.; Banks, Sheila B.
1998-08-01
We have conducted a variety of projects that served to investigate the limits of virtual environments and distributed virtual environment (DVE) technology for the military and medical professions. The projects include an application that allows the user to interactively explore a high-fidelity, dynamic scale model of the Solar System and a high-fidelity, photorealistic, rapidly reconfigurable aircraft simulator. Additional projects are a project for observing, analyzing, and understanding the activity in a military distributed virtual environment, a project to develop a distributed threat simulator for training Air Force pilots, a virtual spaceplane to determine user interface requirements for a planned military spaceplane system, and an automated wingman for use in supplementing or replacing human-controlled systems in a DVE. The last two projects are a virtual environment user interface framework and a project for training hospital emergency department personnel. In the process of designing and assembling the DVE applications in support of these projects, we have developed rules of thumb and insights into assembling DVE applications and the environment itself. In this paper, we open with a brief review of the applications that were the source for our insights and then present the lessons learned as a result of these projects. The lessons we have learned fall primarily into five areas: requirements development, software architecture, human-computer interaction, graphical database modeling, and construction of computer-generated forces.
Towards high fidelity numerical wave tanks for modelling coastal and ocean engineering processes
NASA Astrophysics Data System (ADS)
Cozzuto, G.; Dimakopoulos, A.; de Lataillade, T.; Kees, C. E.
2017-12-01
With the increasing availability of computational resources, the engineering and research community is gradually moving towards using high fidelity Computational Fluid Dynamics (CFD) models to perform numerical tests for improving the understanding of physical processes pertaining to wave propagation and interaction with the coastal environment and morphology, either physical or man-made. It is therefore important to be able to reproduce in these models the conditions that drive these processes. So far, in CFD models the norm is to use regular (linear or nonlinear) waves for performing numerical tests; however, only random waves exist in nature. In this work, we will initially present the verification and validation of numerical wave tanks based on Proteus, an open-source computational toolkit based on finite element analysis, with respect to the generation, propagation and absorption of random sea states comprising long non-repeating wave sequences. Statistical and spectral processing of results demonstrates that the methodologies employed (including relaxation zone methods and moving wave paddles) are capable of producing results of similar quality to the wave tanks used in laboratories (Figure 1). Subsequently, case studies of modelling complex processes relevant to coastal defences and floating structures, such as sliding and overturning of composite breakwaters and the heave and roll response of floating caissons, are presented. Figure 1: Wave spectra in the numerical wave tank (coloured symbols), compared against the JONSWAP distribution.
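The random, non-repeating sea states referred to above are commonly synthesized by the random-phase method from a target spectrum. A minimal sketch follows, assuming the standard JONSWAP form rescaled to a target significant wave height; all parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def jonswap(f, hs=2.0, tp=8.0, gamma=3.3):
    """JONSWAP variance density spectrum S(f) [m^2/Hz], rescaled so the
    spectrum integrates to hs^2/16 (zeroth moment of the sea state)."""
    fp = 1.0 / tp                               # peak frequency
    sigma = np.where(f <= fp, 0.07, 0.09)       # peak-width parameter
    r = np.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
    s = f ** -5.0 * np.exp(-1.25 * (fp / f) ** 4) * gamma ** r
    m0 = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(f))   # trapezoid rule
    return s * (hs**2 / 16.0) / m0

# Random-phase synthesis of a non-repeating free-surface time series
rng = np.random.default_rng(0)
f = np.linspace(0.02, 1.0, 400)
df = f[1] - f[0]
amp = np.sqrt(2.0 * jonswap(f) * df)            # component amplitudes
phi = rng.uniform(0.0, 2.0 * np.pi, f.size)     # random phases
t = np.linspace(0.0, 600.0, 2048)
eta = (amp[:, None] * np.cos(2 * np.pi * f[:, None] * t + phi[:, None])).sum(axis=0)
```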
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor
2018-02-01
Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, the MCMC sampling entails a large number of model calls, and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time-consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling of a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using the Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, and hence the MCMC sampling can be performed efficiently. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and is used to run representative simulations to generate a training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to the sensitivity analysis, insensitive parameters are screened out of the Bayesian inversion of the MODFLOW model, further saving computing effort. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
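The surrogate-based MCMC idea reduces to running an ordinary Metropolis sampler whose likelihood calls the cheap emulator instead of MODFLOW. Here is a minimal sketch under that assumption, with a Gaussian observation-error model; `surrogate` stands in for the paper's BMARS model and all names are hypothetical.

```python
import numpy as np

def metropolis_on_surrogate(surrogate, obs, prior_logpdf, x0,
                            n_steps=50000, step=0.1, sigma=1.0, seed=1):
    """Random-walk Metropolis sampling of a posterior whose likelihood
    calls a cheap surrogate (x -> predicted heads) in place of the
    expensive groundwater model. Gaussian observation errors assumed."""
    rng = np.random.default_rng(seed)

    def log_post(x):
        resid = obs - surrogate(x)
        return prior_logpdf(x) - 0.5 * np.sum(resid**2) / sigma**2

    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_steps):
        xp = x + step * rng.standard_normal(x.size)   # propose a move
        lpp = log_post(xp)
        if np.log(rng.uniform()) < lpp - lp:          # accept/reject
            x, lp = xp, lpp
        samples.append(x.copy())
    return np.array(samples)
```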
Advanced sensors and instrumentation
NASA Technical Reports Server (NTRS)
Calloway, Raymond S.; Zimmerman, Joe E.; Douglas, Kevin R.; Morrison, Rusty
1990-01-01
NASA is currently investigating the readiness of Advanced Sensors and Instrumentation to meet the requirements of new initiatives in space. The following technical objectives and technologies are briefly discussed: smart and nonintrusive sensors; onboard signal and data processing; high capacity and rate adaptive data acquisition systems; onboard computing; high capacity and rate onboard storage; efficient onboard data distribution; high capacity telemetry; ground and flight test support instrumentation; power distribution; and workstations, video/lighting. The requirements for high fidelity data (accuracy, frequency, quantity, spatial resolution) in hostile environments will continue to push the technology developers and users to extend the performance of their products and to develop new generations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glascoe, Lee; Gowardhan, Akshay; Lennox, Kristin
In the interest of promoting the international exchange of technical expertise, the US Department of Energy’s Office of Emergency Operations (NA-40) and the French Commissariat à l'Energie Atomique et aux énergies alternatives (CEA) requested that the National Atmospheric Release Advisory Center (NARAC) of Lawrence Livermore National Laboratory (LLNL) in Livermore, California host a joint tabletop exercise with experts in emergency management and atmospheric transport modeling. In this tabletop exercise, LLNL and CEA compared each other’s flow and dispersion models. The goal of the comparison was to facilitate the exchange of knowledge, capabilities, and practices, and to demonstrate the utility of modeling dispersal at different levels of computational fidelity. Two modeling approaches were examined: a regional-scale modeling approach, appropriate for simple terrain and/or very large releases, and an urban-scale modeling approach, appropriate for small releases in a city environment. This report is a summary of LLNL and CEA modeling efforts from this exercise. Two different types of LLNL and CEA models were employed in the analysis: urban-scale models (Aeolus CFD at LLNL/NARAC and Parallel-Micro-SWIFT-SPRAY, PMSS, at CEA) for analysis of a 5,000 Ci radiological release and Lagrangian particle dispersion models (LODI at LLNL/NARAC and PSPRAY at CEA) for analysis of a much larger (500,000 Ci) regional radiological release. Two densely populated urban locations were chosen: Chicago, with its high-rise skyline and gridded street network, and Paris, with its more consistent, lower building height and complex unaligned street network. Each location was considered under early-summer daytime and nighttime conditions. Different levels of fidelity were chosen for each scale: (1) lower-fidelity mass-consistent diagnostic models, intermediate-fidelity Navier-Stokes RANS models, and higher-fidelity Navier-Stokes LES for urban-scale analysis, and (2) lower-fidelity single-profile meteorology versus a higher-fidelity three-dimensional gridded weather forecast for regional-scale analysis. Tradeoffs between computation time and the fidelity of the results are discussed for both scales. LES, for example, requires nearly 100 times more processor time than the mass-consistent diagnostic model or the RANS model, and seems better able to capture flow entrainment behind tall buildings. As anticipated, results obtained by LLNL and CEA at regional scale around Chicago and Paris look very similar in terms of both atmospheric dispersion of the radiological release and total effective dose. Both LLNL and CEA used the same meteorological data, Lagrangian particle dispersion models, and the same dose coefficients. LLNL and CEA urban-scale modeling results show consistent phenomenological behavior and predict similar impacted areas even though the detailed 3D flow patterns differ, particularly for the Chicago cases, where differences in vertical entrainment behind tall buildings are particularly notable. Although RANS and LES (LLNL) models incorporate more detailed physics than do mass-consistent diagnostic flow models (CEA), it is not possible to reach definite conclusions about the prediction fidelity of the various models, as experimental measurements were not available for comparison. Stronger conclusions about the relative performance of the models involved and evaluation of the tradeoffs involved in model simplification could be made with a systematic benchmarking of urban-scale modeling. This could be the purpose of a future US/French collaborative exercise.
Demonstration of a small programmable quantum computer with atomic qubits.
Debnath, S; Linke, N M; Figgatt, C; Landsman, K A; Wright, K; Monroe, C
2016-08-04
Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.
Demonstration of a small programmable quantum computer with atomic qubits
NASA Astrophysics Data System (ADS)
Debnath, S.; Linke, N. M.; Figgatt, C.; Landsman, K. A.; Wright, K.; Monroe, C.
2016-08-01
Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benitz, M. A.; Schmidt, D. P.; Lackner, M. A.
Hydrodynamic loads on the platforms of floating offshore wind turbines are often predicted with computer-aided engineering tools that employ Morison's equation and/or potential-flow theory. This work compares results from one such tool, FAST, NREL's wind turbine computer-aided engineering tool, and the computational fluid dynamics package, OpenFOAM, for the OC4-DeepCwind semi-submersible analyzed in the International Energy Agency Wind Task 30 project. Load predictions from HydroDyn, the offshore hydrodynamics module of FAST, are compared with high-fidelity results from OpenFOAM. HydroDyn uses a combination of Morison's equations and potential flow to predict the hydrodynamic forces on the structure. The implications of the assumptions in HydroDyn are evaluated based on this code-to-code comparison.
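For reference, the Morison formulation that HydroDyn-style tools combine with potential flow is, in its standard per-unit-length form for a cylinder of diameter D (notation assumed here, not taken from the paper):

```latex
f(t) \;=\; \underbrace{\rho\, C_m \frac{\pi D^2}{4}\, \dot{u}(t)}_{\text{inertia}}
\;+\; \underbrace{\tfrac{1}{2}\,\rho\, C_d\, D\, u(t)\,\lvert u(t)\rvert}_{\text{drag}},
```

where u is the fluid velocity normal to the member and C_m, C_d are empirical inertia and drag coefficients; potential-flow theory supplies the diffraction and radiation loads that Morison's equation misses for large-volume members.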
High-fidelity in vivo replication of DNA base shape mimics without Watson–Crick hydrogen bonds
Delaney, James C.; Henderson, Paul T.; Helquist, Sandra A.; Morales, Juan C.; Essigmann, John M.; Kool, Eric T.
2003-01-01
We report studies testing the importance of Watson–Crick hydrogen bonding, base-pair geometry, and steric effects during DNA replication in living bacterial cells. Nonpolar DNA base shape mimics of thymine and adenine (abbreviated F and Q, respectively) were introduced into Escherichia coli by insertion into a phage genome followed by transfection of the vector into bacteria. Genetic assays showed that these two base mimics were bypassed with moderate to high efficiency in the cells and with very high efficiency under damage-response (SOS induction) conditions. Under both sets of conditions, the T-shape mimic (F) encoded genetic information in the bacteria as if it were thymine, directing incorporation of adenine opposite it with high fidelity. Similarly, the A mimic (Q) directed incorporation of thymine opposite itself with high fidelity. The data establish that Watson–Crick hydrogen bonding is not necessary for high-fidelity replication of a base pair in vivo. The results suggest that recognition of DNA base shape alone serves as the most powerful determinant of fidelity during transfer of genetic information in a living organism. PMID:12676985
Validation of computer simulation training for esophagogastroduodenoscopy: Pilot study.
Sedlack, Robert E
2007-08-01
Little is known regarding the value of esophagogastroduodenoscopy (EGD) simulators in education. The purpose of the present paper was to validate the use of computer simulation in novice EGD training. In phase 1, expert endoscopists evaluated various aspects of simulation fidelity as compared to live endoscopy. Additionally, computer-recorded performance metrics were assessed by comparing the recorded scores from users of three different experience levels. In phase 2, the transfer of simulation-acquired skills to the clinical setting was assessed in a two-group, randomized pilot study. The setting was a large gastroenterology (GI) Fellowship training program; in phase 1, 21 subjects (seven each of expert, intermediate, and novice endoscopists) made up the three experience groups. In phase 2, eight novice GI fellows were involved in the two-group, randomized portion of the study examining the transfer of simulation skills to the clinical setting. During the initial validation phase, each of the 21 subjects completed two standardized EGD scenarios on a computer simulator and their performance scores were recorded for seven parameters. Following this, staff participants completed a questionnaire evaluating various aspects of the simulator's fidelity. Finally, four novice GI fellows were randomly assigned to receive 6 h of simulator-augmented training (SAT group) in EGD prior to beginning 1 month of patient-based EGD training. The remaining fellows experienced 1 month of patient-based training alone (PBT group). Results for the seven measured performance parameters were compared among the three experience groups using a Wilcoxon rank-sum test. The staff's simulator fidelity survey used a 7-point Likert scale (1, very unrealistic; 4, neutral; 7, very realistic) for each of the parameters examined. During the second phase of this study, supervising staff rated both SAT and PBT fellows' patient-based performance daily. Scoring in each skill was completed using a 7-point Likert scale (1, strongly disagree; 4, neutral; 7, strongly agree). Median scores were compared between groups using the Wilcoxon rank-sum test. Staff evaluations of fidelity found that only two of the parameters examined (anatomy and scope maneuverability) had a significant degree of realism. The remaining areas were felt to be limited in their fidelity. Of the computer-recorded performance scores, only the novice group could be reliably distinguished from the other two experience groups. In the clinical application phase, the median Patient Discomfort ratings were superior in the PBT group (6; interquartile range [IQR], 5-6) as compared to the SAT group (5; IQR, 4-6; P = 0.015). PBT fellows' ratings were also superior in Sedation, Patient Discomfort, Independence and Competence during various phases of the evaluation. At no point were SAT fellows rated higher than the PBT group in any of the parameters examined. This EGD simulator has limitations in its degree of fidelity and can differentiate only novice endoscopists from other levels of experience. Finally, skills learned during EGD simulation training do not appear to translate well into patient-based endoscopy skills. These findings argue against a key element of validity for the use of this computer simulator in novice EGD training.
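The group comparisons above are plain Wilcoxon rank-sum tests on ordinal Likert ratings. A minimal sketch with scipy follows; the rating values are made up for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical 7-point Likert ratings for the two training groups
# (illustrative numbers only -- not the study's data).
pbt = np.array([6, 6, 5, 6, 7, 5, 6, 6])   # patient-based training
sat = np.array([5, 4, 5, 6, 4, 5, 5, 4])   # simulator-augmented training

stat, p = ranksums(pbt, sat)               # two-sided rank-sum test
print(f"z = {stat:.2f}, p = {p:.3f}")
```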
2008-03-01
multiplicative corrections as well as space-mapping transformations for models defined over a lower-dimensional space. A corrected surrogate model for the... correction functions used in [72]. If the low-fidelity model g(x̃) is defined over a lower-dimensional space, then a space-mapping transformation is required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space...
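Reconstructing the fragment's intent: a first-order multiplicative correction scales the low-fidelity model so the corrected surrogate matches the high-fidelity function and gradient at the current iterate, while space mapping composes the low-fidelity model with a parameter mapping when the two models live in different spaces. A sketch in assumed notation (f high fidelity, g low fidelity, x_k the current iterate), not taken verbatim from the cited report:

```latex
\hat{f}_k(x) = \beta_k(x)\, g(x), \qquad
\beta_k(x_k) = \frac{f(x_k)}{g(x_k)}, \qquad
\nabla \beta_k(x_k) = \frac{\nabla f(x_k) - \beta_k(x_k)\,\nabla g(x_k)}{g(x_k)},
```

so that the corrected surrogate matches f to first order at x_k. When g(x̃) is defined over a lower-dimensional space, one instead uses a space-mapped surrogate f̂(x) = g(P(x)), with the mapping P calibrated so that g(P(x)) approximates f(x) over sampled designs.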
Assessment of synthetic image fidelity
NASA Astrophysics Data System (ADS)
Mitchell, Kevin D.; Moorhead, Ian R.; Gilmore, Marilyn A.; Watson, Graham H.; Thomson, Mitch; Yates, T.; Troscianko, Tomasz; Tolhurst, David J.
2000-07-01
Computer generated imagery is increasingly used for a wide variety of purposes ranging from computer games to flight simulators to camouflage and sensor assessment. The fidelity required for this imagery depends on the anticipated use; for example, when used for camouflage design it must be physically correct both spectrally and spatially. The rendering techniques used will also depend upon the waveband being simulated, the spatial resolution of the sensor, and the required frame rate. Rendering of natural outdoor scenes is particularly demanding because of the statistical variation in materials and illumination, atmospheric effects, and the complex geometric structures of objects such as trees. The accuracy of the simulated imagery has tended to be assessed subjectively in the past. First- and second-order statistics do not capture many of the essential characteristics of natural scenes. Direct pixel comparison would impose an unachievable demand on the synthetic imagery. For many applications, such as camouflage design, it is important that any metrics used will work in both visible and infrared wavebands. We are investigating a variety of different methods of comparing real and synthetic imagery and comparing synthetic imagery rendered to different levels of fidelity. These techniques include neural networks and independent component analysis (ICA), higher-order statistics, and models of human contrast perception. This paper presents an overview of the analyses we have carried out and some initial results, along with some preliminary conclusions regarding the fidelity of synthetic imagery.
High Fidelity Simulation of Atomization in Diesel Engine Sprays
2015-09-01
ARL-RP-0555, SEP 2015, US Army Research Laboratory. High Fidelity Simulation of Atomization in Diesel Engine Sprays, by L Bravo, CB Ivey, D...
Implementation Fidelity in Community-Based Interventions
Breitenstein, Susan M.; Gross, Deborah; Garvey, Christine; Hill, Carri; Fogg, Louis; Resnick, Barbara
2012-01-01
Implementation fidelity is the degree to which an intervention is delivered as intended and is critical to successful translation of evidence-based interventions into practice. Diminished fidelity may be why interventions that work well in highly controlled trials may fail to yield the same outcomes when applied in real life contexts. The purpose of this paper is to define implementation fidelity and describe its importance for the larger science of implementation, discuss data collection methods and current efforts in measuring implementation fidelity in community-based prevention interventions, and present future research directions for measuring implementation fidelity that will advance implementation science. PMID:20198637
NPSS Multidisciplinary Integration and Analysis
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Rasche, Joseph; Simons, Todd A.; Hoyniak, Daniel
2006-01-01
The objective of this task was to enhance the capability of the Numerical Propulsion System Simulation (NPSS) by expanding its reach into the high-fidelity multidisciplinary analysis area. This task investigated numerical techniques for converting compressor blades from cold static to hot running geometry. Blade deformations were calculated iteratively, coupling high-fidelity flow simulations with high-fidelity structural analyses of the compressor blade. The flow simulations were performed with the Advanced Ducted Propfan Analysis (ADPAC) code, while structural analyses were performed with the ANSYS code. High-fidelity analyses were used to evaluate the effects on performance of variations in tip clearance, uncertainty in manufacturing tolerances, and variable inlet guide vane scheduling, as well as the effects of rotational speed on the hot running geometry of the compressor blades.
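The cold-to-hot conversion described here is naturally expressed as an under-relaxed fixed-point iteration between the flow and structural solvers. A minimal sketch follows; `run_cfd` and `run_fea` are hypothetical wrappers (standing in for codes such as ADPAC and ANSYS), and the convergence and relaxation settings are illustrative.

```python
import numpy as np

def cold_to_hot(blade_cold, run_cfd, run_fea, relax=0.5,
                tol=1e-6, max_iters=30):
    """Fixed-point iteration from cold (static) to hot (running) blade
    geometry. run_cfd maps a geometry to aerodynamic loads; run_fea maps
    a geometry and loads to a structural deflection field (both arrays)."""
    blade_cold = np.asarray(blade_cold, dtype=float)
    geom = blade_cold.copy()
    for it in range(max_iters):
        loads = run_cfd(geom)            # aero loads on the current shape
        deform = run_fea(geom, loads)    # structural deflection under loads
        new_geom = blade_cold + deform   # deformed (hot) geometry
        # Under-relax to stabilize the coupled iteration.
        geom_next = (1.0 - relax) * geom + relax * new_geom
        if np.linalg.norm(geom_next - geom) < tol * np.linalg.norm(geom):
            return geom_next, it
        geom = geom_next
    return geom, max_iters
```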
Generation and applications of an ultrahigh-fidelity four-photon Greenberger-Horne-Zeilinger state.
Zhang, Chao; Huang, Yun-Feng; Zhang, Cheng-Jie; Wang, Jian; Liu, Bi-Heng; Li, Chuan-Feng; Guo, Guang-Can
2016-11-28
High-quality entangled photon pairs generated via spontaneous parametric down-conversion have made great contributions to modern quantum information science and the fundamental tests of quantum mechanics. However, the quality of the entangled states decreases sharply when moving from biphoton to multiphoton experiments, mainly due to the lack of interactions between photons. Here, for the first time, we generate a four-photon Greenberger-Horne-Zeilinger state with a fidelity of 98%, which is even comparable to the best fidelity of biphoton entangled states. Thus, it enables us to demonstrate ultrahigh-fidelity entanglement swapping, the key ingredient in various quantum information tasks. Our results push the fidelity of multiphoton entanglement generation to a new level and would be useful in some demanding tasks; e.g., we successfully demonstrate the genuine multipartite nonlocality of the observed state in the nonsignaling scenario by violating a novel Hardy-like inequality, which requires very high state fidelity.
NASA Astrophysics Data System (ADS)
Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan
2016-04-01
A variable-fidelity robust optimization method for pulsed-laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The robust optimization method optimizes the performance while at the same time maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm based on a steepest-descent-like strategy with proper orthogonal decomposition (POD) is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model. When the solutions fall into the trust region, the analytical model is used to reduce the computational cost. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It is shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.
Perdikaris, Paris; Karniadakis, George Em
2016-05-01
We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space, and the efficient pursuit to identify global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration versus exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. © 2016 The Author(s).
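The linear autoregressive scheme underlying this kind of multi-fidelity fusion can be sketched in a few lines of numpy: fit a Gaussian process to the low-fidelity data, estimate a scale factor rho, and fit a second GP to the high-fidelity residuals. This toy omits the recursive posterior propagation and hyperparameter learning of the actual method; all functions and values below are illustrative.

```python
import numpy as np

def rbf(X1, X2, ls=0.2, var=1.0):
    """Squared-exponential (RBF) kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_fit_predict(X, y, Xs, noise=1e-6):
    """Plain GP regression; returns posterior mean and variance at Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf(Xs, Xs).diagonal() - (v**2).sum(0)
    return mu, var

# Toy linear autoregressive (AR1) two-fidelity scheme:
#   y_H(x) ~ rho * y_L(x) + delta(x), with delta modeled by a second GP.
f_lo = lambda x: np.sin(8 * x)                   # cheap model
f_hi = lambda x: 1.8 * np.sin(8 * x) + 0.3 * x   # expensive model
X_lo = np.linspace(0, 1, 30)[:, None]; y_lo = f_lo(X_lo[:, 0])
X_hi = np.linspace(0, 1, 6)[:, None];  y_hi = f_hi(X_hi[:, 0])

Xs = np.linspace(0, 1, 200)[:, None]
mu_lo_at_hi, _ = gp_fit_predict(X_lo, y_lo, X_hi)
rho = (mu_lo_at_hi @ y_hi) / (mu_lo_at_hi @ mu_lo_at_hi)  # least-squares rho
mu_d, _ = gp_fit_predict(X_hi, y_hi - rho * mu_lo_at_hi, Xs)
mu_lo, _ = gp_fit_predict(X_lo, y_lo, Xs)
y_pred = rho * mu_lo + mu_d   # fused multi-fidelity prediction
```

The posterior variance returned by gp_fit_predict is what a Bayesian-optimization acquisition rule would use to balance exploration against exploitation, as described in the abstract.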
Perdikaris, Paris; Karniadakis, George Em
2016-01-01
We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space, and the efficient pursuit to identify global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration versus exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. PMID:27194481
Implementing a high-fidelity simulation program in a community college setting.
Tuoriniemi, Pamela; Schott-Baer, Darlene
2008-01-01
Despite their relatively high cost, there is heightened interest by faculty in undergraduate nursing programs to implement high-fidelity simulation (HFS) programs. High-fidelity simulators are appealing because they allow students to experience high-risk, low-volume patient problems in a realistic setting. The decision to purchase a simulator is the first step in the process of implementing and maintaining an HFS lab. Knowledge, technical skill, commitment, and considerable time are needed to develop a successful program. The process, as experienced by one community college nursing program, is described.
Denadai, Rafael; Oshiiwa, Marie; Saad-Hossne, Rogério
2014-03-01
The search for alternative and effective forms of training simulation is needed due to the ethical and medico-legal aspects involved in training surgical skills on living patients, human cadavers and living animals. To evaluate whether bench model fidelity interferes with the acquisition of elliptical excision skills by novice medical students. Forty novice medical students were randomly assigned to 5 practice conditions with instructor-directed elliptical excision skills training (n = 8): didactic materials (control); organic bench model (low-fidelity); ethylene-vinyl acetate bench model (low-fidelity); chicken leg skin bench model (high-fidelity); or pig foot skin bench model (high-fidelity). Pre- and post-tests were applied. A global rating scale, effect size, and self-perceived confidence based on a Likert scale were used to evaluate all elliptical excision performances. The analysis showed that after training, the students practicing on bench models had better performance based on the global rating scale (all P < 0.0000) and felt more confident in performing elliptical excision skills (all P < 0.0000) when compared to the control. There was no significant difference (all P > 0.05) between the groups that trained on bench models. The magnitude of the effect (basic cutaneous surgery skills training) was considered large (>0.80) in all measurements. The acquisition of elliptical excision skills after instructor-directed training on low-fidelity bench models was similar to that after training on high-fidelity bench models, and there was a more substantial increase in the elliptical excision performance of students who trained on any of the simulators compared to learning from didactic materials.
Wei, Hai-Rui; Deng, Fu-Guo
2013-07-29
We investigate the possibility of achieving scalable photonic quantum computing by the giant optical circular birefringence induced by a quantum-dot spin in a double-sided optical microcavity as a result of cavity quantum electrodynamics. We construct a deterministic controlled-NOT gate on two photonic qubits by two single-photon input-output processes and the readout on an electron-medium spin confined in an optical resonant microcavity. This idea could be applied to multi-qubit gates on photonic qubits and we give the quantum circuit for a three-photon Toffoli gate. High fidelities and high efficiencies could be achieved when the ratio of the side leakage to the cavity loss rate is low. It is worth pointing out that our devices work in both the strong and the weak coupling regimes.
Integration of a CAD System Into an MDO Framework
NASA Technical Reports Server (NTRS)
Townsend, J. C.; Samareh, J. A.; Weston, R. P.; Zorumski, W. E.
1998-01-01
NASA Langley has developed a heterogeneous distributed computing environment, called the Framework for Inter-disciplinary Design Optimization, or FIDO. Its purpose has been to demonstrate framework technical feasibility and usefulness for optimizing the preliminary design of complex systems and to provide a working environment for testing optimization schemes. Its initial implementation has been for a simplified model of preliminary design of a high-speed civil transport. Upgrades being considered for the FIDO system include a more complete geometry description, required by high-fidelity aerodynamics and structures codes and based on a commercial Computer Aided Design (CAD) system. This report presents the philosophy behind some of the decisions that have shaped the FIDO system and gives a brief case study of the problems and successes encountered in integrating a CAD system into the FIDO framework.
Machine learning bandgaps of double perovskites
Pilania, G.; Mannodi-Kanakkithodi, A.; Uberuaga, B. P.; Ramprasad, R.; Gubernatis, J. E.; Lookman, T.
2016-01-01
The ability to make rapid and accurate predictions on bandgaps of double perovskites is of much practical interest for a range of applications. While quantum mechanical computations for high-fidelity bandgaps are enormously computation-time intensive and thus impractical in high throughput studies, informatics-based statistical learning approaches can be a promising alternative. Here we demonstrate a systematic feature-engineering approach and a robust learning framework for efficient and accurate predictions of electronic bandgaps of double perovskites. After evaluating a set of more than 1.2 million features, we identify lowest occupied Kohn-Sham levels and elemental electronegativities of the constituent atomic species as the most crucial and relevant predictors. The developed models are validated and tested using the best practices of data science and further analyzed to rationalize their prediction performance. PMID:26783247
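As a schematic of the informatics approach (not the paper's own pipeline), any fast regressor fit on elemental descriptors can play the role of the learned bandgap model; the feature matrix below is random stand-in data, with the descriptor names merely echoing the ones the paper identifies as relevant.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: rows = double perovskites, columns =
# elemental descriptors (e.g., electronegativities and lowest occupied
# Kohn-Sham levels of the constituent species).
rng = np.random.default_rng(0)
X = rng.random((500, 8))                      # stand-in descriptors
y = 2.0 * X[:, 0] + 1.5 * X[:, 3] + 0.1 * rng.standard_normal(500)  # toy target

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```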
Lu, Hsuan-Hao; Lukens, Joseph M.; Peters, Nicholas A.; ...
2018-01-18
In this paper, we report the experimental realization of high-fidelity photonic quantum gates for frequency-encoded qubits and qutrits based on electro-optic modulation and Fourier-transform pulse shaping. Our frequency version of the Hadamard gate offers near-unity fidelity (0.99998±0.00003), requires only a single microwave drive tone for near-ideal performance, functions across the entire C band (1530–1570 nm), and can operate concurrently on multiple qubits spaced as tightly as four frequency modes apart, with no observable degradation in the fidelity. For qutrits, we implement a 3×3 extension of the Hadamard gate: the balanced tritter. This tritter—the first ever demonstrated for frequency modes—attains fidelity 0.9989±0.0004. Finally, these gates represent important building blocks toward scalable, high-fidelity quantum information processing based on frequency encoding.
Rotorcraft Research at the NASA Vertical Motion Simulator
NASA Technical Reports Server (NTRS)
Aponso, Bimal Lalith; Tran, Duc T.; Schroeder, Jeffrey A.
2009-01-01
In the 1970s the role of the military helicopter evolved to encompass more demanding missions, including low-level nap-of-the-earth flight and operation in severely degraded visual environments. The Vertical Motion Simulator (VMS) at the NASA Ames Research Center was built to provide a high-fidelity simulation capability for researching new rotorcraft concepts and technologies that could satisfy these mission requirements. The VMS combines a high-fidelity large-amplitude motion system with an adaptable simulation environment, including interchangeable and configurable cockpits. In almost 30 years of operation, rotorcraft research on the VMS has contributed significantly to the knowledge base on rotorcraft performance, handling qualities, flight control, and guidance and displays. These contributions have directly benefited current rotorcraft programs and flight safety. The high-fidelity motion system in the VMS was also used to research simulation fidelity. This research provided a fundamental understanding of pilot cueing modalities and their effect on simulation fidelity.
Vermeulen, Joeri; Beeckman, Katrien; Turcksin, Rivka; Van Winkel, Lies; Gucciardo, Léonardo; Laubach, Monika; Peersman, Wim; Swinnen, Eva
2017-06-01
Simulation training is a powerful and evidence-based teaching method in healthcare. It allows students to develop essential competences that are often difficult to achieve during internships. High-Fidelity Perinatal Simulation exposes them to real-life scenarios in a safe environment. Although student midwives' experiences need to be considered to make the simulation training work, these have been overlooked so far. To explore the experiences of last-year student midwives with High-Fidelity Perinatal Simulation training. A qualitative descriptive study, using three focus group conversations with last-year student midwives (n=24). Audio tapes were transcribed and a thematic content analysis was performed. The entire data set was coded according to recurrent or common themes. To achieve investigator triangulation and confirm themes, discussions among the researchers was incorporated in the analysis. Students found High-Fidelity Perinatal Simulation training to be a positive learning method that increased both their competence and confidence. Their experiences varied over the different phases of the High-Fidelity Perinatal Simulation training. Although uncertainty, tension, confusion and disappointment were experienced throughout the simulation trajectory, they reported that this did not affect their learning and confidence-building. As High-Fidelity Perinatal Simulation training constitutes a helpful learning experience in midwifery education, it could have a positive influence on maternal and neonatal outcomes. In the long term, it could therefore enhance the midwifery profession in several ways. The present study is an important first step in opening up the debate about the pedagogical use of High-Fidelity Perinatal Simulation training within midwifery education. Copyright © 2017 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
REVEAL: An Extensible Reduced Order Model Builder for Simulation and Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Khushbu; Sharma, Poorva; Ma, Jinliang
2013-04-30
Many science domains need to build computationally efficient and accurate representations of high-fidelity, computationally expensive simulations. These computationally efficient versions are known as reduced-order models. This paper presents the design and implementation of a novel reduced-order model (ROM) builder, the REVEAL toolset. This toolset generates ROMs based on science- and engineering-domain specific simulations executed on high performance computing (HPC) platforms. The toolset encompasses a range of sampling and regression methods that can be used to generate a ROM, automatically quantifies the ROM accuracy, and provides support for an iterative approach to improve ROM accuracy. REVEAL is designed to be extensible in order to utilize the core functionality with any simulator that has published input and output formats. It also defines programmatic interfaces to include new sampling and regression techniques so that users can ‘mix and match’ mathematical techniques to best suit the characteristics of their model. In this paper, we describe the architecture of REVEAL and demonstrate its usage with a computational fluid dynamics model used in carbon capture.
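The sample/fit/verify/enrich loop that ROM builders of this kind automate can be sketched as follows; the simulator, the polynomial ROM, and the error target are all illustrative stand-ins for REVEAL's pluggable sampling and regression methods.

```python
import numpy as np

def build_rom(simulator, bounds, n0=20, batch=10, target_rmse=1e-2,
              max_rounds=5, degree=3, seed=0):
    """Iterative ROM construction: sample the expensive simulator, fit a
    cheap regressor, quantify accuracy on held-out runs, and add samples
    until the error target is met. `simulator` maps a parameter vector
    to a scalar output; this 1-D polynomial ROM is purely illustrative."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = lo + (hi - lo) * rng.random((n0, len(lo)))
    y = np.array([simulator(x) for x in X])
    for _ in range(max_rounds):
        coef = np.polyfit(X[:, 0], y, degree)          # fit the ROM
        # Hold-out accuracy check with a fresh batch of simulator runs.
        Xh = lo + (hi - lo) * rng.random((batch, len(lo)))
        yh = np.array([simulator(x) for x in Xh])
        rmse = np.sqrt(np.mean((np.polyval(coef, Xh[:, 0]) - yh) ** 2))
        if rmse <= target_rmse:
            return coef, rmse
        X, y = np.vstack([X, Xh]), np.concatenate([y, yh])  # enrich samples
    return coef, rmse

coef, err = build_rom(lambda x: np.exp(-x[0]) * np.sin(5 * x[0]),
                      bounds=[(0.0, 2.0)])
```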
Surrogate-based Analysis and Optimization
NASA Technical Reports Server (NTRS)
Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin
2005-01-01
A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
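The surrogate-construction choices mentioned above (loss function and regularization criteria) amount to selecting the terms of a penalized fitting problem, stated generically as:

```latex
\min_{w}\; \sum_{i=1}^{N} L\big(y_i,\, \hat{f}(x_i; w)\big) \;+\; \lambda\, R(w),
```

with, for example, squared loss L(y, ŷ) = (y − ŷ)² and a ridge penalty R(w) = ‖w‖₂², where λ trades data fit against surrogate smoothness. This is a generic statement of the trade-off, not the paper's specific formulation.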
VERA Core Simulator Methodology for PWR Cycle Depletion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochunas, Brendan; Collins, Benjamin S; Jabaay, Daniel
2015-01-01
This paper describes the methodology developed and implemented in MPACT for performing high-fidelity pressurized water reactor (PWR) multi-cycle core physics calculations. MPACT is being developed primarily for application within the Consortium for the Advanced Simulation of Light Water Reactors (CASL) as one of the main components of the VERA Core Simulator, the others being COBRA-TF and ORIGEN. The methods summarized in this paper include a methodology for performing resonance self-shielding and computing macroscopic cross sections, 2-D/1-D transport, nuclide depletion, thermal-hydraulic feedback, and other supporting methods. These methods represent a minimal set needed to simulate high-fidelity models of a realistic nuclear reactor. Results demonstrating this are presented from the simulation of a realistic model of the first cycle of Watts Bar Unit 1. The simulation, which approximates the cycle operation, is observed to be within 50 ppm boron (ppmB) reactivity for all simulated points in the cycle and approximately 15 ppmB for a consistent statepoint. The verification and validation of the PWR cycle depletion capability in MPACT is the focus of two companion papers.
Simulating Descent and Landing of a Spacecraft
NASA Technical Reports Server (NTRS)
Balaram, J.; Jain, Abhinandan; Martin, Bryan; Lim, Christopher; Henriquez, David; McMahon, Elihu; Sohl, Garrett; Banerjee, Pranab; Steele, Robert; Bentley, Timothy
2005-01-01
The Dynamics Simulator for Entry, Descent, and Surface landing (DSENDS) software performs high-fidelity simulation of the Entry, Descent, and Landing (EDL) of a spacecraft into the atmosphere and onto the surface of a planet or a smaller body. DSENDS is an extension of the DShell and DARTS programs, which afford capabilities for mathematical modeling of the dynamics of a spacecraft as a whole and of its instruments, actuators, and other subsystems. DSENDS enables the modeling (including real-time simulation) of flight-train elements and all spacecraft responses during various phases of EDL. DSENDS provides high-fidelity models of the aerodynamics of entry bodies and parachutes plus supporting models of atmospheres. Terrain and real-time responses of terrain-imaging radar and lidar instruments can also be modeled. The program includes modules for simulation of guidance, navigation, hypersonic steering, and powered descent. Automated state-machine-driven model switching is used to represent spacecraft separations and reconfigurations. Models for computing landing contact and impact forces are expected to be added. DSENDS can be used as a stand-alone program or incorporated into a larger program that simulates operations in real time.
Quantum entanglement at ambient conditions in a macroscopic solid-state spin ensemble.
Klimov, Paul V; Falk, Abram L; Christle, David J; Dobrovitski, Viatcheslav V; Awschalom, David D
2015-11-01
Entanglement is a key resource for quantum computers, quantum-communication networks, and high-precision sensors. Macroscopic spin ensembles have been historically important in the development of quantum algorithms for these prospective technologies and remain strong candidates for implementing them today. This strength derives from their long-lived quantum coherence, strong signal, and ability to couple collectively to external degrees of freedom. Nonetheless, preparing ensembles of genuinely entangled spin states has required high magnetic fields and cryogenic temperatures or photochemical reactions. We demonstrate that entanglement can be realized in solid-state spin ensembles at ambient conditions. We use hybrid registers comprising electron-nuclear spin pairs that are localized at color-center defects in a commercial SiC wafer. We optically initialize 10³ identical registers in a 40-μm³ volume (with [Formula: see text] fidelity) and deterministically prepare them into the maximally entangled Bell states (with 0.88 ± 0.07 fidelity). To verify entanglement, we develop a register-specific quantum-state tomography protocol. The entanglement of a macroscopic solid-state spin ensemble at ambient conditions represents an important step toward practical quantum technology.
NASA Technical Reports Server (NTRS)
Bordano, Aldo; Uhde-Lacovara, JO; Devall, Ray; Partin, Charles; Sugano, Jeff; Doane, Kent; Compton, Jim
1993-01-01
The Navigation, Control and Aeronautics Division (NCAD) at NASA-JSC is exploring ways of producing Guidance, Navigation and Control (GN&C) flight software faster, better, and cheaper. To achieve these goals, NCAD established two hardware/software facilities that take an avionics design project from initial inception through high fidelity real-time hardware-in-the-loop testing. Commercially available software products are used to develop the GN&C algorithms in block diagram form and then automatically generate source code from these diagrams. A high fidelity real-time hardware-in-the-loop laboratory provides users with the capability to analyze mass memory usage within the targeted flight computer, verify hardware interfaces, and conduct system-level verification, performance, and acceptance testing, as well as mission verification using reconfigurable and mission-unique data. To evaluate these concepts and tools, NCAD embarked on a project to build a real-time 6 DOF simulation of the Soyuz Assured Crew Return Vehicle flight software. To date, a productivity increase of 185 percent has been seen over traditional NASA methods for developing flight software.
Time-domain least-squares migration using the Gaussian beam summation method
NASA Astrophysics Data System (ADS)
Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo
2018-04-01
With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
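As a toy illustration of the inversion described above, the Python sketch below minimizes an L2 waveform misfit with an L1 penalty by iterative soft thresholding (ISTA). A small random matrix stands in for the time-domain Gaussian beam Born modeling operator, and the diagonal-Hessian preconditioner used in the paper is omitted for brevity; all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((80, 40))            # stand-in linear modeling operator
m_true = np.zeros(40)
m_true[[5, 17, 30]] = [1.0, -0.5, 0.8]       # sparse "reflectivity" model
d = L @ m_true                               # observed data (noise-free toy)

lam = 0.1                                    # L1 regularization weight
step = 1.0 / np.linalg.norm(L, 2) ** 2       # stable gradient step size
m = np.zeros(40)

for _ in range(500):
    grad = L.T @ (L @ m - d)                 # adjoint (migration) of the residual
    z = m - step * grad                      # gradient step on the L2 misfit
    m = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("recovered support:", np.flatnonzero(np.abs(m) > 0.1))
```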
NASA Astrophysics Data System (ADS)
Sisodia, Mitali; Shukla, Abhishek; Pathak, Anirban
2017-12-01
A scheme for distributed quantum measurement that allows nondestructive or indirect Bell measurement was proposed by Gupta et al [1]. In the present work, Gupta et al.'s scheme is experimentally realized using the five-qubit superconductivity-based quantum computer that IBM Corporation has recently made accessible via the cloud. The experiment confirmed that the Bell state can be constructed and measured in a nondestructive manner with a reasonably high fidelity. A comparison of the outcomes of this study and the results obtained earlier in an NMR-based experiment (Samal et al. (2010) [10]) has also been performed. The study indicates that to make a scalable SQUID-based quantum computer, errors introduced by the gates (in the present technology) have to be reduced considerably.
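For reference, the fidelity quoted in such experiments is the overlap F = ⟨Φ+|ρ|Φ+⟩ between the reconstructed density matrix ρ and the ideal Bell state. Below is a minimal numerical illustration in Python; the depolarizing noise model and its strength are assumptions for the example, not the IBM device's actual error model.

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |Phi+> = (|00> + |11>)/sqrt(2)
ideal = np.outer(phi_plus, phi_plus)             # ideal Bell density matrix

p = 0.1                                          # assumed depolarizing noise level
rho = (1 - p) * ideal + p * np.eye(4) / 4        # noisy prepared state

fidelity = float(np.real(phi_plus @ rho @ phi_plus))
print(f"Bell-state fidelity: {fidelity:.3f}")    # equals 1 - 3p/4 = 0.925 here
```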
Human-Centered Design of Human-Computer-Human Dialogs in Aerospace Systems
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1998-01-01
A series of ongoing research programs at Georgia Tech established a need for a simulation support tool for aircraft computer-based aids. This led to the design and development of the Georgia Tech Electronic Flight Instrument Research Tool (GT-EFIRT). GT-EFIRT is a part-task flight simulator specifically designed to study aircraft display design and single pilot interaction. The simulator, using commercially available graphics and Unix workstations, replicates to a high level of fidelity the Electronic Flight Instrument Systems (EFIS), Flight Management Computer (FMC) and Auto Flight Director System (AFDS) of the Boeing 757/767 aircraft. The simulator can be configured to present information using conventional looking B757/767 displays or next generation Primary Flight Displays (PFD) such as found on the Beech Starship and MD-11.
Applications of fidelity measures to complex quantum systems
2016-01-01
We revisit fidelity as a measure for the stability and the complexity of the quantum motion of single- and many-body systems. Within the context of cold atoms, we present an overview of applications of two fidelities, which we call static and dynamical fidelity, respectively. The static fidelity applies to quantum problems which can be diagonalized since it is defined via the eigenfunctions. In particular, we show that the static fidelity is a highly effective practical detector of avoided crossings characterizing the complexity of the systems and their evolutions. The dynamical fidelity is defined via the time-dependent wave functions. Focusing on the quantum kicked rotor system, we highlight a few practical applications of fidelity measurements in order to better understand the large variety of dynamical regimes of this paradigm of a low-dimensional system with mixed regular–chaotic phase space. PMID:27140967
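The dynamical fidelity mentioned above is the overlap F(t) = |⟨ψ_ε(t)|ψ_0(t)⟩|² between states propagated with a Hamiltonian and its weakly perturbed version. A small Python sketch makes this concrete; a random Hermitian matrix stands in for the kicked-rotor Hamiltonian, and the dimension, time step, and perturbation strength are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n, dt, eps = 64, 0.1, 0.02                 # dimension, time step, perturbation
A = rng.standard_normal((n, n))
H0 = (A + A.T) / 2                          # Hermitian base Hamiltonian
B = rng.standard_normal((n, n))
V = (B + B.T) / 2                           # Hermitian perturbation

U0 = expm(-1j * H0 * dt)                    # unperturbed propagator
U1 = expm(-1j * (H0 + eps * V) * dt)        # perturbed propagator

psi = np.zeros(n, dtype=complex)
psi[0] = 1.0
p0, p1 = psi.copy(), psi.copy()
for _ in range(50):
    p0, p1 = U0 @ p0, U1 @ p1
F = abs(np.vdot(p1, p0)) ** 2               # dynamical fidelity at t = 50*dt
print(f"fidelity after 50 steps: {F:.4f}")
```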
Rapid Automated Aircraft Simulation Model Updating from Flight Data
NASA Technical Reports Server (NTRS)
Brian, Geoff; Morelli, Eugene A.
2011-01-01
Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.
Demonstration of a quantum controlled-NOT gate in the telecommunications band.
Chen, Jun; Altepeter, Joseph B; Medic, Milja; Lee, Kim Fook; Gokden, Burc; Hadfield, Robert H; Nam, Sae Woo; Kumar, Prem
2008-04-04
We present the first quantum controlled-NOT (CNOT) gate realized using a fiber-based indistinguishable photon-pair source in the 1.55 μm telecommunications band. Using this free-space CNOT gate, all four Bell states are produced and fully characterized by performing quantum-state tomography, demonstrating the gate's unambiguous entangling capability and high fidelity. Telecom-band operation makes this CNOT gate particularly suitable for quantum-information-processing tasks that are at the interface of quantum communication and linear optical quantum computing.
High-Fidelity, Computational Modeling of Non-Equilibrium Discharges for Combustion Applications
2013-10-01
[Report fragments; recoverable details: 4th-order Runge-Kutta time integration with gradient reconstruction; domain-decomposition parallelism; a methane-air plasma chemistry mechanism with species and pathways relevant to plasma time scales (~10s of ns), including E, O, N2, O2, H, N2+, O2+, N4+, and O4+; photoionization treated with a 3-term Helmholtz equation model.]
High-Fidelity Modeling of Computer Network Worms
2004-06-22
[Report fragments; recoverable details: packet-level simulation of TCP-based worm propagation, among the largest TCP worm models simulated to date; a TCP vs. UDP worm comparison; a proxy server that maps virtual IP addresses to honeyd's MAC address in its ARP table and listens for packets from both sides; an experimental setup of Pentium-4 and Pentium-III ThinkPads running the proxy server and honeyd, respectively; and the Code Red II worm.]
Implementation of a Text-Based Content Intervention in Secondary Social Studies Classes.
Wanzek, Jeanne; Vaughn, Sharon
2016-12-01
We describe teacher fidelity (adherence to the components of the treatment as specified by the research team) based on a series of studies of a multicomponent intervention, Promoting Acceleration of Comprehension and Content Through Text (PACT), with middle and high school social studies teachers and their students. Findings reveal that even with highly specified materials and implementing practices that are aligned with effective reading comprehension and content instruction, teachers' fidelity was consistently low for some components and high for others. Teachers demonstrated consistently high implementation fidelity and quality for the instructional components of building background knowledge (comprehension canopy) and teaching key content vocabulary (essential words), whereas we recorded consistently lower fidelity and quality of implementation for the instructional components of critical reading and knowledge application. © 2016 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Goodrich, Kenneth H.; McManus, John W.; Chappell, Alan R.
1992-01-01
A batch air combat simulation environment known as the Tactical Maneuvering Simulator (TMS) is presented. The TMS serves as a tool for developing and evaluating tactical maneuvering logics. The environment can also be used to evaluate the tactical implications of perturbations to aircraft performance or supporting systems. The TMS is capable of simulating air combat between any number of engagement participants, with practical limits imposed by computer memory and processing power. Aircraft are modeled using equations of motion, control laws, aerodynamics and propulsive characteristics equivalent to those used in high-fidelity piloted simulation. Databases representative of a modern high-performance aircraft with and without thrust-vectoring capability are included. To simplify the task of developing and implementing maneuvering logics in the TMS, an outer-loop control system known as the Tactical Autopilot (TA) is implemented in the aircraft simulation model. The TA converts guidance commands issued by computerized maneuvering logics in the form of desired angle-of-attack and wind axis-bank angle into inputs to the inner-loop control augmentation system of the aircraft. This report describes the capabilities and operation of the TMS.
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.
2017-12-01
The efficiency of many hydrogeological applications such as reactive-transport and contaminant remediation depends strongly on the macroscopic mixing occurring in the aquifer. For remediation activities, it is essential to enhance and control mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction and by the heterogeneity and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for the mixing process is not well studied, partly because understanding and quantifying mixing requires multiple runs of high-fidelity numerical simulations across a range of subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousands of processors. As a result, they may not be feasible for studying the importance and impact of model inputs on mixing. Hence, there is a pressing need to develop computationally efficient models to accurately predict the desired quantities of interest (QoIs) for remediation and reactive-transport applications. An attractive way to construct computationally efficient models is through reduced-order modeling using machine learning. These approaches can substantially improve our capabilities to model and predict the remediation process. Reduced-Order Models (ROMs) are similar to analytical solutions or lookup tables; however, the method by which ROMs are constructed is different. Here, we present a physics-informed machine learning framework to construct ROMs based on high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, support vector machines (SVMs) are used to construct ROMs based on these inputs. These ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for certain important QoIs such as degree of mixing and product yield. The dependence of the scaling-law parameters on model inputs is evaluated using cluster analysis. We demonstrate application of the developed method for model analyses of reactive-transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The developed method is directly applicable for analyses of alternative site remediation scenarios.
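The two-stage workflow described above (rank input importance, then fit an SVM surrogate) can be sketched with standard scikit-learn tools. The data below are synthetic stand-ins for high-fidelity mixing simulations, and the input names are hypothetical; this is an illustration of the pipeline, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import f_regression, mutual_info_regression
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 3))     # hypothetical inputs, e.g. pumping rate,
                                   # heterogeneity, anisotropy (scaled to [0, 1])
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(200)

# Step 1: rank input importance three ways, as in the workflow above.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
F_scores, _ = f_regression(X, y)
mi = mutual_info_regression(X, y, random_state=0)
print("random forest:", rf.feature_importances_.round(2))
print("F-test:       ", F_scores.round(1))
print("mutual info:  ", mi.round(2))

# Step 2: fit an SVM-based reduced-order model on the (important) inputs.
rom = SVR(kernel="rbf", C=10.0).fit(X, y)
print("ROM prediction at a new input:", rom.predict([[0.4, 0.6, 0.1]]))
```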
Stimulated Brillouin scattering continuous wave phase conjugation in step-index fiber optics.
Massey, Steven M; Spring, Justin B; Russell, Timothy H
2008-07-21
Continuous wave (CW) stimulated Brillouin scattering (SBS) phase conjugation in step-index optical fibers was studied experimentally and modeled as a function of fiber length. A phase conjugate fidelity over 80% was measured from SBS in a 40 m fiber using a pinhole technique. Fidelity decreases with fiber length, and a fiber with a numerical aperture (NA) of 0.06 was found to generate good phase conjugation fidelity over longer lengths than a fiber with 0.13 NA. Modeling and experiment support previous work showing that the maximum interaction length which yields a high fidelity phase conjugate beam is inversely proportional to the fiber NA², but find that fidelity remains high over much longer fiber lengths than previous models calculated. Conditions for SBS beam cleanup in step-index fibers are discussed.
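To make the quoted 1/NA² scaling concrete, the fragment below evaluates it for the two fibers mentioned. Anchoring the proportionality constant at the 40 m, NA = 0.06 measurement is an assumption for illustration only, particularly since the paper reports that fidelity can persist beyond such model estimates.

```python
for na in (0.06, 0.13):
    L_max = 40.0 * (0.06 / na) ** 2   # anchored at the ~40 m, NA = 0.06 result
    print(f"NA = {na}: high-fidelity length scale ~ {L_max:.0f} m")
```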
Phosphate-binding pocket in Dicer-2 PAZ domain for high-fidelity siRNA production
Kandasamy, Suresh K.
2016-01-01
The enzyme Dicer produces small silencing RNAs such as micro-RNAs (miRNAs) and small interfering RNAs (siRNAs). In Drosophila, Dicer-1 produces ∼22–24-nt miRNAs from pre-miRNAs, whereas Dicer-2 makes 21-nt siRNAs from long double-stranded RNAs (dsRNAs). How Dicer-2 precisely makes 21-nt siRNAs with a remarkably high fidelity is unknown. Here we report that recognition of the 5′-monophosphate of a long dsRNA substrate by a phosphate-binding pocket in the Dicer-2 PAZ (Piwi, Argonaute, and Zwille/Pinhead) domain is crucial for the length fidelity, but not the efficiency, in 21-nt siRNA production. Loss of the length fidelity, meaning increased length heterogeneity of siRNAs, caused by point mutations in the phosphate-binding pocket of the Dicer-2 PAZ domain decreased RNA silencing activity in vivo, showing the importance of the high fidelity to make 21-nt siRNAs. We propose that the 5′-monophosphate of a long dsRNA substrate is anchored by the phosphate-binding pocket in the Dicer-2 PAZ domain and the distance between the pocket and the RNA cleavage active site in the RNaseIII domain corresponds to the 21-nt pitch in the A-form duplex of a long dsRNA substrate, resulting in high-fidelity 21-nt siRNA production. This study sheds light on the molecular mechanism by which Dicer-2 produces 21-nt siRNAs with a remarkably high fidelity for efficient RNA silencing. PMID:27872309
Emulating short-term synaptic dynamics with memristive devices
NASA Astrophysics Data System (ADS)
Berdan, Radu; Vasilaki, Eleni; Khiat, Ali; Indiveri, Giacomo; Serb, Alexandru; Prodromakis, Themistoklis
2016-01-01
Neuromorphic architectures offer great promise for achieving computation capacities beyond conventional von Neumann machines. The essential elements for achieving this vision are highly scalable synaptic mimics that do not undermine biological fidelity. Here we demonstrate that single solid-state TiO2 memristors can exhibit non-associative plasticity phenomena observed in biological synapses, supported by their metastable memory state transition properties. We show that, contrary to conventional uses of solid-state memory, the existence of rate-limiting volatility is a key feature for capturing short-term synaptic dynamics. We also show how the temporal dynamics of our prototypes can be exploited to implement spatio-temporal computation, demonstrating the memristors' full potential for building biophysically realistic neural processing systems.
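The role of rate-limited volatility can be sketched with a toy state equation: a conductance-like variable is potentiated by pulses but relaxes toward rest between them, so closely spaced pulses accumulate while sparse ones fade. The Python fragment below is a qualitative illustration; the time constants and amplitudes are arbitrary, not fitted TiO2 device values.

```python
import numpy as np

dt, tau, w_rest = 1e-3, 50e-3, 0.1        # time step [s], decay constant, rest state

def run(pulse_times, t_end=0.5, dw=0.2):
    """Integrate a volatile state variable driven by discrete pulses."""
    w, trace = w_rest, []
    for k in range(int(t_end / dt)):
        t = k * dt
        w += dt * (w_rest - w) / tau      # rate-limited relaxation (volatility)
        if any(abs(t - tp) < dt / 2 for tp in pulse_times):
            w += dw                       # pulse-induced potentiation
        trace.append(w)
    return np.array(trace)

dense = run([0.10, 0.11, 0.12, 0.13])     # 10 ms spacing: responses accumulate
sparse = run([0.10, 0.20, 0.30, 0.40])    # 100 ms spacing: state decays between pulses
print(f"peak state, dense train: {dense.max():.2f}; sparse train: {sparse.max():.2f}")
```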
Influence of sampling rate on the calculated fidelity of an aircraft simulation
NASA Technical Reports Server (NTRS)
Howard, J. C.
1983-01-01
One of the factors that influences the fidelity of an aircraft digital simulation is the sampling rate. As the sampling rate is increased, the calculated response of the discrete representation tends to coincide with the response of the corresponding continuous system. Because of computer limitations, however, the sampling rate cannot be increased indefinitely. Moreover, real-time simulation requirements demand that a finite sampling rate be adopted. In view of these restrictions, a study was undertaken to determine the influence of sampling rate on the response characteristics of a simulated aircraft describing short-period oscillations. Changes in the calculated response characteristics of the simulated aircraft degrade the fidelity of the simulation. In the present context, fidelity degradation is defined as the percentage change in those characteristics that have the greatest influence on pilot opinion: short-period frequency ω, short-period damping ratio ζ, and the product ωζ. To determine the influence of the sampling period on these characteristics, the equations describing the response of a DC-8 aircraft to elevator control inputs were used. The results indicate that if the sampling period is too large, the fidelity of the simulation can be degraded.
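A small worked example shows the effect. Discretizing a second-order short-period mode and converting the discrete eigenvalue back to an equivalent continuous pole reveals how the apparent ω and ζ drift as the sampling period grows. The mode values below are illustrative, not the DC-8 data used in the study.

```python
import numpy as np

omega, zeta = 2.0, 0.35                        # assumed short-period mode (rad/s, -)
A = np.array([[0.0, 1.0],
              [-omega**2, -2.0 * zeta * omega]])   # x' = A x

for dt in (0.01, 0.05, 0.2):                   # sampling periods [s]
    Ad = np.eye(2) + dt * A                    # forward-Euler discrete system
    s = np.log(np.linalg.eigvals(Ad)[0]) / dt  # equivalent continuous pole
    print(f"dt = {dt:5.2f}: omega_eff = {abs(s):.3f}, "
          f"zeta_eff = {-s.real / abs(s):.3f}")
```

At dt = 0.2 s the recovered damping ratio drops well below the true 0.35, the kind of fidelity degradation the study quantifies.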
Denadai, Rafael; Oshiiwa, Marie; Saad-Hossne, Rogério
2014-01-01
Background: The search for alternative and effective forms of training simulation is needed due to ethical and medico-legal aspects involved in training surgical skills on living patients, human cadavers and living animals. Aims: To evaluate if the bench model fidelity interferes in the acquisition of elliptical excision skills by novice medical students. Materials and Methods: Forty novice medical students were randomly assigned to 5 practice conditions with instructor-directed elliptical excision skills’ training (n = 8): didactic materials (control); organic bench model (low-fidelity); ethylene-vinyl acetate bench model (low-fidelity); chicken legs’ skin bench model (high-fidelity); or pig foot skin bench model (high-fidelity). Pre- and post-tests were applied. Global rating scale, effect size, and self-perceived confidence based on Likert scale were used to evaluate all elliptical excision performances. Results: The analysis showed that after training, the students practicing on bench models had better performance based on Global rating scale (all P < 0.0000) and felt more confident to perform elliptical excision skills (all P < 0.0000) when compared to the control. There was no significant difference (all P > 0.05) between the groups that trained on bench models. The magnitude of the effect (basic cutaneous surgery skills’ training) was considered large (>0.80) in all measurements. Conclusion: The acquisition of elliptical excision skills after instructor-directed training on low-fidelity bench models was similar to the training on high-fidelity bench models; and there was a more substantial increase in elliptical excision performances of students that trained on all simulators compared to the learning on didactic materials. PMID:24700937
Silicon CMOS architecture for a spin-based quantum computer.
Veldhorst, M; Eenink, H G J; Yang, C H; Dzurak, A S
2017-12-15
Recent advances in quantum error correction codes for fault-tolerant quantum computing and physical realizations of high-fidelity qubits in multiple platforms give promise for the construction of a quantum computer based on millions of interacting qubits. However, the classical-quantum interface remains a nascent field of exploration. Here, we propose an architecture for a silicon-based quantum computer processor based on complementary metal-oxide-semiconductor (CMOS) technology. We show how a transistor-based control circuit together with charge-storage electrodes can be used to operate a dense and scalable two-dimensional qubit system. The qubits are defined by the spin state of a single electron confined in quantum dots, coupled via exchange interactions, controlled using a microwave cavity, and measured via gate-based dispersive readout. We implement a spin qubit surface code, showing the prospects for universal quantum computation. We discuss the challenges and focus areas that need to be addressed, providing a path for large-scale quantum computing.
IMAGE: A Design Integration Framework Applied to the High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1993-01-01
Effective design of the High Speed Civil Transport requires the systematic application of design resources throughout a product's life-cycle. Information obtained from the use of these resources is used for the decision-making processes of Concurrent Engineering. Integrated computing environments facilitate the acquisition, organization, and use of required information. State-of-the-art computing technologies provide the basis for the Intelligent Multi-disciplinary Aircraft Generation Environment (IMAGE) described in this paper. IMAGE builds upon existing agent technologies by adding a new component called a model. With the addition of a model, the agent can provide accountable resource utilization in the presence of increasing design fidelity. The development of a zeroth-order agent is used to illustrate agent fundamentals. Using a CATIA(TM)-based agent from previous work, a High Speed Civil Transport visualization system linking CATIA, FLOPS, and ASTROS will be shown. These examples illustrate the important role of the agent technologies used to implement IMAGE, and together they demonstrate that IMAGE can provide an integrated computing environment for the design of the High Speed Civil Transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert
2017-07-10
Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An advantage and attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate when compared to running a high-fidelity numerical simulation. A reduced-order model takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours to run on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are developed by minimizing a non-linear least-squares misfit function using the Levenberg–Marquardt algorithm. The misfit function is based on the difference between numerical simulation data and the reduced-order model. ROM-1 is constructed from polynomials up to fourth order. ROM-1 is able to accurately reproduce the power output of numerical simulations for low values of permeability and certain features of the field-scale data. ROM-2 is a model with more analytical functions, consisting of polynomials up to order eight, exponential functions, and smooth approximations of Heaviside functions, and accurately describes the field data. At higher permeabilities, ROM-2 reproduces numerical results better than ROM-1; however, there is a considerable deviation from numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data. ROM-3, on the other hand, provides a middle ground for model parsimony. Based on R²-values for the training, validation, and prediction data sets, we found that ROM-3 is a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. As for computational time, all the ROMs are 10⁴ times faster than running a high-fidelity numerical simulation. In conclusion, this makes the proposed regression-based ROMs attractive for real-time EGS applications because they are fast and provide reasonably good predictions for thermal power output.
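The coefficient-fitting step described above (nonlinear least squares with Levenberg–Marquardt) is sketched below in Python. The synthetic exponential "power output" curve and the quartic form stand in for the simulation data and ROM-1; neither comes from the paper's actual datasets.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 1.0, 50)                 # normalized production time
rng = np.random.default_rng(3)
power = 10.0 * np.exp(-1.5 * t) + 0.05 * rng.standard_normal(50)  # synthetic data

def residuals(c):
    rom = c[0] + c[1]*t + c[2]*t**2 + c[3]*t**3 + c[4]*t**4  # quartic, ROM-1-style
    return rom - power                        # misfit vs. "simulation" data

fit = least_squares(residuals, x0=np.ones(5), method="lm")  # Levenberg-Marquardt
print("ROM coefficients:", fit.x.round(3))
print("RMS misfit:", round(float(np.sqrt(np.mean(fit.fun**2))), 3))
```

Once fitted, evaluating the polynomial is essentially free, which is what makes such ROMs attractive for real-time use.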
Use of High Fidelity Methods in Multidisciplinary Optimization-A Preliminary Survey
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)
2002-01-01
Multidisciplinary optimization is a key element of the design process. To date, multidisciplinary optimization methods that use low-fidelity models are well advanced. Optimization methods based on simple linear aerodynamic equations and plate structural equations have been applied to complex aerospace configurations. However, use of high-fidelity methods such as the Euler/Navier-Stokes equations for fluids and 3-D (three dimensional) finite elements for structures has begun recently. As an activity of the Multidiscipline Design Optimization Technical Committee (MDO TC) of the AIAA (American Institute of Aeronautics and Astronautics), an effort was initiated to assess the status of the use of high fidelity methods in multidisciplinary optimization. Contributions were solicited through the members of the MDO TC. This paper provides a summary of that survey.
Soft Electronics Enabled Ergonomic Human-Computer Interaction for Swallowing Training
Lee, Yongkuk; Nicholls, Benjamin; Sup Lee, Dong; Chen, Yanfei; Chun, Youngjae; Siang Ang, Chee; Yeo, Woon-Hong
2017-01-01
We introduce a skin-friendly electronic system that enables human-computer interaction (HCI) for swallowing training in dysphagia rehabilitation. For an ergonomic HCI, we utilize a soft, highly compliant (“skin-like”) electrode, which addresses critical issues of an existing rigid and planar electrode combined with a problematic conductive electrolyte and adhesive pad. The skin-like electrode offers a highly conformal, user-comfortable interaction with the skin for long-term wearable, high-fidelity recording of swallowing electromyograms on the chin. Mechanics modeling and experimental quantification captures the ultra-elastic mechanical characteristics of an open mesh microstructured sensor, conjugated with an elastomeric membrane. Systematic in vivo studies investigate the functionality of the soft electronics for HCI-enabled swallowing training, which includes the application of a biofeedback system to detect swallowing behavior. The collection of results demonstrates clinical feasibility of the ergonomic electronics in HCI-driven rehabilitation for patients with swallowing disorders. PMID:28429757
In situ single-atom array synthesis using dynamic holographic optical tweezers
Kim, Hyosub; Lee, Woojun; Lee, Han-gyeol; Jo, Hanlae; Song, Yunheung; Ahn, Jaewook
2016-01-01
Establishing a reliable method to form scalable neutral-atom platforms is an essential cornerstone for quantum computation, quantum simulation and quantum many-body physics. Here we demonstrate a real-time transport of single atoms using holographic microtraps controlled by a liquid-crystal spatial light modulator. For this, an analytical design approach to flicker-free microtrap movement is devised and cold rubidium atoms are simultaneously rearranged with 2N motional degrees of freedom, representing unprecedented space controllability. We also accomplish an in situ feedback control for single-atom rearrangements with the high success rate of 99% for up to 10 μm translation. We hope this proof-of-principle demonstration of high-fidelity atom-array preparations will be useful for deterministic loading of N single atoms, especially on arbitrary lattice locations, and also for real-time qubit shuttling in high-dimensional quantum computing architectures. PMID:27796372
Impact of the Columbia Supercomputer on NASA Space and Exploration Mission
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Kwak, Dochan; Kiris, Cetin; Lawrence, Scott
2006-01-01
NASA's 10,240-processor Columbia supercomputer gained worldwide recognition in 2004 for increasing the space agency's computing capability ten-fold, and enabling U.S. scientists and engineers to perform significant, breakthrough simulations. Columbia has amply demonstrated its capability to accelerate NASA's key missions, including space operations, exploration systems, science, and aeronautics. Columbia is part of an integrated high-end computing (HEC) environment comprised of massive storage and archive systems, high-speed networking, high-fidelity modeling and simulation tools, application performance optimization, and advanced data analysis and visualization. In this paper, we illustrate the impact Columbia is having on NASA's numerous space and exploration applications, such as the development of the Crew Exploration and Launch Vehicles (CEV/CLV), effects of long-duration human presence in space, and damage assessment and repair recommendations for remaining shuttle flights. We conclude by discussing HEC challenges that must be overcome to solve space-related science problems in the future.
Graph Representations of Flow and Transport in Fracture Networks using Machine Learning
NASA Astrophysics Data System (ADS)
Srinivasan, G.; Viswanathan, H. S.; Karra, S.; O'Malley, D.; Godinez, H. C.; Hagberg, A.; Osthus, D.; Mohd-Yusof, J.
2017-12-01
Flow and transport of fluids through fractured systems is governed by the properties and interactions at the micro-scale. Retaining information about the micro-structure such as fracture length, orientation, aperture and connectivity in mesh-based computational models results in solving for millions to billions of degrees of freedom and quickly renders the problem computationally intractable. Our approach depicts fracture networks graphically, by mapping fractures to nodes and intersections to edges, thereby greatly reducing computational burden. Additionally, we use machine learning techniques to build simulators on the graph representation, trained on data from the mesh-based high fidelity simulations to speed up computation by orders of magnitude. We demonstrate our methodology on ensembles of discrete fracture networks, dividing up the data into training and validation sets. Our machine learned graph-based solvers result in over 3 orders of magnitude speedup without any significant sacrifice in accuracy.
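The reduction described above, fractures to nodes (or, dually, intersections to edges) plus a learned emulator of the mesh-based solver, can be sketched with networkx and scikit-learn. Everything below is a synthetic stand-in: the random graphs, the "aperture" attribute, and the proxy breakthrough-time target are illustrative, not the discrete-fracture-network ensembles used in the study.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

def random_network():
    """Random graph standing in for a discrete fracture network."""
    g = nx.gnm_random_graph(20, 40, seed=int(rng.integers(10**6)))
    for _, _, e in g.edges(data=True):
        e["aperture"] = rng.uniform(0.1, 1.0)   # per-intersection property
    return g

def features(g):
    ap = [e["aperture"] for _, _, e in g.edges(data=True)]
    return [np.mean(ap), np.max(ap), nx.average_clustering(g)]

nets = [random_network() for _ in range(100)]
X = np.array([features(g) for g in nets])
# Stand-in target: a cheap proxy for breakthrough time from the mesh solver.
y = 1.0 / X[:, 0] + 0.01 * rng.standard_normal(100)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:80], y[:80])
print("validation R^2:", round(model.score(X[80:], y[80:]), 3))
```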
NASA Technical Reports Server (NTRS)
Ferzali, Wassim; Zacharakis, Vassilis; Upadhyay, Triveni; Weed, Dennis; Burke, Gregory
1995-01-01
The ICAO Aeronautical Mobile Communications Panel (AMCP) completed the drafting of the Aeronautical Mobile Satellite Service (AMSS) Standards and Recommended Practices (SARPs) and the associated Guidance Material and submitted these documents to the ICAO Air Navigation Commission (ANC) for ratification in May 1994. This effort encompassed an extensive, multi-national SARPs validation. As part of this activity, the US Federal Aviation Administration (FAA) sponsored an effort to validate the SARPs via computer simulation. This paper provides a description of this effort. Specifically, it describes: (1) the approach selected for the creation of a high-fidelity AMSS computer model; (2) the test traffic generation scenarios; and (3) the resultant AMSS performance assessment. More recently, the AMSS computer model was also used to provide AMSS performance statistics in support of the RTCA standardization activities. This paper describes this effort as well.
Measuring trainer fidelity in the transfer of suicide prevention training
Cross, Wendi F.; Pisani, Anthony R.; Schmeelk-Cone, Karen; Xia, Yinglin; Tu, Xin; McMahon, Marcie; Munfakh, Jimmie Lou; Gould, Madelyn S.
2014-01-01
Background: Finding effective and efficient models to train large numbers of suicide prevention interventionists, including 'hotline' crisis counselors, is a high priority. Train-the-trainer (TTT) models are widely used but understudied. Aims: To assess the extent to which trainers following TTT delivered the Applied Suicide Intervention Skills Training (ASIST) program with fidelity, and to examine fidelity across two trainings and seven training segments. Methods: We recorded and reliably rated trainer fidelity, defined as adherence to program content and competence of program delivery, for 34 newly trained ASIST trainers delivering the program to crisis center staff on two separate occasions. A total of 324 observations were coded. Trainer demographics were also collected. Results: On average, trainers delivered two-thirds of the program. Previous training was associated with lower levels of trainer adherence to the program. 18% of trainers' observations were rated as solidly competent. Trainers did not improve fidelity from their first to second training. Significantly higher fidelity was found for lectures and lower fidelity was found for interactive training activities including asking about suicide and creating a safe plan. Conclusions: We found wide variability in trainer fidelity to the ASIST program following TTT and few trainers had high levels of both adherence and competence. More research is needed to examine the cost-effectiveness of TTT models. PMID:24901061
A GPU-based incompressible Navier-Stokes solver on moving overset grids
NASA Astrophysics Data System (ADS)
Chandar, Dominic D. J.; Sitaraman, Jayanarayanan; Mavriplis, Dimitri J.
2013-07-01
In pursuit of obtaining high fidelity solutions to the fluid flow equations in a short span of time, graphics processing units (GPUs) which were originally intended for gaming applications are currently being used to accelerate computational fluid dynamics (CFD) codes. With a high peak throughput of about 1 TFLOPS on a PC, GPUs seem to be favourable for many high-resolution computations. One such computation that involves a lot of number crunching is computing time accurate flow solutions past moving bodies. The aim of the present paper is thus to discuss the development of a flow solver on unstructured and overset grids and its implementation on GPUs. In its present form, the flow solver solves the incompressible fluid flow equations on unstructured/hybrid/overset grids using a fully implicit projection method. The resulting discretised equations are solved using a matrix-free Krylov solver using several GPU kernels such as gradient, Laplacian and reduction. Some of the simple arithmetic vector calculations are implemented using the CU++ approach (an object-oriented framework for computational fluid dynamics applications using graphics processing units; Journal of Supercomputing, 2013, doi:10.1007/s11227-013-0985-9), where GPU kernels are automatically generated at compile time. Results are presented for two- and three-dimensional computations on static and moving grids.
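The matrix-free Krylov idea is worth a small sketch: the operator is applied as a stencil "kernel" (a GPU kernel in the paper; plain NumPy here), and the Krylov solver only ever sees the operator's action on a vector, never an assembled matrix. The grid, boundary conditions, and right-hand side below are toy assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 64                                      # n x n interior cells of a 2-D grid

def apply_laplacian(v):
    """5-point Laplacian stencil applied matrix-free (the 'kernel')."""
    u = v.reshape(n, n)
    out = 4.0 * u
    out[1:, :] -= u[:-1, :]
    out[:-1, :] -= u[1:, :]
    out[:, 1:] -= u[:, :-1]
    out[:, :-1] -= u[:, 1:]                 # zero Dirichlet boundaries implied
    return out.ravel()

A = LinearOperator((n * n, n * n), matvec=apply_laplacian)
b = np.ones(n * n)                          # toy right-hand side
x, info = cg(A, b)                          # Krylov solve, no matrix ever formed
print("CG converged:", info == 0, "| max(x) =", round(float(x.max()), 3))
```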
Lillard, Angeline S
2012-06-01
Research on the outcomes of Montessori education is scarce and results are inconsistent. One possible reason for the inconsistency is variations in Montessori implementation fidelity. To test whether outcomes vary according to implementation fidelity, we examined preschool children enrolled in high fidelity classic Montessori programs, lower fidelity Montessori programs that supplemented the program with conventional school activities, and, for comparison, conventional programs. Children were tested at the start and end of the school year on a range of social and academic skills. Although they performed no better in the fall, children in Classic Montessori programs, as compared with children in Supplemented Montessori and Conventional programs, showed significantly greater school-year gains on outcome measures of executive function, reading, math, vocabulary, and social problem-solving, suggesting that high fidelity Montessori implementation is associated with better outcomes than lower fidelity Montessori programs or conventional programs. Copyright © 2012 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Persing, T. Ray; Bellish, Christine A.; Brandon, Jay; Kenney, P. Sean; Carzoo, Susan; Buttrill, Catherine; Guenther, Arlene
2005-01-01
Several aircraft airframe modeling approaches are currently being used in the DoD community for acquisition, threat evaluation, training, and other purposes. To date there has been no clear empirical study of the impact of airframe simulation fidelity on piloted real-time aircraft simulation study results, or when use of a particular level of fidelity is indicated. This paper documents a series of piloted simulation studies using three different levels of airframe model fidelity. This study was conducted using the NASA Langley Differential Maneuvering Simulator. Evaluations were conducted with three pilots for scenarios requiring extensive maneuvering of the airplanes during air combat. In many cases, a low-fidelity modified point-mass model may be sufficient to evaluate the combat effectiveness of the aircraft. However, in cases where high angle-of-attack flying qualities and aerodynamic performance are a factor or when precision tracking ability of the aircraft must be represented, use of high-fidelity models is indicated.
A Blueprint for Demonstrating Quantum Supremacy with Superconducting Qubits
NASA Technical Reports Server (NTRS)
Kechedzhi, Kostyantyn
2018-01-01
Long coherence times and high fidelity control recently achieved in scalable superconducting circuits paved the way for the growing number of experimental studies of many-qubit quantum coherent phenomena in these devices. Although full implementation of quantum error correction and fault-tolerant quantum computation remains a challenge, near-term pre-error-correction devices could allow new fundamental experiments despite the inevitable accumulation of errors. One such open question, foundational for quantum computing, is achieving the so-called quantum supremacy: an experimental demonstration of a computational task that takes polynomial time on the quantum computer whereas the best classical algorithm would require exponential time and/or resources. It is possible to formulate such a task for a quantum computer consisting of fewer than 100 qubits. The computational task we consider is to provide approximate samples from a non-trivial quantum distribution. This is a generalization, for the case of superconducting circuits, of the ideas behind the boson sampling protocol for quantum optics introduced by Arkhipov and Aaronson. In this presentation we discuss a proof-of-principle demonstration of such a sampling task on a 9-qubit chain of superconducting gmon qubits developed by Google. We discuss theoretical analysis of the driven evolution of the device resulting in output approximating samples from a uniform distribution in the Hilbert space, a quantum chaotic state. We analyze quantum chaotic characteristics of the output of the circuit and the time required to generate a sufficiently complex quantum distribution. We demonstrate that the classical simulation of the sampling output requires exponential resources by connecting the task of calculating the output amplitudes to the sign problem of the Quantum Monte Carlo method. We also discuss the detailed theoretical modeling required to achieve high fidelity control and calibration of the multi-qubit unitary evolution in the device. We use a novel cross-entropy statistical metric as a figure of merit to verify the output and calibrate the device controls. Finally, we demonstrate the statistics of the wave function amplitudes generated on the 9-gmon chain and verify the quantum chaotic nature of the generated quantum distribution. This verifies the implementation of the quantum supremacy protocol.
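One common cross-entropy figure of merit is the linear cross-entropy benchmark F = 2^n⟨p(s)⟩ - 1, which approaches 1 when samples s come from the ideal chaotic (Porter-Thomas) distribution and 0 for uniform noise. The Python sketch below illustrates this behavior with a random complex state as a proxy for the chaotic output; it is a hedged illustration of the metric, not the calibration pipeline used in the talk.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 9                                            # qubits, as in the 9-gmon chain
dim = 2 ** n
psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
psi /= np.linalg.norm(psi)                       # proxy for a chaotic output state
p = np.abs(psi) ** 2                             # ideal output probabilities

samples_ideal = rng.choice(dim, size=20000, p=p) # device sampling perfectly
samples_noise = rng.integers(dim, size=20000)    # fully depolarized device

for name, s in [("ideal sampler", samples_ideal), ("uniform noise", samples_noise)]:
    f_xeb = dim * p[s].mean() - 1.0              # linear cross-entropy benchmark
    print(f"{name}: F_XEB = {f_xeb:+.3f}")
```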
plasmaFoam: An OpenFOAM framework for computational plasma physics and chemistry
NASA Astrophysics Data System (ADS)
Venkattraman, Ayyaswamy; Verma, Abhishek Kumar
2016-09-01
As emphasized in the 2012 Roadmap for low temperature plasmas (LTP), scientific computing has emerged as an essential tool for the investigation and prediction of the fundamental physical and chemical processes associated with these systems. While several in-house and commercial codes exist, with each having its own advantages and disadvantages, a common framework that can be developed by researchers from all over the world will likely accelerate the impact of computational studies on advances in low-temperature plasma physics and chemistry. In this regard, we present a finite volume computational toolbox to perform high-fidelity simulations of LTP systems. This framework, primarily based on the OpenFOAM solver suite, allows us to enhance our understanding of multiscale plasma phenomena by performing massively parallel, three-dimensional simulations on unstructured meshes using well-established high performance computing tools that are widely used in the computational fluid dynamics community. In this talk, we will present preliminary results obtained using the OpenFOAM-based solver suite with benchmark three-dimensional simulations of microplasma devices including both dielectric and plasma regions. We will also discuss the future outlook for the solver suite.
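At the core of such fluid LTP solvers is a finite-volume drift-diffusion update for the charged-species densities. The fragment below is a deliberately simplified 1-D Python analogue, not OpenFOAM/plasmaFoam code: the transport coefficients and field are arbitrary toy values, the field is held fixed rather than coupled to Poisson's equation, and explicit Euler replaces the higher-order time integration a production solver would use.

```python
import numpy as np

ncell, dx, dt = 100, 1e-4, 1e-9      # cells, cell size [m], time step [s]
mu, D, E = 0.03, 0.1, 1e4            # mobility, diffusivity, fixed field (toy values)
x = np.arange(ncell) * dx
ne = np.exp(-((x - 50 * dx) / 5e-4) ** 2)   # initial electron density pulse

for _ in range(2000):
    # Face fluxes: central drift term plus Fickian diffusion.
    flux = -mu * E * 0.5 * (ne[1:] + ne[:-1]) - D * (ne[1:] - ne[:-1]) / dx
    dnedt = np.zeros(ncell)
    dnedt[:-1] -= flux / dx          # conservative divergence of face fluxes
    dnedt[1:] += flux / dx
    ne += dt * dnedt                 # explicit Euler (a production solver would use RK)

print("density peak now at cell", int(np.argmax(ne)))
```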
Comparison of Performance Predictions for New Low-Thrust Trajectory Tools
NASA Technical Reports Server (NTRS)
Polsgrove, Tara; Kos, Larry; Hopkins, Randall; Crane, Tracie
2006-01-01
Several low thrust trajectory optimization tools have been developed over the last 3½ years by the Low Thrust Trajectory Tools development team. This toolset includes both low-medium fidelity and high fidelity tools, which allow the analyst to quickly research a wide mission trade space and perform advanced mission design. These tools were tested using a set of reference trajectories that exercised each tool's unique capabilities. This paper compares the performance predictions of the various tools against several of the reference trajectories. The intent is to verify agreement between the high fidelity tools and to quantify the performance prediction differences between tools of different fidelity levels.
NASA Astrophysics Data System (ADS)
Hori, Takane; Ichimura, Tsuyoshi; Takahashi, Narumi
2017-04-01
Here we propose a system for monitoring and forecasting of crustal activity, such as spatio-temporal variation in slip velocity on the plate interface including earthquakes, seismic wave propagation, and crustal deformation. Although we can obtain continuous, dense surface deformation data on land and partly on the sea floor, the obtained data are not fully utilized for monitoring and forecasting. It is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) calculation code for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation code both for structure and fault slip using (1) & (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. Ichimura et al. (2015, SC15) developed an unstructured FE non-linear seismic wave simulation code, which achieved physics-based urban earthquake simulation enhanced by 1.08 T DOF x 6.6 K time-steps. Ichimura et al. (2013, GJI) developed a high-fidelity FEM simulation code with a mesh generator to calculate crustal deformation in and around Japan, with complicated surface topography and subducting plate geometry, at 1 km mesh resolution. Fujita et al. (2016, SC16) improved the code for crustal deformation and achieved 2.05 T-DOF with 45 m resolution on the plate interface. This high-resolution analysis enables computation of the change of stress acting on the plate interface. Further, for inverse analyses, Errol et al. (2012, BSSA) developed a waveform inversion code for modeling 3D crustal structure, and Agata et al. (2015, AGU Fall Meeting) improved the high-fidelity FEM code to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. Furthermore, we are developing methods for forecasting the slip velocity variation on the plate interface. The basic concept is given in Hori et al. (2014, Oceanography), introducing an ensemble-based sequential data assimilation procedure. Although the prototype described there is for an elastic half-space model, we are applying it to 3D heterogeneous structure with the high-fidelity FE model.
NASA Astrophysics Data System (ADS)
Hori, T.; Agata, R.; Ichimura, T.; Fujita, K.; Yamaguchi, T.; Takahashi, N.
2017-12-01
Recently it has become possible to obtain continuous, dense surface deformation data on land and partly on the sea floor, but the obtained data are not fully utilized for monitoring and forecasting of crustal activity, such as spatio-temporal variation in slip velocity on the plate interface including earthquakes, seismic wave propagation, and crustal deformation. To construct a system for monitoring and forecasting, it is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) calculation code for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation code both for structure and fault slip using (1) & (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. An unstructured FE non-linear seismic wave simulation code has been developed, which achieved physics-based urban earthquake simulation enhanced by 1.08 T DOF x 6.6 K time-steps. A high-fidelity FEM simulation code with a mesh generator has also been developed to calculate crustal deformation in and around Japan, with complicated surface topography and subducting plate geometry, at 1 km mesh resolution. The crustal deformation code has been further improved and achieved 2.05 T-DOF with 45 m resolution on the plate interface. This high-resolution analysis enables computation of the change of stress acting on the plate interface. Further, for inverse analyses, a waveform inversion code for modeling 3D crustal structure has been developed, and the high-fidelity FEM code has been improved to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. We are developing methods for forecasting the slip velocity variation on the plate interface. Although the prototype is for an elastic half-space model, we are applying it to 3D heterogeneous structure with the high-fidelity FE model. Furthermore, large-scale simulation codes for monitoring are being implemented on GPU clusters, and analysis tools are being developed to include other functions such as the examination of model errors.
Gu, Yuqi; Witter, Tobias; Livingston, Patty; Rao, Purnima; Varshney, Terry; Kuca, Tom; Dylan Bould, M
2017-12-01
As simulator fidelity (i.e., realism) increases from low to high, the simulator more closely resembles the real environment, but it also becomes more expensive. It is generally assumed that the use of high-fidelity simulators results in better learning; however, the effect of fidelity on learning non-technical skills (NTS) is unknown. This was a non-inferiority trial comparing the efficacy of high- vs low-fidelity simulators on learning NTS. Thirty-six postgraduate medical trainees were recruited for the trial. During the pre-test phase, the trainees were randomly assigned to manage a scenario using either a high-fidelity simulator (HFS) or a low-fidelity simulator (LFS), followed by expert debriefing. All trainees then underwent a video recorded post-test scenario on a HFS, and the NTS were assessed between the two groups. The primary outcome was the overall post-test Ottawa Global Rating Scale (OGRS), while controlling for overall pre-test OGRS scores. Non-inferiority between the LFS and HFS was based on a non-inferiority margin of greater than 1. For our primary outcome, the mean (SD) post-test overall OGRS score was not significantly different between the HFS and LFS groups after controlling for pre-test overall OGRS scores [3.8 (0.9) vs 4.0 (0.9), respectively; mean difference, 0.2; 95% confidence interval, -0.4 to 0.8; P = 0.48]. For our secondary outcomes, the post-test total OGRS score was not significantly different between the HFS and LFS groups after controlling for pre-test total OGRS scores (P = 0.33). There were significant improvements in mean overall (P = 0.01) and total (P = 0.003) OGRS scores from pre-test to post-test. There were no significant associations between postgraduate year (P = 0.82) and specialty (P = 0.67) on overall OGRS performance. This study suggests that low-fidelity simulators are non-inferior to the more costly high-fidelity simulators for teaching NTS to postgraduate medical trainees.
Toomey, Elaine; Matthews, James; Hurley, Deirdre A
2017-08-04
Despite an increasing awareness of the importance of fidelity of delivery within complex behaviour change interventions, it is often poorly assessed. This mixed methods study aimed to establish the fidelity of delivery of a complex self-management intervention and explore the reasons for these findings using a convergent/triangulation design. Feasibility trial of the Self-management of Osteoarthritis and Low back pain through Activity and Skills (SOLAS) intervention (ISRCTN49875385), delivered in primary care physiotherapy. 60 SOLAS sessions were delivered across seven sites by nine physiotherapists. Fidelity of delivery of prespecified intervention components was evaluated using (1) audio-recordings (n=60), direct observations (n=24) and self-report checklists (n=60) and (2) individual interviews with physiotherapists (n=9). Quantitatively, fidelity scores were calculated using percentage means and SD of components delivered. Associations between fidelity scores and physiotherapist variables were analysed using Spearman's correlations. Interviews were analysed using thematic analysis to explore potential reasons for fidelity scores. Integration of quantitative and qualitative data occurred at an interpretation level using triangulation. Quantitatively, fidelity scores were high for all assessment methods; with self-report (92.7%) consistently higher than direct observations (82.7%) or audio-recordings (81.7%). There was significant variation between physiotherapists' individual scores (69.8% - 100%). Both qualitative and quantitative data (from physiotherapist variables) found that physiotherapists' knowledge (Spearman's association at p=0.003) and previous experience (p=0.008) were factors that influenced their fidelity. The qualitative data also postulated participant-level (eg, individual needs) and programme-level factors (eg, resources) as additional elements that influenced fidelity. The intervention was delivered with high fidelity. This study contributes to the limited evidence regarding fidelity assessment methods within complex behaviour change interventions. The findings suggest a combination of quantitative methods is suitable for the assessment of fidelity of delivery. A mixed methods approach provided a more insightful understanding of fidelity and its influencing factors. ISRCTN49875385; Pre-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
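To illustrate the kind of quantitative fidelity scoring described above, the following sketch (Python, with entirely hypothetical data; the SOLAS checklists and scores are not reproduced here) computes per-session fidelity as the percentage of prespecified components delivered and tests its association with a physiotherapist-level variable via Spearman's correlation:

```python
# Hypothetical illustration of percentage-based fidelity scoring and a
# Spearman association test, mirroring the analysis described above.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# delivered[i, j] = 1 if prespecified component j was delivered in session i
delivered = rng.integers(0, 2, size=(60, 20))        # 60 sessions, 20 components
fidelity_pct = delivered.mean(axis=1) * 100          # per-session fidelity (%)
print(f"mean {fidelity_pct.mean():.1f}% (SD {fidelity_pct.std(ddof=1):.1f})")

# Hypothetical knowledge score of the physiotherapist delivering each session
knowledge = rng.normal(50, 10, size=60)
rho, p = spearmanr(knowledge, fidelity_pct)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```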
High-fidelity gates towards a scalable superconducting quantum processor
NASA Astrophysics Data System (ADS)
Chow, Jerry M.; Corcoles, Antonio D.; Gambetta, Jay M.; Rigetti, Chad; Johnson, Blake R.; Smolin, John A.; Merkel, Seth; Poletto, Stefano; Rozen, Jim; Rothwell, Mary Beth; Keefe, George A.; Ketchen, Mark B.; Steffen, Matthias
2012-02-01
We experimentally explore the implementation of high-fidelity gates on multiple superconducting qubits coupled to multiple resonators. Having demonstrated all-microwave single and two qubit gates with fidelities > 90% on multi-qubit single-resonator systems, we expand the application to qubits across two resonators and investigate qubit coupling in this circuit. The coupled qubit-resonators are building blocks towards two-dimensional lattice networks for the application of surface code quantum error correction algorithms.
Shen, Shuwei; Wang, Haili; Xue, Yue; Yuan, Li; Zhou, Ximing; Zhao, Zuhua; Dong, Erbao; Liu, Bin; Liu, Wendong; Cromeens, Barrett; Adler, Brent; Besner, Gail; Xu, Ronald X
2017-09-08
Preoperative assessment of tissue anatomy and accurate surgical planning is crucial in conjoined twin separation surgery. We developed a new method that combines three-dimensional (3D) printing, assembling, and casting to produce anatomic models of high fidelity for surgical planning. The relevant anatomic features of the conjoined twins were captured by computed tomography (CT), classified into five organ groups, and reconstructed as five computer models. Among these organ groups, the skeleton was produced by fused deposition modeling (FDM) using acrylonitrile-butadiene-styrene. For the other four organ groups, shell molds were prepared by FDM and cast with silica gel to simulate soft tissues, with contrast enhancement pigments added to simulate different CT and visual contrasts. The produced models were assembled, positioned firmly within a 3D printed shell mold simulating the skin boundary, and cast with transparent silica gel. The produced phantom was subjected to a further CT scan, which was compared with the patient data for fidelity evaluation. Further data analysis showed that the produced model reproduced the geometric features of the original CT data with an overall mean deviation of less than 2 mm, indicating the clinical potential of this method for surgical planning in conjoined twin separation surgery.
Assessing fidelity of delivery of smoking cessation behavioural support in practice.
Lorencatto, Fabiana; West, Robert; Christopherson, Charlotte; Michie, Susan
2013-04-04
Effectiveness of evidence-based behaviour change interventions is likely to be undermined by failure to deliver interventions as planned. Behavioural support for smoking cessation can be a highly cost-effective, life-saving intervention. However, in practice, outcomes are highly variable. Part of this may be due to variability in fidelity of intervention implementation. To date, there have been no published studies on this. The present study aimed to: evaluate a method for assessing fidelity of behavioural support; assess fidelity of delivery in two English Stop-Smoking Services; and compare the extent of fidelity according to session types, duration, individual practitioners, and component behaviour change techniques (BCTs). Treatment manuals and transcripts of 34 audio-recorded behavioural support sessions were obtained from two Stop-Smoking Services and coded into component BCTs using a taxonomy of 43 BCTs. Inter-rater reliability was assessed using percentage agreement. Fidelity was assessed by examining the proportion of BCTs specified in the manuals that were delivered in individual sessions. This was assessed by session type (i.e., pre-quit, quit, post-quit), duration, individual practitioner, and BCT. Inter-coder reliability was high (87.1%). On average, 66% of manual-specified BCTs were delivered per session (SD 15.3, range: 35% to 90%). In Service 1, average fidelity was highest for post-quit sessions (69%) and lowest for pre-quit (58%). In Service 2, fidelity was highest for quit-day (81%) and lowest for post-quit sessions (56%). Session duration was not significantly correlated with fidelity. Individual practitioner fidelity ranged from 55% to 78%. Individual manual-specified BCTs were delivered on average 63% of the time (SD 28.5, range: 0 to 100%). The extent to which smoking cessation behavioural support is delivered as specified in treatment manuals can be reliably assessed using transcripts of audiotaped sessions. This allows the investigation of the implementation of evidence-based practice in relation to smoking cessation, a first step in designing interventions to improve it. There are grounds for believing that fidelity in the English Stop-Smoking Services may be low and that routine monitoring is warranted.
Electromagnetic Modeling of Human Body Using High Performance Computing
NASA Astrophysics Data System (ADS)
Ng, Cho-Kuen; Beall, Mark; Ge, Lixin; Kim, Sanghoek; Klaas, Ottmar; Poon, Ada
Realistic simulation of electromagnetic wave propagation in the actual human body can expedite the investigation of the phenomenon of harvesting implanted devices using wireless powering coupled from external sources. The parallel electromagnetics code suite ACE3P developed at SLAC National Accelerator Laboratory is based on the finite element method for high fidelity accelerator simulation, which can be enhanced to model electromagnetic wave propagation in the human body. Starting with a CAD model of a human phantom that is characterized by a number of tissues, a finite element mesh representing the complex geometries of the individual tissues is built for simulation. Employing an optimal power source with a specific pattern of field distribution, the propagation and focusing of electromagnetic waves in the phantom has been demonstrated. Substantial speedup of the simulation is achieved by using multiple compute cores on supercomputers.
An Analysis of Performance Enhancement Techniques for Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, J. J.; Biswas, R.; Potsdam, M.; Strawn, R. C.; Biegel, Bryan (Technical Monitor)
2002-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement techniques on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
Machine learning bandgaps of double perovskites
Pilania, G.; Mannodi-Kanakkithodi, A.; Uberuaga, B. P.; ...
2016-01-19
The ability to make rapid and accurate predictions of bandgaps of double perovskites is of much practical interest for a range of applications. While quantum mechanical computations for high-fidelity bandgaps are enormously computation-time intensive and thus impractical in high throughput studies, informatics-based statistical learning approaches can be a promising alternative. Here we demonstrate a systematic feature-engineering approach and a robust learning framework for efficient and accurate predictions of electronic bandgaps of double perovskites. After evaluating a set of more than 1.2 million features, we identify lowest occupied Kohn-Sham levels and elemental electronegativities of the constituent atomic species as the most crucial and relevant predictors. The developed models are validated and tested using the best practices of data science and further analyzed to rationalize their prediction performance.
The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.
2012-04-01
Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies create a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends this methodology by presenting physics enhancements and numerical treatments which allow for an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
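The decomposition idea can be sketched in a few lines: if each transport segment is represented by a precomputed Green's function acting on a discretized energy spectrum, a full scenario reduces to matrix-vector products. The code below (Python, with hypothetical bin counts and random stand-in operators) illustrates the composition; it is a conceptual sketch, not the paper's transport machinery:

```python
# Conceptual sketch: composing a detector spectrum from per-segment
# Green's functions. Lower-triangular operators encode the fact that
# photons can only lose energy between bins (downscatter).
import numpy as np

n = 128                                   # energy bins (hypothetical)
rng = np.random.default_rng(1)
source = rng.random(n)                    # source emission spectrum
G_cargo = np.tril(rng.random((n, n)))     # transport through cargo
G_detector = np.tril(rng.random((n, n)))  # detector response function

# Detected spectrum: source propagated through cargo, then the detector.
detected = G_detector @ (G_cargo @ source)
print(detected.shape)  # (128,)
```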
Integrated multidisciplinary CAD/CAE environment for micro-electro-mechanical systems (MEMS)
NASA Astrophysics Data System (ADS)
Przekwas, Andrzej J.
1999-03-01
Computational design of MEMS involves several strongly coupled physical disciplines, including fluid mechanics, heat transfer, stress/deformation dynamics, electronics, electro/magneto statics, calorics, biochemistry and others. CFDRC is developing a new generation of multi-disciplinary CAD systems for MEMS using high-fidelity field solvers on unstructured, solution-adaptive grids for a full range of disciplines. The software system, ACE + MEMS, includes all essential CAD tools: geometry/grid generation for multi-discipline, multi-equation solvers; a GUI; tightly coupled, configurable 3D field solvers for FVM, FEM and BEM; and a 3D visualization/animation tool. The flow/heat transfer/calorics/chemistry equations are solved with an unstructured adaptive FVM solver, stress/deformation is computed with a FEM STRESS solver, and a FAST BEM solver is used to solve linear heat transfer, electro/magnetostatics and elastostatics equations on adaptive polygonal surface grids. Tight multidisciplinary coupling and automatic interoperability between the tools were achieved by designing a comprehensive database structure and APIs for complete model definition. The virtual model definition is implemented in the data transfer facility, a publicly available tool described in this paper. The paper presents an overall description of the software architecture and MEMS design flow in ACE + MEMS. It describes current status, ongoing effort and future plans for the software. The paper also discusses new concepts of a mixed-level and mixed-dimensionality capability in which 1D microfluidic networks are simulated concurrently with 3D high-fidelity models of discrete components.
WarpIV: In situ visualization and analysis of ion accelerator simulations
Rubel, Oliver; Loring, Burlen; Vay, Jean -Luc; ...
2016-05-09
The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. Supplemental material at https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.
NASA HPCC Technology for Aerospace Analysis and Design
NASA Technical Reports Server (NTRS)
Schulbach, Catherine H.
1999-01-01
The Computational Aerosciences (CAS) Project is part of NASA's High Performance Computing and Communications Program. Its primary goal is to accelerate the availability of high-performance computing technology to the US aerospace community, thus providing the US aerospace community with key tools necessary to reduce design cycle times and increase fidelity in order to improve the safety, efficiency and capability of future aerospace vehicles. A complementary goal is to hasten the emergence of a viable commercial market within the aerospace community for the benefit of the domestic computer hardware and software industry. The CAS Project selects representative aerospace problems (especially design) and uses them to focus efforts on advancing aerospace algorithms and applications, systems software, and computing machinery to demonstrate vast improvements in system performance and capability over the life of the program. Recent demonstrations have served to assess the benefits of possible performance improvements while reducing the risk of adopting high-performance computing technology. This talk will discuss past accomplishments in providing technology to the aerospace community, present efforts, and future goals. For example, the times to do full combustor and compressor simulations (of aircraft engines) have been reduced by factors of 320:1 and 400:1, respectively. While this has enabled new capabilities in engine simulation, the goal of an overnight, dynamic, multi-disciplinary, 3-dimensional simulation of an aircraft engine is still years away and will require new generations of high-end technology.
The Development and Deployment of a Virtual Unit Operations Laboratory
ERIC Educational Resources Information Center
Vaidyanath, Sreeram; Williams, Jason; Hilliard, Marcus; Wiesner, Theodore
2007-01-01
Computer-simulated experiments offer many benefits to engineering curricula in the areas of safety, cost, and flexibility. We report our experience in developing and deploying a computer-simulated unit operations laboratory, driven by the guiding principle of maximum fidelity to the physical lab. We find that, while the up-front investment in…
NASA Technical Reports Server (NTRS)
Liever, Peter A.; West, Jeffrey S.
2016-01-01
A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed for launch vehicle liftoff acoustic environment predictions. The framework couples the existing highly-scalable NASA production CFD code, Loci/CHEM, with a high-order accurate discontinuous Galerkin solver developed in the same production framework, Loci/THRUST, to accurately resolve and propagate acoustic physics across the entire launch environment. Time-accurate, Hybrid RANS/LES CFD modeling is applied for predicting the acoustic generation physics at the plume source, and a high-order accurate unstructured discontinuous Galerkin (DG) method is employed to propagate acoustic waves away from the source across large distances using high-order accurate schemes. The DG solver is capable of solving 2nd, 3rd, and 4th order Euler solutions for non-linear, conservative acoustic field propagation. Initial application testing and validation has been carried out against high resolution acoustic data from the Ares Scale Model Acoustic Test (ASMAT) series to evaluate the capabilities and production readiness of the CFD/CAA system to resolve the observed spectrum of acoustic frequency content. This paper presents results from this validation and outlines efforts to mature and improve the computational simulation framework.
NASA Technical Reports Server (NTRS)
Reuther, James; Jameson, Antony; Alonso, Juan Jose; Rimlinger, Mark J.; Saunders, David
1997-01-01
An aerodynamic shape optimization method that treats the design of complex aircraft configurations subject to high fidelity computational fluid dynamics (CFD), geometric constraints and multiple design points is described. The design process is greatly accelerated through the use of both control theory and distributed memory computer architectures. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on a higher order CFD method. In order to facilitate the integration of these high fidelity CFD approaches into future multi-disciplinary optimization (MDO) applications, new methods must be developed which are capable of simultaneously addressing complex geometries, multiple objective functions, and geometric design constraints. In our earlier studies, we coupled the adjoint based design formulations with unconstrained optimization algorithms and showed that the approach was effective for the aerodynamic design of airfoils, wings, wing-bodies, and complex aircraft configurations. In many of the results presented in these earlier works, geometric constraints were satisfied either by a projection into feasible space or by posing the design space parameterization such that it automatically satisfied constraints. Furthermore, with the exception of reference 9, where the second author initially explored the use of multipoint design in conjunction with adjoint formulations, our earlier works have focused on single point design efforts. Here we demonstrate that the same methodology may be extended to treat complete configuration designs subject to multiple design points and geometric constraints. Examples are presented for both transonic and supersonic configurations ranging from wing alone designs to complex configuration designs involving wing, fuselage, nacelles and pylons.
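For reference, the adjoint construction mentioned above can be stated compactly. With flow state w, design variables α, cost J(w, α), and discretized flow residual R(w, α) = 0, a single adjoint solve yields the full design gradient regardless of the number of design variables (standard control-theory result, written in our notation):

```latex
% One adjoint solve gives the complete gradient over all design variables.
\frac{dJ}{d\alpha} = \frac{\partial J}{\partial \alpha}
                   + \psi^{T} \frac{\partial R}{\partial \alpha},
\qquad
\left(\frac{\partial R}{\partial w}\right)^{\!T} \psi
      = -\left(\frac{\partial J}{\partial w}\right)^{\!T}.
```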
Fidelity and outcomes in six integrated dual disorders treatment programs.
Chandler, Daniel W
2011-02-01
Fidelity scores and outcomes were measured in six outpatient programs in California which implemented Integrated Dual Disorders Treatment (IDDT). Outcomes were measured for 1 year in four sites and 2 years in two sites; fidelity was assessed at 6 month intervals. Three of the six sites achieved high fidelity (at least a 4 on a 5 point fidelity scale) and three moderate fidelity (at least a 3). Retention in treatment, mental health functioning, stage of substance abuse treatment, abstinence, and psychiatric hospitalization were measured. Outcomes for individual programs were generally positive but not consistent within programs or across programs. Using pooled data in a longitudinal regression model with random effects at person level and adjustment of standard errors for clustering by site, change over time was not statistically significant for the primary outcomes. Fidelity scores had limited association with positive outcomes.
Commentary: Learning from Variations in Fidelity of Implementation
ERIC Educational Resources Information Center
Balu, Rekha; Doolittle, Fred
2016-01-01
The articles in this special issue discuss efforts to improve academic reading outcomes for students and ways to achieve high implementation fidelity of promising strategies. At times the authors discuss if--and how--strong fidelity is associated with strong outcomes and potentially even impacts (the difference between program and control group…
ERIC Educational Resources Information Center
Lillard, Angeline S.
2012-01-01
Research on the outcomes of Montessori education is scarce and results are inconsistent. One possible reason for the inconsistency is variations in Montessori implementation fidelity. To test whether outcomes vary according to implementation fidelity, we examined preschool children enrolled in high fidelity classic Montessori programs, lower…
Uncertainty quantification for PZT bimorph actuators
NASA Astrophysics Data System (ADS)
Bravo, Nikolas; Smith, Ralph C.; Crews, John
2018-03-01
In this paper, we discuss the development of a high-fidelity model for a PZT bimorph actuator used for micro-air vehicles, including the Robobee. We develop the model within the homogenized energy model (HEM) framework, which quantifies the nonlinear, hysteretic, and rate-dependent behavior inherent to PZT in dynamic operating regimes. We then discuss an inverse problem on the model and include local and global sensitivity analyses of the parameters in the high-fidelity model. Finally, we discuss the results of Bayesian inference and uncertainty quantification on the HEM.
CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox
NASA Astrophysics Data System (ADS)
Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano
2018-03-01
Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox, a single-objective global optimiser, and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Vinod
2017-05-05
High fidelity computational models of thermocline-based thermal energy storage (TES) were developed. The research goal was to advance the understanding of a single-tank, nanofluidized molten salt thermocline TES system under various concentrations and sizes of the suspended particles. Our objective was to utilize sensible heat storage operating with the least irreversibility by exploiting nanoscale physics. This was achieved by performing computational analysis of several storage designs, analyzing storage efficiency, and estimating cost effectiveness for the TES systems under a concentrating solar power (CSP) scheme using molten salt as the storage medium. Since TES is one of the most costly but important components of a CSP plant, an efficient TES system has the potential to make the electricity generated from solar technologies cost competitive with conventional sources of electricity.
Computational Aerodynamic Analysis of Offshore Upwind and Downwind Turbines
Zhao, Qiuying; Sheng, Chunhua; Afjeh, Abdollah
2014-01-01
Aerodynamic interactions of the model NREL 5 MW offshore horizontal axis wind turbines (HAWT) are investigated using a high-fidelity computational fluid dynamics (CFD) analysis. Four wind turbine configurations are considered: three-bladed upwind and downwind and two-bladed upwind and downwind configurations, which operate at two different rotor speeds of 12.1 and 16 RPM. In the present study, both steady and unsteady aerodynamic loads, such as the rotor torque, the blade hub bending moment, and the tower base bending moment, are evaluated in detail to provide an overall assessment of the different wind turbine configurations. Aerodynamic interactions between the rotor and tower are analyzed, including the rotor wake development downstream. The computational analysis provides insight into the aerodynamic performance of the upwind and downwind, two- and three-bladed horizontal axis wind turbines.
NASA Astrophysics Data System (ADS)
Wei, Hai-Rui; Deng, Fu-Guo
2014-12-01
Quantum logic gates are the key elements in quantum computing. Here we investigate the possibility of achieving a scalable and compact quantum computing based on stationary electron-spin qubits, by using the giant optical circular birefringence induced by quantum-dot spins in double-sided optical microcavities as a result of cavity quantum electrodynamics. We design the compact quantum circuits for implementing universal and deterministic quantum gates for electron-spin systems, including the two-qubit CNOT gate and the three-qubit Toffoli gate. They are compact and economic, and they do not require additional electron-spin qubits. Moreover, our devices have good scalability and are attractive as they both are based on solid-state quantum systems and the qubits are stationary. They are feasible with the current experimental technology, and both high fidelity and high efficiency can be achieved when the ratio of the side leakage to the cavity decay is low.
Numerical Simulation of a High-Lift Configuration with Embedded Fluidic Actuators
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Casalino, Damiano; Lin, John C.; Appelbaum, Jason
2014-01-01
Numerical simulations have been performed for a vertical tail configuration with deflected rudder. The suction surface of the main element of this configuration is embedded with an array of 32 fluidic actuators that produce oscillating sweeping jets. Such oscillating jets have been found to be very effective for flow control applications in the past. In the current paper, a high-fidelity computational fluid dynamics (CFD) code known as the PowerFLOW(Registered TradeMark) code is used to simulate the entire flow field associated with this configuration, including the flow inside the actuators. The computed results for the surface pressure and integrated forces compare favorably with measured data. In addition, numerical solutions predict the correct trends in forces with active flow control compared to the no control case. Effects of varying yaw and rudder deflection angles are also presented. In addition, computations have been performed at a higher Reynolds number to assess the performance of fluidic actuators at flight conditions.
NASA Astrophysics Data System (ADS)
Gochis, D. J.; Dugger, A. L.; Karsten, L. R.; Barlage, M. J.; Sampson, K. M.; Yu, W.; Pan, L.; McCreight, J. L.; Howard, K.; Busto, J.; Deems, J. S.
2017-12-01
Hydrometeorological processes vary over comparatively short length scales in regions of complex terrain such as the southern Rocky Mountains. Temperature, precipitation, wind and solar radiation can vary significantly across elevation gradients, terrain landforms and land cover conditions throughout the region. Capturing such variability in hydrologic models can necessitate so-called 'hyper-resolution' spatial meshes with effective element spacings of less than 100 m. However, it is often difficult to obtain high-quality meteorological forcings at those resolutions in such regions, which can result in significant uncertainty in fundamental hydrologic model inputs. In this study we examine the comparative influences of meteorological forcing data fidelity and spatial resolution on seasonal simulations of snowpack evolution, runoff and streamflow in a set of high mountain watersheds in southern Colorado. We utilize the operational NOAA National Water Model configuration of the community WRF-Hydro system as a baseline and compare against it additional model scenarios with differing specifications of meteorological forcing data, with and without topographic downscaling adjustments applied, with and without experimental high resolution radar-derived precipitation estimates, and with WRF-Hydro configurations of progressively finer spatial resolution. The results suggest that meteorological downscaling techniques significantly influence the spatial distributions of meltout and runoff timing. Hydrologic simulation skill is clearly sensitive to the use of radar-derived precipitation compared with coarser-resolution background precipitation analyses. Advantages and disadvantages of progressively higher resolution model configurations, both in terms of computational requirements and model fidelity, are also discussed.
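As an example of the topographic downscaling adjustments compared above, temperature is commonly corrected from the coarse forcing grid to fine terrain with a fixed lapse rate. The sketch below (Python; the constant lapse rate is an assumption, not the National Water Model's actual downscaling algorithm) shows the basic operation:

```python
# Lapse-rate temperature downscaling from a coarse forcing grid to a
# fine terrain grid (illustrative; constants are assumptions).
import numpy as np

LAPSE_RATE = 6.5e-3  # K per meter, standard environmental lapse rate

def downscale_temperature(t_coarse_k, z_coarse_m, z_fine_m):
    """Adjust coarse-grid temperature (K), already interpolated to the
    fine grid, for the elevation difference against the fine-grid DEM."""
    return t_coarse_k - LAPSE_RATE * (z_fine_m - z_coarse_m)

# A fine cell sitting 400 m above the coarse model's smoothed terrain
t = downscale_temperature(np.array([280.0]), np.array([2600.0]), np.array([3000.0]))
print(t)  # [277.4] K -- cooler at the higher elevation
```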
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heroux, Michael; Lethin, Richard
Programming models and environments play the essential role in high performance computing of enabling the conception, design, implementation and execution of science and engineering application codes. Programmer productivity is strongly influenced by the effectiveness of our programming models and environments, as is software sustainability, since our codes have lifespans measured in decades. The advent of new computing architectures, increased concurrency, concerns for resilience, and the increasing demands for high-fidelity, multi-physics, multi-scale and data-intensive computations mean that we have new challenges to address as part of our fundamental R&D requirements. Fortunately, we also have new tools and environments that make design, prototyping and delivery of new programming models easier than ever. The combination of new and challenging requirements and new, powerful toolsets enables significant synergies for the next generation of programming models and environments R&D. This report presents the topics discussed and results from the 2014 DOE Office of Science Advanced Scientific Computing Research (ASCR) Programming Models & Environments Summit, and subsequent discussions among the summit participants and contributors to topics in this report.
Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography.
Sidky, Emil Y; Kraemer, David N; Roth, Erin G; Ullberg, Christer; Reiser, Ingrid S; Pan, Xiaochuan
2014-10-03
One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden for IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data.
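A schematic form of the optimization problem described above, in our notation rather than the authors': with system matrix A, image x, projection data g, and a finite-difference operator D acting along the detector, the derivative-domain data fidelity term can be paired with an optional regularizer (the TV term here is an assumption for illustration):

```latex
% Schematic ROI-IIR objective: derivative-matched data fidelity plus an
% assumed total-variation regularizer on the reconstructed image.
\min_{x \geq 0} \; \tfrac{1}{2}\, \lVert D(Ax) - D(g) \rVert_2^2
  \; + \; \lambda \, \lVert x \rVert_{\mathrm{TV}}
```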
NASA Astrophysics Data System (ADS)
Lynam, Alfred E.
2015-04-01
Multiple-satellite-aided capture is a Δv-efficient technique for capturing a spacecraft into orbit at Jupiter. However, finding the times when the Galilean moons of Jupiter align such that three or four of them can be encountered in a single pass is difficult using standard astrodynamics algorithms such as Lambert's problem. In this paper, we present simple but powerful techniques that simplify the dynamics and geometry of the Galilean satellites so that many of these triple- and quadruple-satellite-aided capture sequences can be found quickly over an extended 60-year time period from 2020 to 2080. The techniques find many low-fidelity trajectories that could be used as initial guesses for future high-fidelity optimization. Results indicate the existence of approximately 3,100 unique triple-satellite-aided capture trajectories and 6 unique quadruple-satellite-aided capture trajectories during the 60-year time period. The entire search takes less than one minute of computational time.
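The simplified geometry search can be caricatured with circular, coplanar moon orbits: propagate each moon's longitude analytically and scan for epochs when the phasing is favorable. The toy sketch below (Python; epoch phases and the alignment tolerance are hypothetical, and real encounter geometry is far less restrictive than exact alignment) conveys why such a scan is cheap compared with solving Lambert problems:

```python
# Toy phasing scan over circular, coplanar Galilean moon orbits.
import numpy as np

periods = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155}  # days
phase0 = {"Io": 0.0, "Europa": 1.2, "Ganymede": 2.9}         # rad, hypothetical

def longitude(moon, t_days):
    return phase0[moon] + 2 * np.pi * t_days / periods[moon]

t = np.arange(0.0, 60 * 365.25, 0.05)   # 60-year scan, ~1.2-hour steps
wrap = lambda a: np.abs(np.angle(np.exp(1j * a)))  # wrapped angular separation
sep_ie = wrap(longitude("Io", t) - longitude("Europa", t))
sep_eg = wrap(longitude("Europa", t) - longitude("Ganymede", t))
hits = t[(sep_ie < 0.05) & (sep_eg < 0.05)]        # crude alignment windows
print(len(hits), "candidate epochs")
```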
High-fidelity data embedding for image annotation.
He, Shan; Kirovski, Darko; Wu, Min
2009-02-01
High fidelity is a demanding requirement for data hiding, especially for images with artistic or medical value. This correspondence proposes a high-fidelity image watermarking method for annotation with robustness to moderate distortion. To achieve high fidelity of the embedded image, we introduce a visual perception model that aims at quantifying the local tolerance to noise for arbitrary imagery. Based on this model, we embed two kinds of watermarks: a pilot watermark that indicates the existence of the watermark and an information watermark that conveys a payload of several dozen bits. The objective is to embed 32 bits of metadata into a single image in such a way that it is robust to JPEG compression and cropping. We demonstrate the effectiveness of the visual model and the application of the proposed annotation technology using a database of challenging photographic and medical images that contain a large number of smooth regions.
High Fidelity Tape Transfer Printing Based On Chemically Induced Adhesive Strength Modulation
NASA Astrophysics Data System (ADS)
Sim, Kyoseung; Chen, Song; Li, Yuhang; Kammoun, Mejdi; Peng, Yun; Xu, Minwei; Gao, Yang; Song, Jizhou; Zhang, Yingchun; Ardebili, Haleh; Yu, Cunjiang
2015-11-01
Transfer printing, a two-step process (i.e. picking up and printing) for heterogeneous integration, has been widely exploited for the fabrication of functional electronic systems. To ensure a reliable process, strong adhesion for picking up and weak or no adhesion for printing are required. However, it is challenging to meet these requirements with a switchable stamp adhesion. Here we introduce a simple, high fidelity process, namely tape transfer printing (TTP), enabled by chemically induced dramatic modulation in tape adhesive strength. We describe the working mechanism of the adhesion modulation that governs this process and demonstrate the method by high fidelity tape transfer printing of several types of materials and devices, including Si pellet arrays, photodetector arrays, and electromyography (EMG) sensors, from their preparation substrates to various alien substrates. High fidelity tape transfer printing of components onto curvilinear surfaces is also illustrated.
Takei, Nobuyuki; Yonezawa, Hidehiro; Aoki, Takao; Furusawa, Akira
2005-06-10
We experimentally demonstrate continuous-variable quantum teleportation beyond the no-cloning limit. We teleport a coherent state and achieve the fidelity of 0.70 +/- 0.02 that surpasses the no-cloning limit of 2/3. Surpassing the limit is necessary to transfer the nonclassicality of an input quantum state. By using our high-fidelity teleporter, we demonstrate entanglement swapping, namely, teleportation of quantum entanglement, as an example of transfer of nonclassicality.
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
2014-01-01
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
Exploiting current-generation graphics hardware for synthetic-scene generation
NASA Astrophysics Data System (ADS)
Tanner, Michael A.; Keen, Wayne A.
2010-04-01
Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (240 shader cores for current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. To take advantage of this potential requires algorithm implementations that are structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented. Included in this paper will be various language tradeoffs between conventional shader programming, Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL), including performance trades and possible pathways for future tool development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maunz, Peter; Wilhelm, Lukas
Qubits can be encoded in clock states of trapped ions. These states are well isolated from the environment, resulting in long coherence times [1] while enabling efficient high-fidelity qubit interactions mediated by the Coulomb coupled motion of the ions in the trap. Quantum states can be prepared with high fidelity and measured efficiently using fluorescence detection. State preparation and detection with 99.93% fidelity have been realized in multiple systems [1,2]. Single qubit gates have been demonstrated below rigorous fault-tolerance thresholds [1,3]. Two qubit gates have been realized with more than 99.9% fidelity [4,5]. Quantum algorithms have been demonstrated on systems of 5 to 15 qubits [6–8].
Driving many distant atoms into high-fidelity steady state entanglement via Lyapunov control.
Li, Chuang; Song, Jie; Xia, Yan; Ding, Weiqiang
2018-01-22
Based on Lyapunov control theory in closed and open systems, we propose a scheme to generate a W state of many distant atoms in the cavity-fiber-cavity system. In the closed system, the W state is generated successfully even when the coupling strength between the cavity and fiber is extremely weak. In the presence of atomic spontaneous emission or cavity and fiber decay, photon-measurement and quantum feedback approaches are proposed to improve the fidelity, enabling efficient generation of a high-fidelity W state in the case of large dissipation. Furthermore, time-optimal Lyapunov control is investigated to shorten the evolution time and improve the fidelity in open systems.
Multi-fidelity machine learning models for accurate bandgap predictions of solids
Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab
2016-12-28
Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. In using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange correlation functionals within density functional theory as two levels of fidelities, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.
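The autoregressive structure behind co-kriging can be illustrated in a few lines: model the high-fidelity response as a scaled low-fidelity response plus a GP-learned discrepancy. The sketch below (Python with scikit-learn and synthetic 1D functions; the actual framework uses variable-fidelity DFT bandgaps, not these toy models) demonstrates the idea:

```python
# Linear autoregressive multi-fidelity surrogate: f_high ~ rho*f_low + delta,
# with the discrepancy delta learned by Gaussian process regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f_low = lambda x: np.sin(8 * x)                    # cheap, biased model
f_high = lambda x: 1.2 * np.sin(8 * x) + 0.3 * x   # expensive "truth"

x_hi = np.linspace(0, 1, 8)[:, None]               # few high-fidelity samples
rho = np.polyfit(f_low(x_hi).ravel(), f_high(x_hi).ravel(), 1)[0]
delta = GaussianProcessRegressor(kernel=RBF(0.2)).fit(
    x_hi, f_high(x_hi).ravel() - rho * f_low(x_hi).ravel())

x_test = np.linspace(0, 1, 200)[:, None]
pred = rho * f_low(x_test).ravel() + delta.predict(x_test)
print(f"max abs error: {np.max(np.abs(pred - f_high(x_test).ravel())):.3f}")
```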
Al-Ghareeb, Amal Z; Cooper, Simon J
2016-01-01
This integrative review identified, critically appraised and synthesised the existing evidence on the barriers and enablers to using high-fidelity human patient simulator manikins (HPSMs) in undergraduate nursing education. In nursing education, specifically at the undergraduate level, a range of low- to high-fidelity simulations have been used as teaching aids. However, nursing educators encounter challenges when introducing new teaching methods or technology, despite the prevalence of high-fidelity HPSMs in nursing education. This integrative review adopted a systematic approach. Medline, CINAHL Plus, ERIC, PsycINFO, EMBASE, SCOPUS, Science Direct, the Cochrane database, the Joanna Briggs Institute, ProQuest, the California Simulation Alliance, the Simulation Innovation Resource Center and the search engine Google Scholar were searched. Keywords were selected and specific inclusion/exclusion criteria were applied. The review included all research designs for papers published between 2000 and 2015 that identified the barriers and enablers to using high-fidelity HPSMs in undergraduate nursing education. Studies were appraised using the Critical Appraisal Skills Programme criteria. Thematic analysis was undertaken and emergent themes were extracted. Twenty-one studies were included in the review. These studies adopted quasi-experimental, prospective non-experimental and descriptive designs. Ten barriers were identified, including "lack of time," "fear of technology" and "workload issues." Seven enablers were identified, including "faculty training," "administrative support" and a "dedicated simulation coordinator." Barriers to simulation relate specifically to the complex technologies inherent in high-fidelity HPSM approaches. Strategic approaches that support up-skilling and provide dedicated technological support may overcome these barriers. Copyright © 2015 Elsevier Ltd. All rights reserved.
A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352
2015-09-01
In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima; this is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus places an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis in the phase plane; then we design a constrained full-waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers of different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example with the smoothed Marmousi model.
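As a hedged illustration of the multi-level idea only (the paper's objectives are FGA wave-equation solves, replaced here by cheap analytic stand-ins, and all PSO constants are assumptions), the sketch below runs a standard global-best PSO on a low-fidelity objective first, then re-scores the swarm and polishes it with a high-fidelity objective.

```python
# Multi-level PSO sketch: many cheap low-fidelity iterations to escape
# local minima, then a few expensive high-fidelity iterations to refine.
import numpy as np

rng = np.random.default_rng(0)

def objective_low(x):   # coarse surrogate (stand-in for a coarse FGA solve)
    return np.sum(x**2) + 0.5 * np.sin(5 * x).sum()

def objective_high(x):  # refined objective (stand-in for a fine FGA solve)
    return np.sum(x**2)

def pso(objective, pos, vel, pbest, pbest_val, iters, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO loop operating on an existing swarm state."""
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return pos, vel, pbest, pbest_val, gbest

dim, n = 2, 30
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([objective_low(p) for p in pos])

# Level 1: cheap exploration with the low-fidelity objective ...
pos, vel, pbest, pbest_val, _ = pso(objective_low, pos, vel,
                                    pbest, pbest_val, iters=50)
# ... Level 2: re-score personal bests and polish at high fidelity.
pbest_val = np.array([objective_high(p) for p in pbest])
*_, gbest = pso(objective_high, pos, vel, pbest, pbest_val, iters=10)
print("approximate global minimizer:", gbest)
```

The key cost saving is that almost all objective evaluations happen at the cheap level; the expensive solver is only consulted once the swarm has collapsed toward promising basins.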
Effect of laser pulse shaping parameters on the fidelity of quantum logic gates.
Zaari, Ryan R; Brown, Alex
2012-09-14
The effect of varying parameters specific to laser pulse shaping instruments on the resulting fidelities of the ACNOT(1), NOT(2), and Hadamard(2) quantum logic gates is studied for the diatomic molecule ¹²C¹⁶O. These parameters include the frequency resolution, the number of frequency components, and the amplitude and phase at each frequency component. A time-domain analytic form of the original discretized frequency-domain laser pulse function is derived, providing a useful means to infer the resulting pulse shape from variations in the aforementioned parameters. We show that amplitude variation at each frequency component is a crucial requirement for optimal laser pulse shaping, whereas phase variation provides minimal contribution. We also show that high-fidelity laser pulses depend on the frequency resolution, while increasing the number of frequency components provides only a small incremental improvement to quantum gate fidelity. Analysis through the pulse area theorem confirms the resulting population dynamics for one- or two-frequency high-fidelity laser pulses and implies similar dynamics for more complex laser pulse shapes. The ability to produce high-fidelity laser pulses that provide both population control and global phase alignment is attributed largely to the natural evolution phase alignment of the qubits involved in the quantum logic gate operation.
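A hedged sketch of the kind of discretized pulse the abstract describes: a time-domain field assembled from a fixed frequency grid, with independent amplitude and phase masks A_k and φ_k per component. The grid spacing, center frequency, and mask values below are illustrative assumptions, not the paper's optimized pulses.

```python
# Build E(t) = sum_k A_k cos(omega_k t + phi_k) from a discretized
# frequency-domain representation.  The frequency resolution domega sets
# the effective pulse duration (~ 2*pi/domega); the masks A and phi are
# the quantities a pulse shaper would vary component by component.
import numpy as np

t = np.linspace(-500.0, 500.0, 4000)        # time grid (arbitrary units)
omega0, domega, n_comp = 0.25, 0.002, 11    # center freq., resolution, count
omegas = omega0 + domega * (np.arange(n_comp) - n_comp // 2)

A   = np.ones(n_comp)    # amplitude mask (varying this matters most,
phi = np.zeros(n_comp)   #  per the abstract; phase contributes little)

E = sum(A[k] * np.cos(omegas[k] * t + phi[k]) for k in range(n_comp))
```

Varying A while holding phi fixed, and vice versa, reproduces in miniature the comparison the abstract draws between amplitude-only and phase-only shaping.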
NASA Astrophysics Data System (ADS)
Shegog, Ross; Lazarus, Melanie M.; Murray, Nancy G.; Diamond, Pamela M.; Sessions, Nathalie; Zsigmond, Eva
2012-10-01
The transgenic mouse model is useful for studying the causes and potential cures of human genetic diseases. Exposing high school biology students to laboratory experience in developing transgenic animal models is logistically prohibitive. Computer-based simulation, however, offers this potential, in addition to the advantages of fidelity and reach. This study describes and evaluates a computer-based simulation that trains advanced placement high school science students in the laboratory protocols by which a transgenic mouse model is produced. A simulation module on preparing a gene construct in the molecular biology lab was evaluated using a randomized controlled design with advanced placement high school biology students in Mercedes, Texas (n = 44). Pre-post tests assessed procedural and declarative knowledge, time on task, and attitudes toward computers for learning and toward science careers. Students who used the simulation increased their procedural and declarative knowledge of molecular biology compared to those in the control condition (both p < 0.005). Significant increases continued to occur with additional use of the simulation (p < 0.001). Students in the treatment group also became more positive toward using computers for learning (p < 0.001). The simulation did not significantly affect attitudes toward science in general. Computer simulation of complex transgenic protocols thus has the potential to provide a "virtual" laboratory experience as an adjunct to conventional educational approaches.
Computational AeroAcoustics for Fan Noise Prediction
NASA Technical Reports Server (NTRS)
Envia, Ed; Hixon, Ray; Dyson, Rodger; Huff, Dennis (Technical Monitor)
2002-01-01
An overview of the current state-of-the-art in computational aeroacoustics as applied to fan noise prediction at NASA Glenn is presented. Results from recent modeling efforts using three-dimensional inviscid formulations in both frequency and time domains are summarized. In particular, the application of a frequency-domain method, called LINFLUX, to the computation of rotor-stator interaction tone noise is reviewed, and the influence of the background inviscid flow on the acoustic results is analyzed. It has been shown that the noise levels are very sensitive to the gradients of the mean flow near the surface and that the correct computation of these gradients for highly loaded airfoils is especially problematic using an inviscid formulation. The ongoing development of a finite-difference time-marching code based on a sixth-order compact scheme is also reviewed. Preliminary results from the nonlinear computation of a gust-airfoil interaction model problem demonstrate the fidelity and accuracy of this approach. Spatial and temporal features of the code, as well as its multi-block nature, are discussed. Finally, the latest results from an ongoing effort in the area of arbitrarily high-order methods are reviewed, and technical challenges associated with implementing correct high-order boundary conditions are discussed and possible strategies for addressing them are outlined.
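To make the "sixth-order compact scheme" concrete, here is a hedged sketch of the classic Padé-type sixth-order first-derivative scheme of the kind the abstract names (Lele-style), on a periodic grid. It illustrates the scheme class, not the NASA solver itself; the grid and test function are assumptions.

```python
# Sixth-order compact first derivative on a periodic grid:
#   alpha*f'_{i-1} + f'_i + alpha*f'_{i+1}
#     = a*(f_{i+1}-f_{i-1})/(2h) + b*(f_{i+2}-f_{i-2})/(4h)
# with alpha = 1/3, a = 14/9, b = 1/9.  The implicit (left-hand-side)
# coupling is what gives compact schemes spectral-like resolution from a
# narrow stencil, at the price of a banded solve per derivative.
import numpy as np
from scipy.linalg import solve_circulant

def compact6_deriv(f, h):
    alpha, a, b = 1.0 / 3.0, 14.0 / 9.0, 1.0 / 9.0
    n = f.size
    # First column of the circulant tridiagonal LHS [alpha, 1, alpha].
    col = np.zeros(n)
    col[0], col[1], col[-1] = 1.0, alpha, alpha
    rhs = (a * (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)
           + b * (np.roll(f, -2) - np.roll(f, 2)) / (4 * h))
    return solve_circulant(col, rhs)

# Spot check on a sine wave: the error should shrink like O(h^6).
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
h = x[1] - x[0]
err = np.max(np.abs(compact6_deriv(np.sin(x), h) - np.cos(x)))
print(f"max error: {err:.2e}")
```

In a multi-block production code the periodic circulant solve would be replaced by per-block tridiagonal solves with the boundary closures the abstract flags as the hard part.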