Verifying the Simulation Hypothesis via Infinite Nested Universe Simulacrum Loops
NASA Astrophysics Data System (ADS)
Sharma, Vikrant
2017-01-01
The simulation hypothesis proposes that local reality exists as a simulacrum within a hypothetical computer's dimension. More specifically, Bostrom's trilemma argues that the number of simulations an advanced 'posthuman' civilization could produce makes the proposition very likely. In this paper, a hypothetical method to verify the simulation hypothesis is discussed using infinite regression applied to a new type of infinite loop. Assign dimension n to any computer in our present reality, where dimension signifies the hierarchical level, within nested simulations, at which our reality exists. A computer simulating known reality would be dimension (n-1), and likewise a computer simulating an artificial reality, such as a video game, would be dimension (n+1). In this method, four key assumptions are made about the nature of the original computer dimension n. Summations show that regressing such a reality infinitely produces convergence, implying that verifying whether local reality is a grand simulation is feasible with adequate compute capability. The act of reaching said convergence point halts the simulation of local reality. Sensitivities to the four assumptions and their implications are discussed.
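The convergence claim above can be illustrated numerically. This is only a sketch under an illustrative assumption (not the paper's actual summations): each nested simulation at dimension (n-1) receives a fixed fraction r < 1 of its parent's compute capacity, so the total across infinitely many nested levels is a convergent geometric series.

```python
def total_nested_compute(c_top, r, levels):
    """Partial sum of compute capacity across `levels` nested simulations,
    assuming each level gets fraction r of its parent's capacity."""
    return sum(c_top * r**k for k in range(levels))

def converged_total(c_top, r):
    """Closed-form limit of the infinite regression (requires 0 <= r < 1)."""
    return c_top / (1.0 - r)
```

With r = 0.5, for example, the partial sums approach 2x the top-level capacity, so a finite bound exists even though the nesting is unbounded.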
Analyzing Interaction Patterns to Verify a Simulation/Game Model
ERIC Educational Resources Information Center
Myers, Rodney Dean
2012-01-01
In order for simulations and games to be effective for learning, instructional designers must verify that the underlying computational models being used have an appropriate degree of fidelity to the conceptual models of their real-world counterparts. A simulation/game that provides incorrect feedback is likely to promote misunderstanding and…
Verifiable fault tolerance in measurement-based quantum computation
NASA Astrophysics Data System (ADS)
Fujii, Keisuke; Hayashi, Masahito
2017-09-01
Quantum systems, in general, cannot be simulated efficiently by a classical computer, and hence are useful for solving certain mathematical problems and simulating quantum many-body systems. This also implies, unfortunately, that verification of the output of quantum systems is not trivial, since predicting the output is exponentially hard. As another problem, quantum systems are very sensitive to noise and thus need error correction. Here, we propose a framework for verification of the output of fault-tolerant quantum computation in a measurement-based model. In contrast to existing analyses of fault tolerance, we do not assume any noise model on the resource state; instead, an arbitrary resource state is tested by using only single-qubit measurements to verify whether or not the output of measurement-based quantum computation on it is correct. Verifiability is achieved by a constant number of repetitions of the original measurement-based quantum computation in appropriate measurement bases. Since full characterization of quantum noise is exponentially hard for large-scale quantum computing systems, our framework provides an efficient way to practically verify experimental quantum error correction.
An Analysis of Instruction-Cached SIMD Computer Architecture
1993-12-01
[Abstract unavailable; only fragments of the report's front matter survive: an ASSEMBLE / SIMULATE / SCHEDULE / VERIFY design flow, table-of-contents entries for cache block placement (Step 3), scheduling (Step 4), and storage (Step 5), and a scheduler appendix covering basic block definition.]
NASA Technical Reports Server (NTRS)
Kleb, William L.; Wood, William A.
2004-01-01
The computational simulation community is not routinely publishing independently verifiable tests to accompany new models or algorithms. A survey reveals that only 22% of new models published are accompanied by tests suitable for independently verifying the new model. As the community develops larger codes with increased functionality, and hence increased complexity in terms of the number of building block components and their interactions, it becomes prohibitively expensive for each development group to derive the appropriate tests for each component. Therefore, the computational simulation community is building its collective castle on a very shaky foundation of components with unpublished and unrepeatable verification tests. The computational simulation community needs to begin publishing component level verification tests before the tide of complexity undermines its foundation.
Flight simulation for flight control computer S/N 0104-1 (ASTP)
NASA Technical Reports Server (NTRS)
1975-01-01
Flight control computer (FCC) S/N 0104-1 has been designated the prime unit for the SA-210 launch vehicle. The results of the final flight simulation for FCC S/N 0104-1 are documented. These results verify satisfactory implementation of the design release and proper interfacing of the FCC with flight-type control sensor elements and a simulated thrust vector control system.
A Secure and Verifiable Outsourced Access Control Scheme in Fog-Cloud Computing.
Fan, Kai; Wang, Junxiong; Wang, Xin; Li, Hui; Yang, Yintang
2017-07-24
With the rapid development of big data and the Internet of Things (IoT), the number of networking devices and the data volume are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network, can effectively solve the bottleneck problems of data transmission and data storage. However, security and privacy challenges also arise in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices, and the computation results can be verified using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method. Finally, analysis and simulation results show that our scheme is both secure and highly efficient.
Park, Sung Hwan; Lee, Ji Min; Kim, Jong Shik
2013-01-01
An irregular performance of a mechanical-type constant power regulator is considered. In order to find the cause of an irregular discharge flow at the cut-off pressure area, modeling and numerical simulations are performed to observe the dynamic behavior of the internal parts of the constant power regulator system for a swashplate-type axial piston pump. The commercial numerical simulation software AMESim is applied to model the mechanical-type regulator with the hydraulic pump and to simulate its performance. The validity of the simulation model of the constant power regulator system is verified by comparing simulation results with experiments. To find the cause of the irregular performance, the behavior of main components such as the spool, sleeve, and counterbalance piston is investigated using computer simulation. A shape modification of the counterbalance piston is proposed to improve the undesirable performance of the mechanical-type constant power regulator, and the improvement is verified by computer simulation using AMESim.
Design of a massively parallel computer using bit serial processing elements
NASA Technical Reports Server (NTRS)
Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing
1995-01-01
A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.
Description and performance of the Langley differential maneuvering simulator
NASA Technical Reports Server (NTRS)
Ashworth, B. R.; Kahlbaum, W. M., Jr.
1973-01-01
The differential maneuvering simulator for simulating two aircraft or spacecraft operating in a differential mode is described. Tests made to verify that the system could provide the required simulated aircraft motions are given. The mathematical model which converts computed aircraft motions into the required motions of the various projector gimbals is described.
Programs for Testing Processor-in-Memory Computing Systems
NASA Technical Reports Server (NTRS)
Katz, Daniel S.
2006-01-01
The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performance of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine implementation tradeoffs. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other; pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
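The "verify the benchmarks themselves" idea rests on checking multithreaded output against a known single-threaded answer. A minimal sketch in that spirit, using Python threads rather than pthreads (all names hypothetical, not part of the NASA program suite):

```python
import threading

def multithreaded_increment(n_threads, n_iters):
    """Minimal correctness microbenchmark: several threads update shared
    state under a lock, and the result is checked against the known
    single-threaded answer n_threads * n_iters."""
    counter = 0
    lock = threading.Lock()

    def work():
        nonlocal counter
        for _ in range(n_iters):
            with lock:              # serialize the read-modify-write
                counter += 1

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Removing the lock would make the expected-value check fail intermittently, which is exactly the kind of defect such a microbenchmark series is built to expose.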
Automatic mathematical modeling for real time simulation system
NASA Technical Reports Server (NTRS)
Wang, Caroline; Purinton, Steve
1988-01-01
A methodology for automatic mathematical modeling and generation of simulation models is described. The models are verified by running in a test environment using standard profiles, with the results compared against known results. The major objective is to create a user-friendly environment for engineers to design, maintain, and verify their models, and to automatically convert the mathematical model into conventional code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation. It is written in LISP and MACSYMA and runs on a Symbolics 3670 Lisp machine. The program provides a friendly, well-organized environment for engineers to build a knowledge base of base equations and general information. It contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. The system is then able to automatically generate the model and FORTRAN code. A future goal, currently under development, is to download the FORTRAN code to a VAX/VMS system for conventional computation. The SSME mathematical model will be verified in a test environment and the solution compared with the real data profile. The use of artificial intelligence techniques has shown that the process of simulation modeling can be simplified.
Two-Level Verification of Data Integrity for Data Storage in Cloud Computing
NASA Astrophysics Data System (ADS)
Xu, Guangwei; Chen, Chunlin; Wang, Hongya; Zang, Zhuping; Pang, Mugen; Jiang, Ping
Data storage in cloud computing can save capital expenditure and relieve the burden of storage management for users. Since stored files may be lost or corrupted, many researchers focus on the verification of data integrity. However, massive numbers of users often bring large numbers of verifying tasks to the auditor. Moreover, users also need to pay extra fees for these verifying tasks beyond the storage fee. Therefore, we propose a two-level verification of data integrity to alleviate these problems. The key idea is to routinely verify the data integrity by users and to arbitrate challenges between the user and the cloud provider by the auditor according to the MACs and ϕ values. Extensive performance simulations show that the proposed scheme markedly decreases the auditor's verifying tasks and the ratio of wrong arbitrations.
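The two-level MAC idea can be sketched generically. This is not the paper's scheme (which also uses ϕ values); it is a hedged illustration of the pattern "user checks routinely, auditor arbitrates disputes," with all function names hypothetical:

```python
import hmac
import hashlib

def make_tag(key, data):
    # The user computes a MAC over the file before outsourcing it.
    return hmac.new(key, data, hashlib.sha256).digest()

def user_verify(key, data, tag):
    # Level one: the user routinely checks data returned by the cloud.
    return hmac.compare_digest(make_tag(key, data), tag)

def auditor_arbitrate(key, data, tag):
    # Level two: on a challenge, the auditor recomputes the MAC and
    # rules against the cloud if the returned data does not match.
    return "cloud honest" if user_verify(key, data, tag) else "cloud at fault"
```

Routine checks stay with the user, so the auditor is only invoked for the (rare) disputed cases, which is the load reduction the abstract describes.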
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
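The complex-variable approach used for the DYMORE structural sensitivities is a standard technique worth showing in miniature. A minimal sketch (generic, not the FUN3D/DYMORE implementation):

```python
def complex_step_derivative(f, x, h=1e-30):
    """Complex-step derivative: df/dx ~ Im(f(x + ih)) / h. Unlike a finite
    difference, there is no subtractive cancellation, so h can be made tiny
    and the result is accurate to machine precision for analytic f."""
    return f(complex(x, h)).imag / h
```

This is why complex-variable sensitivities serve as a trustworthy reference for verifying adjoint-based gradients: the step size needs no tuning.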
Performance of Electric Double-Layer Capacitor Simulators
NASA Astrophysics Data System (ADS)
Funabiki, Shigeyuki; Kodama, Shinsuke; Yamamoto, Masayoshi
This paper proposes a simulator of EDLC, which realizes the performance equivalent to electric double-layer capacitors (EDLCs). The proposed simulator consists of an electrolytic capacitor and a two-quadrant chopper working as a current source. Its operation principle is described in the first place. The voltage dependence of capacitance of EDLCs is taken into account. The performance of the proposed EDLC simulator is verified by computer simulations.
Boundary conditions for simulating large SAW devices using ANSYS.
Peng, Dasong; Yu, Fengqi; Hu, Jian; Li, Peng
2010-08-01
In this report, we propose improved substrate left and right boundary conditions for simulating SAW devices using ANSYS. Compared with the previous methods, the proposed method can greatly reduce computation time. Furthermore, the longer the distance from the first reflector to the last one, the more computation time can be reduced. To verify the proposed method, a design example is presented with device center frequency 971.14 MHz.
NASA Technical Reports Server (NTRS)
Poole, L. R.
1976-01-01
An initial attempt was made to verify the Langley Research Center and Virginia Institute of Marine Science mid-Atlantic continental-shelf wave refraction model. The model was used to simulate refraction occurring during a continental-shelf remote sensing experiment conducted on August 17, 1973. Simulated wave spectra compared favorably, in a qualitative sense, with the experimental spectra. However, most of the wave energy resided at frequencies higher than those for which refraction and shoaling effects were predicted. In addition, variations among the experimental spectra were so small that they were not considered statistically significant. In order to verify the refraction model, simulation must be performed in conjunction with a set of significantly varying spectra in which a considerable portion of the total energy resides at frequencies for which refraction and shoaling effects are likely.
Simulation of Coast Guard Vessel Traffic Service Operations by Model and Experiment
DOT National Transportation Integrated Search
1980-09-01
A technique for computer simulation of operations of U.S. Coast Guard Vessel Traffic Services is described and verified with data obtained in four field studies. Uses of the Technique are discussed and illustrated. A field experiment is described in ...
NASA Technical Reports Server (NTRS)
Stevens, N. J.
1979-01-01
Cases where the charged-particle environment acts on the spacecraft (e.g., spacecraft charging phenomena) and cases where a system on the spacecraft causes the interaction (e.g., high voltage space power systems) are considered. Both categories were studied in ground simulation facilities to understand the processes involved and to measure the pertinent parameters. Computer simulations are based on the NASA Charging Analyzer Program (NASCAP) code. Analytical models are developed in this code and verified against the experimental data. Extrapolation from the small test samples to space conditions are made with this code. Typical results from laboratory and computer simulations are presented for both types of interactions. Extrapolations from these simulations to performance in space environments are discussed.
Verifying a computational method for predicting extreme ground motion
Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.
2011-01-01
In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.
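The SCEC-USGS exercise's core operation is comparing independent codes' results on the same problem. A hedged sketch of one plausible agreement check (the tolerance and the pointwise-versus-peak normalization are illustrative choices, not the exercise's actual metric):

```python
def codes_agree(series_a, series_b, rel_tol=0.05):
    """Benchmark-style check: two codes' time histories 'agree' if every
    pointwise difference is small relative to the peak amplitude of the
    reference series."""
    peak = max(abs(x) for x in series_a)
    return all(abs(a - b) <= rel_tol * peak
               for a, b in zip(series_a, series_b))
```

In practice, rupture-code comparisons use more forgiving waveform-misfit metrics that tolerate small time shifts, but the peak-normalized form conveys the idea.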
Automatic mathematical modeling for real time simulation program (AI application)
NASA Technical Reports Server (NTRS)
Wang, Caroline; Purinton, Steve
1989-01-01
A methodology is described for automatic mathematical modeling and generating simulation models. The major objective was to create a user friendly environment for engineers to design, maintain, and verify their models; to automatically convert the mathematical models into conventional code for computation; and finally, to document the model automatically.
NASA Technical Reports Server (NTRS)
Schulte, Peter Z.; Moore, James W.
2011-01-01
The Crew Exploration Vehicle Parachute Assembly System (CPAS) project conducts computer simulations to verify that flight performance requirements on parachute loads and terminal rate of descent are met. Design of Experiments (DoE) provides a systematic method for variation of simulation input parameters. When implemented and interpreted correctly, a DoE study of parachute simulation tools indicates values and combinations of parameters that may cause requirement limits to be violated. This paper describes one implementation of DoE that is currently being developed by CPAS, explains how DoE results can be interpreted, and presents the results of several preliminary studies. The potential uses of DoE to validate parachute simulation models and verify requirements are also explored.
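The DoE workflow described above (systematic parameter variation, then flagging combinations that violate a limit) can be sketched with a full-factorial design. The factor names and the load model below are hypothetical, chosen only to make the example runnable, not CPAS parameters:

```python
from itertools import product

def full_factorial(levels):
    """Full-factorial design: every combination of the given factor levels."""
    names = list(levels)
    return [dict(zip(names, combo)) for combo in product(*levels.values())]

def violating_runs(design, simulate, limit):
    """Runs whose simulated response exceeds a requirement limit."""
    return [run for run in design if simulate(run) > limit]
```

A screening study would run `full_factorial` over the uncertain inputs, push each run through the parachute simulation, and inspect `violating_runs` for the parameter combinations that threaten the loads or rate-of-descent requirements.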
Semi-physical Simulation Platform of a Parafoil Nonlinear Dynamic System
NASA Astrophysics Data System (ADS)
Gao, Hai-Tao; Yang, Sheng-Bo; Zhu, Er-Lin; Sun, Qing-Lin; Chen, Zeng-Qiang; Kang, Xiao-Feng
2013-11-01
Focusing on the problems in simulation and experiment on a parafoil nonlinear dynamic system, such as limited methods, high cost, and low efficiency, we present a semi-physical simulation platform. It is designed by connecting physical components to a computer, remedying the defect that a pure computer simulation is entirely divorced from the real environment. The main components of the platform and their functions, as well as the simulation flow, are introduced. The feasibility and validity are verified through a simulation experiment. The experimental results show that the platform is significant for improving the quality of the parafoil fixed-point airdrop system, shortening the development cycle, and saving cost.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
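One standard building block for the numerical-approximation-error estimation proposed here is Richardson extrapolation from two grid resolutions. A generic sketch (not the authors' methodology):

```python
def richardson_correction(coarse, fine, p, ratio=2.0):
    """Estimated discretization error in `fine`, for a method of order p,
    from solutions on grids of spacing h (`coarse`) and h/ratio (`fine`).
    Adding the correction to `fine` yields a higher-order estimate."""
    return (fine - coarse) / (ratio**p - 1.0)
```

For a second-order method, if halving the grid spacing moves the answer by d, the remaining error in the fine-grid value is roughly d/3, which is exactly the kind of quantitative reliability information the abstract calls for.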
Computer simulation of multigrid body dynamics and control
NASA Technical Reports Server (NTRS)
Swaminadham, M.; Moon, Young I.; Venkayya, V. B.
1990-01-01
The objective is to set up and analyze benchmark problems on multibody dynamics and to verify the predictions of two multibody computer simulation codes. TREETOPS and DISCOS have been used to run three example problems: a one-degree-of-freedom spring-mass-dashpot system, an inverted pendulum system, and a triple pendulum. To study the dynamics and control interaction, an inverted planar pendulum with an external body force and a torsional control spring was modeled as a hinge-connected two-rigid-body system. TREETOPS and DISCOS were used to perform the time-history simulation of this problem. System state-space variables and their time derivatives from the two simulation codes were compared.
NASA Technical Reports Server (NTRS)
Lindsey, Tony; Pecheur, Charles
2004-01-01
Livingstone PathFinder (LPF) is a simulation-based computer program for verifying autonomous diagnostic software. LPF is designed especially to be applied to NASA s Livingstone computer program, which implements a qualitative-model-based algorithm that diagnoses faults in a complex automated system (e.g., an exploratory robot, spacecraft, or aircraft). LPF forms a software test bed containing a Livingstone diagnosis engine, embedded in a simulated operating environment consisting of a simulator of the system to be diagnosed by Livingstone and a driver program that issues commands and faults according to a nondeterministic scenario provided by the user. LPF runs the test bed through all executions allowed by the scenario, checking for various selectable error conditions after each step. All components of the test bed are instrumented, so that execution can be single-stepped both backward and forward. The architecture of LPF is modular and includes generic interfaces to facilitate substitution of alternative versions of its different parts. Altogether, LPF provides a flexible, extensible framework for simulation-based analysis of diagnostic software; these characteristics also render it amenable to application to diagnostic programs other than Livingstone.
Digital multishaker modal testing
NASA Technical Reports Server (NTRS)
Blair, M.; Craig, R. R., Jr.
1983-01-01
A review of several modal testing techniques is made, along with brief discussions of their advantages and limitations. A new technique is presented which overcomes many of the previous limitations. Several simulated experiments are included to verify the validity and accuracy of the new method. Conclusions are drawn from the simulation studies and recommendations for further work are presented. The complete computer code configured for the simulation study is presented.
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
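The priority-list part of the first heuristic can be illustrated with a much simpler greedy list scheduler. This sketch ignores interprocessor communication costs and omits the weighted bipartite matching and simulated annealing refinements; all names are hypothetical:

```python
def list_schedule(tasks, deps, cost, p):
    """Greedy list scheduling of precedence-constrained task modules on p
    identical processors: each ready task goes to the earliest-free
    processor, respecting its predecessors' finishing times."""
    finish = {}                # task -> completion time
    free_at = [0.0] * p        # next free time of each processor
    done = set()
    while len(done) < len(tasks):
        ready = [t for t in tasks
                 if t not in done and set(deps.get(t, ())) <= done]
        for t in sorted(ready):                  # simple priority order
            i = min(range(p), key=free_at.__getitem__)
            start = max(free_at[i],
                        max((finish[d] for d in deps.get(t, ())),
                            default=0.0))
            finish[t] = start + cost[t]
            free_at[i] = finish[t]
            done.add(t)
    return max(finish.values())                  # schedule makespan
```

The paper's objective (processor finishing time plus communication time) would add a communication term to `start` whenever a predecessor ran on a different processor.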
Performance characteristics of three-phase induction motors
NASA Technical Reports Server (NTRS)
Wood, M. E.
1977-01-01
An investigation into the characteristics of three phase, 400 Hz, induction motors of the general type used on aircraft and spacecraft is summarized. Results of laboratory tests are presented and compared with results from a computer program. Representative motors were both tested and simulated under nominal conditions as well as off nominal conditions of temperature, frequency, voltage magnitude, and voltage balance. Good correlation was achieved between simulated and laboratory results. The primary purpose of the program was to verify the simulation accuracy of the computer program, which in turn will be used as an analytical tool to support the shuttle orbiter.
Computing the apparent centroid of radar targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C.E.
1996-12-31
A high-frequency multibounce radar scattering code was used as a simulation platform for demonstrating an algorithm to compute the apparent radar centroid (ARC) of specific radar targets. To illustrate this simulation process, several target models were used. Simulation results for a sphere model were used to determine the errors of approximation associated with the simulation, verifying the process. The severity of glint-induced tracking errors was also illustrated using a model of an F-15 aircraft. It was shown, in a deterministic manner, that the ARC of a target can fall well outside its physical extent. Finally, the apparent radar centroid simulation based on a ray casting procedure is well suited for use on most massively parallel computing platforms and could lead to the development of a near real-time radar tracking simulation for applications such as endgame fuzing, survivability, and vulnerability analyses using specific radar targets and fuze algorithms.
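A simplified view of a radar centroid computation, for orientation only: the report's ARC comes from a multibounce ray-casting code and coherent returns, whereas this hedged sketch shows just the incoherent, power-weighted form over discrete point scatterers (all names hypothetical):

```python
def apparent_radar_centroid(scatterers):
    """Power-weighted centroid of point scatterers given as (rcs, (x, y, z)).
    Note this incoherent form always lies within the scatterers' convex
    hull; it is the coherent, phase-weighted return (glint) that can place
    the apparent centroid outside the target's physical extent."""
    total = sum(rcs for rcs, _ in scatterers)
    return tuple(sum(rcs * pos[i] for rcs, pos in scatterers) / total
                 for i in range(3))
```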
NASA Astrophysics Data System (ADS)
Park, Yong Min; Kim, Byeong Hee; Seo, Young Ho
2016-06-01
This paper presents a selective aluminum anodization technique for the fabrication of microstructures covered by nanoscale dome structures. It is possible to fabricate bulging microstructures, utilizing the different growth rates of anodic aluminum oxide in non-uniform electric fields, because the growth rate of anodic aluminum oxide depends on the intensity of electric field, or current density. After anodizing under a non-uniform electric field, bulging microstructures covered by nanostructures were fabricated by removing the residual aluminum layer. The non-uniform electric field induced by insulative micropatterns was estimated by computational simulations and verified experimentally. Utilizing computational simulations, the intensity profile of the electric field was calculated according to the ratio of height and width of the insulative micropatterns. To compare computational simulation results and experimental results, insulative micropatterns were fabricated using SU-8 photoresist. The results verified that the shape of the bottom topology of anodic alumina was strongly dependent on the intensity profile of the applied electric field, or current density. The one-step fabrication of nanostructure-covered microstructures can be applied to various fields, such as nano-biochip and nano-optics, owing to its simplicity and cost effectiveness.
Design Strategy for a Formally Verified Reliable Computing Platform
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Caldwell, James L.; DiVito, Ben L.
1991-01-01
This paper presents a high-level design for a reliable computing platform for real-time control applications. The design tradeoffs and analyses related to the development of a formally verified reliable computing platform are discussed. The design strategy advocated in this paper requires the use of techniques that can be completely characterized mathematically as opposed to more powerful or more flexible algorithms whose performance properties can only be analyzed by simulation and testing. The need for accurate reliability models that can be related to the behavior models is also stressed. Tradeoffs between reliability and voting complexity are explored. In particular, the transient recovery properties of the system are found to be fundamental to both the reliability analysis as well as the "correctness" models.
Beat frequency interference pattern characteristics study
NASA Technical Reports Server (NTRS)
Ott, J. H.; Rice, J. S.
1981-01-01
The frequency spectra and corresponding beat frequencies created by the relative motions between multiple Solar Power Satellites due to solar wind, lunar gravity, etc., were analyzed. The results were derived mathematically and verified through computer simulation. Frequency spectra plots were computer generated. Detailed computations were made for the following seven locations in the continental US: Houston, TX; Seattle, WA; Miami, FL; Chicago, IL; New York, NY; Los Angeles, CA; and Barberton, OH.
3D Space Radiation Transport in a Shielded ICRU Tissue Sphere
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
A computationally efficient 3DHZETRN code capable of simulating High Charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for a simple homogeneous shield object. Monte Carlo benchmarks were used to verify the methodology in slab and spherical geometry, and the 3D corrections were shown to provide significant improvement over the straight-ahead approximation in some cases. In the present report, the new algorithms with well-defined convergence criteria are extended to inhomogeneous media within a shielded tissue slab and a shielded tissue sphere and tested against Monte Carlo simulation to verify the solution methods. The 3D corrections are again found to more accurately describe the neutron and light ion fluence spectra as compared to the straight-ahead approximation. These computationally efficient methods provide a basis for software capable of space shield analysis and optimization.
Dudley, Peter N; Bonazza, Riccardo; Porter, Warren P
2013-07-01
Animal momentum and heat transfer analysis has historically used direct animal measurements or approximations to calculate drag and heat transfer coefficients. Research can now use modern 3D rendering and computational fluid dynamics software to simulate animal-fluid interactions. Key questions are the level of agreement between simulations and experiments, and how much they improve on classical approximations. In this paper we compared experimental and simulated heat transfer and drag calculations on a solid aluminum scale-model casting of an African elephant. We found good agreement between experimental and simulated data and large differences from classical approximations. We used the simulation results to calculate heat transfer and drag coefficients for the elephant geometry.
Computational knee ligament modeling using experimentally determined zero-load lengths.
Bloemker, Katherine H; Guess, Trent M; Maletsky, Lorin; Dodd, Kevin
2012-01-01
This study presents a subject-specific method of determining the zero-load lengths of the cruciate and collateral ligaments in computational knee modeling. Three cadaver knees were tested in a dynamic knee simulator. The cadaver knees also underwent manual envelope-of-motion testing to find their passive range of motion in order to determine the zero-load lengths for each ligament bundle. Computational multibody knee models were created for each knee, and model kinematics were compared to experimental kinematics for a simulated walk cycle. One-dimensional non-linear spring-damper elements were used to represent cruciate and collateral ligament bundles in the knee models. This study found that knee kinematics were highly sensitive to changes in the zero-load length. The results also suggest optimal methods for defining each of the ligament bundle zero-load lengths, regardless of the subject. These results verify the importance of the zero-load length when modeling the knee joint and verify that manual envelope-of-motion measurements can be used to determine the passive range of motion of the knee joint. It is also believed that the method described here for determining zero-load length can be used for in vitro or in vivo subject-specific computational models.
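One-dimensional non-linear spring elements of the kind described above are often written as a piecewise "toe region" force law that goes slack below the zero-load length. The sketch below is a generic Blankevoort-style model, not the study's actual implementation; the stiffness, the transition strain `eps_t`, and the function name are illustrative assumptions:

```python
def ligament_force(length, zero_load_length, stiffness=1000.0, eps_t=0.03):
    """Piecewise 'toe region' tension model for a 1D ligament bundle
    (Blankevoort-style): slack below the zero-load length, quadratic
    toe region, then linear elastic region."""
    eps = (length - zero_load_length) / zero_load_length  # engineering strain
    if eps <= 0.0:
        return 0.0                                   # slack bundle carries no load
    if eps <= 2.0 * eps_t:
        return 0.25 * stiffness * eps ** 2 / eps_t   # quadratic toe region
    return stiffness * (eps - eps_t)                 # linear elastic region
```

With this form the bundle carries no load until its length exceeds the zero-load length, which is one way to see why model kinematics are so sensitive to that single parameter.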
Metrics for comparing dynamic earthquake rupture simulations
Barall, Michael; Harris, Ruth A.
2014-01-01
Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near-fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes' results can be directly compared. One approach for checking whether dynamic rupture computer codes are working satisfactorily is to compare each code's results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
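One simple example of such a quantitative metric is a normalized RMS misfit between matching time histories produced by two codes. This is an illustrative sketch, not the specific metrics defined in the paper:

```python
import numpy as np

def rms_misfit(a, b):
    """Normalized RMS difference between two simulated time histories,
    e.g. slip rate at matching on-fault stations from two codes."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)) / np.sqrt(np.mean(a ** 2)))
```

Identical histories score 0, and a uniform 10% amplitude difference scores 0.1, so codes can be judged "in agreement" when the misfit falls below an agreed threshold.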
Life Span as the Measure of Performance and Learning in a Business Gaming Simulation
ERIC Educational Resources Information Center
Thavikulwat, Precha
2012-01-01
This study applies the learning curve method of measuring learning to participants of a computer-assisted business gaming simulation that includes a multiple-life-cycle feature. The study involved 249 participants. It verified the workability of the feature and estimated the participants' rate of learning at 17.4% for every doubling of experience.…
Statistical error in simulations of Poisson processes: Example of diffusion in solids
NASA Astrophysics Data System (ADS)
Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.
2016-08-01
Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid in the context of simulating Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
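The core of such an error estimate is the Poisson property that the standard deviation of an event count equals the square root of its mean, so the relative statistical error scales as 1/√N. A quick numerical check of that scaling (illustrative values; not the paper's expression for ion conductivity):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 400.0    # mean number of diffusion (hop) events per simulation run
runs = 2000    # number of independent simulation runs

counts = rng.poisson(lam, size=runs)            # events counted in each run
rel_err_empirical = counts.std(ddof=1) / counts.mean()
rel_err_analytic = 1.0 / np.sqrt(lam)           # Poisson: sigma / mean = 1 / sqrt(lambda)
```

With a mean of 400 events per run, the relative error predicted analytically is 5%, and the empirical spread over many runs matches it closely.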
NASA Technical Reports Server (NTRS)
Lahoti, G. D.; Akgerman, N.; Altan, T.
1978-01-01
Mild steel (AISI 1018) was selected as model cold-rolling material and Ti-6Al-4V and INCONEL 718 were selected as typical hot-rolling and cold-rolling alloys, respectively. The flow stress and workability of these alloys were characterized and friction factor at the roll/workpiece interface was determined at their respective working conditions by conducting ring tests. Computer-aided mathematical models for predicting metal flow and stresses, and for simulating the shape-rolling process were developed. These models utilize the upper-bound and the slab methods of analysis, and are capable of predicting the lateral spread, roll-separating force, roll torque and local stresses, strains and strain rates. This computer-aided design (CAD) system is also capable of simulating the actual rolling process and thereby designing roll-pass schedule in rolling of an airfoil or similar shape. The predictions from the CAD system were verified with respect to cold rolling of mild steel plates. The system is being applied to cold and hot isothermal rolling of an airfoil shape, and will be verified with respect to laboratory experiments under controlled conditions.
A computational workflow for designing silicon donor qubits
Humble, Travis S.; Ericson, M. Nance; Jakowski, Jacek; ...
2016-09-19
Developing devices that can reliably and accurately demonstrate the principles of superposition and entanglement is an on-going challenge for the quantum computing community. Modeling and simulation offer attractive means of testing early device designs and establishing expectations for operational performance. However, the complex integrated material systems required by quantum device designs are not captured by any single existing computational modeling method. We examine the development and analysis of a multi-staged computational workflow that can be used to design and characterize silicon donor qubit systems with modeling and simulation. Our approach integrates quantum chemistry calculations with electrostatic field solvers to perform detailed simulations of a phosphorus dopant in silicon. We show how atomistic details can be synthesized into an operational model for the logical gates that define quantum computation in this particular technology. In conclusion, the resulting computational workflow realizes a design tool for silicon donor qubits that can help verify and validate current and near-term experimental devices.
NASA Astrophysics Data System (ADS)
Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray-tracing technique has been developed. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws, including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space-partition algorithm based on a nearest-neighbor search, and the numerical accuracy is further enhanced by local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing-residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified with flight data and previous solutions obtained by traditional methods. A computational efficiency gain of nearly forty times is realized over existing simulation procedures.
NASA Technical Reports Server (NTRS)
Plankey, B.
1981-01-01
A computer program called ECPVER (Energy Consumption Program - Verification) was developed to simulate all energy loads for any number of buildings. The program computes simulated daily, monthly, and yearly energy consumption which can be compared with actual meter readings for the same time period. Such comparison can lead to validation of the model under a variety of conditions, which allows it to be used to predict future energy savings due to energy conservation measures. Predicted energy savings can then be compared with actual savings to verify the effectiveness of those energy conservation changes. This verification procedure is planned to be an important advancement in the Deep Space Network Energy Project, which seeks to reduce energy cost and consumption at all DSN Deep Space Stations.
Flowfield analysis of helicopter rotor in hover and forward flight based on CFD
NASA Astrophysics Data System (ADS)
Zhao, Qinghe; Li, Xiaodong
2018-05-01
The helicopter rotor flowfield is simulated in hover and forward flight based on Computational Fluid Dynamics (CFD). In the hover case, only one rotor is simulated with a periodic boundary condition in the rotating coordinate system, and the grid is fixed. In the non-lifting forward flight case, the full rotor is simulated in the inertial coordinate system and the whole grid moves rigidly. The dual-time implicit scheme is applied to simulate the unsteady flowfield on the moving grids. The k-ω turbulence model is employed to capture the effects of turbulence. To verify the solver, the flowfield around the Caradonna-Tung rotor is computed. The comparison shows good agreement between the numerical results and the experimental data.
Quantitative computer simulations of extraterrestrial processing operations
NASA Technical Reports Server (NTRS)
Vincent, T. L.; Nikravesh, P. E.
1989-01-01
The automation of a small, solid propellant mixer was studied. Temperature control is under investigation. A numerical simulation of the system is under development and will be tested using different control options. Control system hardware is currently being put into place. The construction of mathematical models and simulation techniques for understanding various engineering processes is also studied. Computer graphics packages were utilized for better visualization of the simulation results. The mechanical mixing of propellants is examined. Simulation of the mixing process is being done to study how one can control for chaotic behavior to meet specified mixing requirements. An experimental mixing chamber is also being built. It will allow visual tracking of particles under mixing. The experimental unit will be used to test ideas from chaos theory, as well as to verify simulation results. This project has applications to extraterrestrial propellant quality and reliability.
Radio Frequency Mass Gauging of Propellants
NASA Technical Reports Server (NTRS)
Zimmerli, Gregory A.; Vaden, Karl R.; Herlacher, Michael D.; Buchanan, David A.; VanDresar, Neil T.
2007-01-01
A combined experimental and computer simulation effort was conducted to measure radio frequency (RF) tank resonance modes in a dewar partially filled with liquid oxygen, and compare the measurements with numerical simulations. The goal of the effort was to demonstrate that computer simulations of a tank's electromagnetic eigenmodes can be used to accurately predict ground-based measurements, thereby providing a computational tool for predicting tank modes in a low-gravity environment. Matching the measured resonant frequencies of several tank modes with computer simulations can be used to gauge the amount of liquid in a tank, thus providing a possible method to gauge cryogenic propellant tanks in low-gravity. Using a handheld RF spectrum analyzer and a small antenna in a 46 liter capacity dewar for experimental measurements, we have verified that the four lowest transverse magnetic eigenmodes can be accurately predicted as a function of liquid oxygen fill level using computer simulations. The input to the computer simulations consisted of tank dimensions, and the dielectric constant of the fluid. Without using any adjustable parameters, the calculated and measured frequencies agree such that the liquid oxygen fill level was gauged to within 2 percent full scale uncertainty. These results demonstrate the utility of using electromagnetic simulations to form the basis of an RF mass gauging technology with the power to simulate tank resonance frequencies from arbitrary fluid configurations.
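As a much-simplified illustration of the physics behind this gauging approach, the TM010 resonance of an idealized cylindrical cavity shifts down as the relative permittivity of the fill increases. This sketch assumes a uniform dielectric fill; a real, partially filled tank requires a numerical eigenmode solve as described above, and the liquid-oxygen permittivity used here is a rough assumed value:

```python
import math

C0 = 299792458.0  # vacuum speed of light, m/s

def tm010_freq(radius_m, eps_r):
    """TM010 resonance of a closed cylindrical cavity uniformly filled
    with a dielectric of relative permittivity eps_r (idealized sketch)."""
    x01 = 2.404825557695773  # first zero of the Bessel function J0
    return C0 * x01 / (2.0 * math.pi * radius_m * math.sqrt(eps_r))
```

The frequency scales as 1/√eps_r, so tracking how measured mode frequencies fall as liquid replaces vapor is the basic mechanism that lets matched simulations infer fill level.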
Development of an aeroelastic methodology for surface morphing rotors
NASA Astrophysics Data System (ADS)
Cook, James R.
Helicopter performance capabilities are limited by maximum lift characteristics and vibratory loading. In high-speed forward flight, dynamic stall and transonic flow greatly increase the amplitude of vibratory loads. Experiments and computational simulations alike have indicated that a variety of active rotor control devices are capable of reducing vibratory loads. For example, periodic blade twist and flap excitation have been optimized to reduce vibratory loads in various rotors. Airfoil geometry can also be modified in order to increase lift coefficient, delay stall, or weaken transonic effects. To explore the potential benefits of active controls, computational methods are being developed for aeroelastic rotor evaluation, including coupling between computational fluid dynamics (CFD) and computational structural dynamics (CSD) solvers. In many contemporary CFD/CSD coupling methods it is assumed that the airfoil is rigid to reduce the interface by a single dimension. Some methods retain the conventional one-dimensional beam model while prescribing an airfoil shape to simulate active chord deformation. However, to simulate the actual response of a compliant airfoil it is necessary to include deformations that originate not only from control devices (such as piezoelectric actuators), but also from inertial forces, elastic stresses, and aerodynamic pressures. An accurate representation of the physics requires an interaction with a more complete representation of loads and geometry. A CFD/CSD coupling methodology capable of communicating three-dimensional structural deformations and a distribution of aerodynamic forces over the wetted blade surface has not yet been developed. In this research an interface is created within the Fully Unstructured Navier-Stokes (FUN3D) solver that communicates aerodynamic forces on the blade surface to the University of Michigan's Nonlinear Active Beam Solver (UM/NLABS -- referred to as NLABS in this thesis).
Interface routines are developed for transmission of force and deflection information to achieve an aeroelastic coupling updated at each time step. The method is validated first by comparing the integrated aerodynamic work at CFD and CSD nodes to verify work conservation across the interface. Second, the method is verified by comparing the sectional blade loads and deflections of a rotor in hover and in forward flight with experimental data. Finally, stability analyses for pitch/plunge flutter and camber flutter are performed with comprehensive CSD/low-order-aerodynamics and tightly coupled CFD/CSD simulations and compared to analytical solutions of Peters' thin airfoil theory to verify proper aeroelastic behavior. The effects of simple harmonic camber actuation are examined and compared to the response predicted by Peters' finite-state (F-S) theory. In anticipation of active rotor experiments inside enclosed facilities, computational simulations are performed to evaluate the capability of CFD for accurately simulating flow inside enclosed volumes. A computational methodology for accurately simulating a rotor inside a test chamber is developed to determine the influence of test facility components and turbulence modeling on performance predictions. A number of factors that influence the physical accuracy of the simulation, such as temporal resolution, grid resolution, and aeroelasticity, are also evaluated.
Computer model to simulate testing at the National Transonic Facility
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.
1995-01-01
A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.
Free-Swinging Failure Tolerance for Robotic Manipulators
NASA Technical Reports Server (NTRS)
English, James
1997-01-01
Under this GSRP fellowship, software-based failure-tolerance techniques were developed for robotic manipulators. The focus was on failures characterized by the loss of actuator torque at a joint, called free-swinging failures. The research results spanned many aspects of the free-swinging failure-tolerance problem, from preparing for an expected failure to discovery of postfailure capabilities to establishing efficient methods to realize those capabilities. Developed algorithms were verified using computer-based dynamic simulations, and these were further verified using hardware experiments at Johnson Space Center.
A method for the computational modeling of the physics of heart murmurs
NASA Astrophysics Data System (ADS)
Seo, Jung Hee; Bakhshaee, Hani; Garreau, Guillaume; Zhu, Chi; Andreou, Andreas; Thompson, William R.; Mittal, Rajat
2017-05-01
A computational method for direct simulation of the generation and propagation of blood flow induced sounds is proposed. This computational hemoacoustic method is based on the immersed boundary approach and employs high-order finite difference methods to resolve wave propagation and scattering accurately. The current method employs a two-step, one-way coupled approach for the sound generation and its propagation through the tissue. The blood flow is simulated by solving the incompressible Navier-Stokes equations using the sharp-interface immersed boundary method, and the three-dimensional elastic wave equations governing the generation and propagation of the murmur are resolved with a high-order, immersed-boundary-based finite-difference method in the time domain. The proposed method is applied to a model problem of an aortic stenosis murmur, and the simulation results are verified and validated by comparison with known solutions as well as experimental measurements. The murmur propagation in a realistic model of a human thorax is also simulated using the computational method. The roles of hemodynamics and elastic wave propagation in the murmur are discussed based on the simulation results.
NASA Astrophysics Data System (ADS)
Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu
2015-07-01
The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Starting from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high-performance code for massively parallel computers that greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure, and implemented the data initialization and exchange between the computing nodes and the core solving module using hybrid parallel iterative and direct solvers. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing with those from sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results demonstrate the enhanced performance achieved with THC-MP on parallel computing facilities.
Matsuoka, Yu; Shimizu, Kazuyuki
2013-10-20
It is quite important to understand the basic principles embedded in the main metabolism for the interpretation of fermentation data. For this, it may be useful to understand the regulation mechanism based on a systems biology approach. In the present study, we considered perturbation analysis together with computer simulation based on models which include the effects of global regulators on pathway activation in the main metabolism of Escherichia coli. The main focus is the acetate overflow metabolism and the co-fermentation of multiple carbon sources. The perturbation analysis was first made to understand the nature of the feed-forward loop formed by the activation of Pyk by FDP (F1,6BP), and the feed-back loop formed by the inhibition of Pfk by PEP in the glycolysis. Those, together with the effect of the transcription factor Cra caused by the FDP level, affected the glycolysis activity. The PTS (phosphotransferase system) acts as a feed-back system by repressing the glucose uptake rate in response to an increase in the glucose uptake rate. It was also shown that an increased PTS flux (or glucose consumption rate) causes the PEP/PYR ratio to decrease, and EIIA-P, Cya, and cAMP-Crp to decrease, where cAMP-Crp in turn repressed the TCA cycle and more acetate was formed. This was further verified by detailed computer simulation. In the case of multiple carbon sources such as glucose and xylose, computer simulation showed sequential utilization of carbon sources for the wild type, while co-consumption of multiple carbon sources at slow consumption rates was observed for the ptsG mutant; this was verified by experiments. Moreover, the effect of a specific gene knockout such as Δpyk on the metabolic characteristics was also investigated based on the computer simulation.
NASA Astrophysics Data System (ADS)
Kudryavtsev, Alexey N.; Kashkovsky, Alexander V.; Borisov, Semyon P.; Shershnev, Anton A.
2017-10-01
In the present work a computer code, RCFS, for the numerical simulation of chemically reacting compressible flows on hybrid CPU/GPU supercomputers is developed. It solves the 3D unsteady Euler equations for multispecies chemically reacting flows in general curvilinear coordinates using shock-capturing TVD schemes. Time advancement is carried out using explicit Runge-Kutta TVD schemes. The program implementation uses the CUDA application programming interface to perform GPU computations. Data are distributed among GPUs via a domain decomposition technique. The developed code is verified on a number of test cases, including supersonic flow over a cylinder.
NASA Technical Reports Server (NTRS)
Milner, E. J.; Krosel, S. M.
1977-01-01
Techniques are presented for determining the elements of the A, B, C, and D state variable matrices for systems simulated on an EAI Pacer 100 hybrid computer. An automated procedure systematically generates disturbance data necessary to linearize the simulation model and stores these data on a floppy disk. A separate digital program verifies this data, calculates the elements of the system matrices, and prints these matrices appropriately labeled. The partial derivatives forming the elements of the state variable matrices are approximated by finite difference calculations.
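The finite-difference approximation of the state-variable matrices can be sketched generically. This central-difference version is an illustration, not the hybrid-computer procedure itself; the function names are assumptions, and `x0`, `u0` are expected as NumPy arrays:

```python
import numpy as np

def linearize(f, g, x0, u0, eps=1e-6):
    """Central-difference linearization of x' = f(x, u), y = g(x, u)
    about the operating point (x0, u0); returns A, B, C, D."""
    def jacobian(func, wrt_x):
        base = np.asarray(func(x0, u0), dtype=float)
        z0 = x0 if wrt_x else u0
        J = np.zeros((base.size, z0.size))
        for j in range(z0.size):
            dz = np.zeros_like(z0, dtype=float)
            dz[j] = eps  # disturb one state or input at a time
            if wrt_x:
                hi, lo = func(x0 + dz, u0), func(x0 - dz, u0)
            else:
                hi, lo = func(x0, u0 + dz), func(x0, u0 - dz)
            J[:, j] = (np.asarray(hi, float) - np.asarray(lo, float)) / (2.0 * eps)
        return J
    return jacobian(f, True), jacobian(f, False), jacobian(g, True), jacobian(g, False)
```

For a system that is already linear, the central differences recover the matrices essentially exactly, which makes a convenient self-check of the procedure.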
NASA Technical Reports Server (NTRS)
Radespiel, Rolf; Hemsch, Michael J.
2007-01-01
The complexity of modern military systems, as well as the cost and difficulty associated with experimentally verifying system and subsystem designs, makes the use of high-fidelity, physics-based simulation a future alternative for design and development. The predictive ability of such simulations, including computational fluid dynamics (CFD) and computational structural mechanics (CSM), has matured significantly. However, for numerical simulations to be used with confidence in design and development, quantitative measures of uncertainty must be available. The AVT 147 Symposium was established to compile state-of-the-art methods of assessing computational uncertainty, to identify future research and development needs associated with these methods, and to present examples of how these needs are being addressed and how the methods are being applied. Papers were solicited that address uncertainty estimation associated with high-fidelity, physics-based simulations. The solicitation included papers that identify sources of error and uncertainty in numerical simulation from either the industry perspective or from the disciplinary or cross-disciplinary research perspective. Examples from the industry perspective were to include how computational uncertainty methods are used to reduce system risk in various stages of design or development.
Explicit finite-difference simulation of optical integrated devices on massive parallel computers.
Sterkenburgh, T; Michels, R M; Dress, P; Franke, H
1997-02-20
An explicit method for the numerical simulation of optical integrated circuits by means of the finite-difference time-domain (FDTD) method is presented. This method, based on an explicit solution of Maxwell's equations, is well established in microwave technology. Although the simulation areas are small, we verified the behavior of three problems of interest, especially nonparaxial ones, that exhibit typical aspects of integrated optical devices. Because numerical losses are within acceptable limits, we suggest the use of the FDTD method to achieve promising quantitative simulation results.
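In one dimension and normalized units, the explicit FDTD scheme reduces to a leapfrog pair of array updates. The sketch below is a free-space toy at the "magic" Courant number S = 1 with a soft Gaussian source, not the integrated-optics configuration simulated in the paper; grid size and source parameters are illustrative:

```python
import numpy as np

def fdtd_1d(nz=400, steps=200, src=50):
    """Minimal 1D FDTD (Yee) leapfrog for Ez/Hy in free space,
    normalized units, Courant number S = 1."""
    ez = np.zeros(nz)
    hy = np.zeros(nz)
    for t in range(steps):
        hy[:-1] += ez[1:] - ez[:-1]                   # H half-step update
        ez[1:] += hy[1:] - hy[:-1]                    # E half-step update
        ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return ez
```

Because the explicit stencil has a finite numerical domain of dependence, no field can appear farther than one cell per time step from the source, which is a handy sanity check on the update loop.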
A model for the neural control of pineal periodicity
NASA Astrophysics Data System (ADS)
de Oliveira Cruz, Frederico Alan; Soares, Marilia Amavel Gomes; Cortez, Celia Martins
2016-12-01
The aim of this work was to verify whether a computational model associating the synchronization dynamics of coupled oscillators with a set of synaptic transmission equations would be able to simulate the control of the pineal gland by the complex neural pathway that connects the retina to this gland. Results from the simulations showed that the frequency and temporal firing patterns were in the range of values found in the literature.
Jamming protection of spread spectrum RFID system
NASA Astrophysics Data System (ADS)
Mazurek, Gustaw
2006-10-01
This paper presents a new transform-domain processing algorithm for rejection of narrowband interference in RFID/DS-CDMA systems. The performance of the proposed algorithm has been verified via computer simulations, and implementation issues are discussed. The algorithm can be implemented in FPGA or DSP technology.
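Transform-domain excision of narrowband interference, as described above, typically transforms a signal block, nulls the bins that exceed an adaptive threshold, and transforms back. A sketch under those assumptions (the median-based threshold rule here is illustrative, not taken from the paper):

```python
import numpy as np

def reject_narrowband(signal, threshold_factor=4.0):
    """Transform-domain excision: null FFT bins far above the median magnitude.

    A narrowband jammer concentrates its energy in a few bins, while a
    spread-spectrum signal spreads its energy across all bins.
    """
    spectrum = np.fft.fft(signal)
    mag = np.abs(spectrum)
    threshold = threshold_factor * np.median(mag)  # robust to a few huge bins
    spectrum[mag > threshold] = 0.0                # excise interference bins
    return np.real(np.fft.ifft(spectrum))
```

Excising a couple of bins removes only a tiny fraction of the wideband signal's energy, while removing most of the jammer's.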
Measurement-only verifiable blind quantum computing with quantum input verification
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki
2016-10-01
Verifiable blind quantum computing is a secure delegated quantum computing where a client with a limited quantum technology delegates her quantum computing to a server who has a universal quantum computer. The client's privacy is protected (blindness), and the correctness of the computation is verifiable by the client despite her limited quantum technology (verifiability). There are mainly two types of protocols for verifiable blind quantum computing: the protocol where the client has only to generate single-qubit states and the protocol where the client needs only the ability of single-qubit measurements. The latter is called the measurement-only verifiable blind quantum computing. If the input of the client's quantum computing is a quantum state, whose classical efficient description is not known to the client, there was no way for the measurement-only client to verify the correctness of the input. Here we introduce a protocol of measurement-only verifiable blind quantum computing where the correctness of the quantum input is also verifiable.
Landázuri, Andrea C.; Sáez, A. Eduardo; Anthony, T. Renée
2016-01-01
This work presents fluid flow and particle trajectory simulation studies to determine the aspiration efficiency of a horizontally oriented occupational air sampler using computational fluid dynamics (CFD). Grid adaption and manual scaling of the grids were applied to two sampler prototypes based on a 37-mm cassette. The standard k–ε model was used to simulate the turbulent air flow and a second-order streamline-upwind discretization scheme was used to stabilize convective terms of the Navier–Stokes equations. Successively scaled grids for each configuration were created manually and by means of grid adaption using the velocity gradient in the main flow direction. Solutions were verified to assess iterative convergence, grid independence and monotonic convergence. Particle aspiration efficiencies determined for both prototype samplers were indistinguishable, indicating that the porous filter does not play a noticeable role in particle aspiration. The results show that grid adaption is a powerful tool for refining specific regions that require a high level of detail, thereby better resolving the flow. It was verified that adaptive grids provided a higher number of locations with monotonic convergence than the manual grids and required the least computational effort. PMID:26949268
Workflow of the Grover algorithm simulation incorporating CUDA and GPGPU
NASA Astrophysics Data System (ADS)
Lu, Xiangwen; Yuan, Jiabin; Zhang, Weiwei
2013-09-01
The Grover quantum search algorithm, one of only a few representative quantum algorithms, can speed up many classical algorithms that use search heuristics. No true quantum computer has yet been developed. For the present, simulation is one effective means of verifying the search algorithm. In this work, we focus on the simulation workflow using a compute unified device architecture (CUDA). Two simulation workflow schemes are proposed. These schemes combine the characteristics of the Grover algorithm and the parallelism of general-purpose computing on graphics processing units (GPGPU). We also analyzed the optimization of memory space and memory access from this perspective. We implemented four programs on CUDA to evaluate the performance of the schemes and optimizations. Through experimentation, we analyzed the organization of threads suited to Grover algorithm simulations, compared the storage costs of the four programs, and validated the effectiveness of the optimization. Experimental results also showed that the best-performing program on CUDA outperformed the serial libquantum program on a CPU with a speedup of up to 23 times (12 times on average), depending on the scale of the simulation.
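As a reference point for such workflows, Grover's algorithm can be simulated classically with a dense state vector; the two operations per iteration (oracle phase flip, inversion about the mean) are exactly what a CUDA kernel would parallelize across amplitudes. A serial NumPy sketch:

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Dense state-vector simulation of Grover's search for one marked item."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition
    for _ in range(int(np.pi / 4 * np.sqrt(N))):  # ~ (pi/4) * sqrt(N) iterations
        state[marked] *= -1.0                     # oracle: phase-flip the target
        state = 2.0 * state.mean() - state        # diffusion: invert about the mean
    return int(np.argmax(state ** 2)), float(state[marked] ** 2)
```

The memory cost is the 2^n-entry amplitude array, which is why storage layout and memory access dominate GPU implementations of this simulation.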
Virtual gonio-spectrophotometer for validation of BRDF designs
NASA Astrophysics Data System (ADS)
Mihálik, Andrej; Ďurikovič, Roman
2011-10-01
Measurement of the appearance of an object consists of a group of measurements to characterize the color and surface finish of the object. This group of measurements involves the spectral energy distribution of propagated light measured in terms of reflectance and transmittance, and the spatial energy distribution of that light measured in terms of the bidirectional reflectance distribution function (BRDF). In this article we present the virtual gonio-spectrophotometer, a device that measures flux (power) as a function of illumination and observation. Virtual gonio-spectrophotometer measurements allow the determination of the scattering profile of specimens that can be used to verify the physical characteristics of the computer model used to simulate the scattering profile. Among the characteristics that we verify is the energy conservation of the computer model. A virtual gonio-spectrophotometer is utilized to find the correspondence between industrial measurements obtained from gloss meters and the parameters of a computer reflectance model.
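One energy-conservation check a virtual gonio-spectrophotometer can perform is that the directional-hemispherical reflectance (albedo) of the BRDF model never exceeds 1. A Monte Carlo sketch using cosine-weighted hemisphere sampling (the sampling scheme is an assumption for illustration, not taken from the article):

```python
import numpy as np

def albedo(brdf, n_samples=200_000, seed=0):
    """Monte Carlo directional-hemispherical reflectance for normal incidence.

    Cosine-weighted sampling has pdf(theta, phi) = cos(theta) / pi, so the
    estimator of the integral of brdf * cos(theta) over the hemisphere
    reduces to pi * mean(brdf).
    """
    rng = np.random.default_rng(seed)
    u1, u2 = rng.random(n_samples), rng.random(n_samples)
    theta = np.arccos(np.sqrt(1.0 - u1))  # cosine-weighted polar angle
    phi = 2.0 * np.pi * u2                # uniform azimuth
    return np.pi * np.mean(brdf(theta, phi))

# A Lambertian BRDF with reflectance 0.8 conserves energy (albedo <= 1).
lambertian = lambda theta, phi: 0.8 / np.pi
```

An albedo estimate above 1 for any incidence direction flags a non-physical reflectance model.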
VERIFICATION OF THE HYDROLOGIC EVALUATION OF LANDFILL PERFORMANCE (HELP) MODEL USING FIELD DATA
The report describes a study conducted to verify the Hydrologic Evaluation of Landfill Performance (HELP) computer model using existing field data from a total of 20 landfill cells at 7 sites in the United States. Simulations using the HELP model were run to compare the predicted...
Combining high performance simulation, data acquisition, and graphics display computers
NASA Technical Reports Server (NTRS)
Hickman, Robert J.
1989-01-01
Issues involved in the continuing development of an advanced simulation complex are discussed. This approach provides the capability to perform the majority of tests on advanced systems, non-destructively. The controlled test environments can be replicated to examine the response of the systems under test to alternative treatments of the system control design, or test the function and qualification of specific hardware. Field tests verify that the elements simulated in the laboratories are sufficient. The digital computer is hosted by a Digital Equipment Corp. MicroVAX computer with an Aptec Computer Systems Model 24 I/O computer performing the communication function. An Applied Dynamics International AD100 performs the high speed simulation computing and an Evans and Sutherland PS350 performs on-line graphics display. A Scientific Computer Systems SCS40 acts as a high performance FORTRAN program processor to support the complex by generating numerous large files from programs coded in FORTRAN that are required for the real time processing. Four programming languages are involved in the process: FORTRAN, ADSIM, ADRIO, and STAPLE. FORTRAN is employed on the MicroVAX host to initialize and terminate the simulation runs on the system. The generation of the data files on the SCS40 also is performed with FORTRAN programs. ADSIM and ADRIO are used to program the processing elements of the AD100 and its IOCP processor. STAPLE is used to program the Aptec DIP and DIA processors.
Mathematic models for a ray tracing method and its applications in wireless optical communications.
Zhang, Minglun; Zhang, Yangan; Yuan, Xueguang; Zhang, Jinnan
2010-08-16
This paper presents a new ray tracing method, which contains a whole set of mathematical models, and its validity is verified by simulations. In addition, both theoretical analysis and simulation results show that the computational complexity of the method is much lower than that of previous ones. Therefore, the method can be used to rapidly calculate the impulse response of wireless optical channels for complicated systems.
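For context, the simplest building block of such channel models is the line-of-sight DC gain of a Lambertian source, a standard result in wireless optical communications (shown here as background; it is not necessarily the exact form used in this paper):

```python
import math

def los_channel_gain(area, distance, phi, psi, m=1, fov=math.pi / 2):
    """Line-of-sight DC channel gain H(0) for a Lambertian source of order m.

    area: receiver photodetector area, distance: source-receiver separation,
    phi: emission angle at the source, psi: incidence angle at the receiver.
    """
    if psi > fov:
        return 0.0  # outside the receiver field of view
    return ((m + 1) * area / (2.0 * math.pi * distance ** 2)
            * math.cos(phi) ** m * math.cos(psi))
```

A ray tracer builds the full impulse response by summing many such source-surface-receiver contributions binned by propagation delay.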
Free-Swinging Failure Tolerance for Robotic Manipulators. Degree awarded by Purdue Univ.
NASA Technical Reports Server (NTRS)
English, James
1997-01-01
Under this GSRP fellowship, software-based failure-tolerance techniques were developed for robotic manipulators. The focus was on failures characterized by the loss of actuator torque at a joint, called free-swinging failures. The research results spanned many aspects of the free-swinging failure-tolerance problem, from preparing for an expected failure to discovery of postfailure capabilities to establishing efficient methods to realize those capabilities. Developed algorithms were verified using computer-based dynamic simulations, and these were further verified using hardware experiments at Johnson Space Center.
BACT Simulation User Guide (Version 7.0)
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1997-01-01
This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.
Noise Radiation From a Leading-Edge Slat
NASA Technical Reports Server (NTRS)
Lockard, David P.; Choudhari, Meelan M.
2009-01-01
This paper extends our previous computations of unsteady flow within the slat cove region of a multi-element high-lift airfoil configuration, which showed that both statistical and structural aspects of the experimentally observed unsteady flow behavior can be captured via 3D simulations over a computational domain of narrow spanwise extent. Although such narrow domain simulation can account for the spanwise decorrelation of the slat cove fluctuations, the resulting database cannot be applied towards acoustic predictions of the slat without invoking additional approximations to synthesize the fluctuation field over the rest of the span. This deficiency is partially alleviated in the present work by increasing the spanwise extent of the computational domain from 37.3% of the slat chord to nearly 226% (i.e., 15% of the model span). The simulation database is used to verify consistency with previous computational results and, then, to develop predictions of the far-field noise radiation in conjunction with a frequency-domain Ffowcs-Williams Hawkings solver.
Höfler, K; Schwarzer, S
2000-06-01
Building on an idea of Fogelson and Peskin [J. Comput. Phys. 79, 50 (1988)] we describe the implementation and verification of a simulation technique for systems of non-Brownian particles in fluids at Reynolds numbers up to about 20 on the particle scale. This direct simulation technique fills a gap between simulations in the viscous regime and high-Reynolds-number modeling. It also combines sufficient computational accuracy with numerical efficiency and allows studies of several thousand, in principle arbitrarily shaped, extended and hydrodynamically interacting particles on regular workstations. We verify the algorithm in two and three dimensions for (i) single falling particles and (ii) a fluid flowing through a bed of fixed spheres. In the context of sedimentation we compute the volume fraction dependence of the mean sedimentation velocity. The results are compared with experimental and other numerical results both in the viscous and inertial regime and we find very satisfactory agreement.
A perspective on future directions in aerospace propulsion system simulation
NASA Technical Reports Server (NTRS)
Miller, Brent A.; Szuch, John R.; Gaugler, Raymond E.; Wood, Jerry R.
1989-01-01
The design and development of aircraft engines is a lengthy and costly process using today's methodology. This is due, in large measure, to the fact that present methods rely heavily on experimental testing to verify the operability, performance, and structural integrity of components and systems. The potential exists for achieving significant speedups in the propulsion development process through increased use of computational techniques for simulation, analysis, and optimization. This paper outlines the concept and technology requirements for a Numerical Propulsion Simulation System (NPSS) that would provide capabilities to do interactive, multidisciplinary simulations of complete propulsion systems. By combining high performance computing hardware and software with state-of-the-art propulsion system models, the NPSS will permit the rapid calculation, assessment, and optimization of subcomponent, component, and system performance, durability, reliability, and weight before committing to building hardware.
Simulation Speed Analysis and Improvements of Modelica Models for Building Energy Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorissen, Filip; Wetter, Michael; Helsen, Lieve
This paper presents an approach for speeding up Modelica models. Insight is provided into how Modelica models are solved and what determines the tool’s computational speed. Aspects such as algebraic loops, code efficiency and integrator choice are discussed. This is illustrated using simple building simulation examples and Dymola. The generality of the work is in some cases verified using OpenModelica. Using this approach, a medium sized office building, including building envelope, heating, ventilation and air conditioning (HVAC) systems, and control strategy, can be simulated at a speed five hundred times faster than real time.
NASA Astrophysics Data System (ADS)
Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye
2018-06-01
Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, 2D Poisson process and 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a higher error, whereas the cohesive force derived from the improved method was consistent with the suggested values. According to the comparison, the effectiveness and validity of the improved method were verified indirectly.
GPU-based prompt gamma ray imaging from boron neutron capture therapy.
Yoon, Do-Kun; Jung, Joo-Young; Jo Hong, Key; Sil Lee, Keum; Suk Suh, Tae
2015-01-01
The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.
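The ordered-subset expectation maximization algorithm mentioned above is a subset-accelerated variant of MLEM, whose multiplicative update is what a GPU parallelizes over image voxels. A minimal dense-matrix MLEM sketch (the system matrix, sizes, and iteration count are illustrative only):

```python
import numpy as np

def mlem(system_matrix, projections, n_iters=500):
    """Maximum-likelihood EM reconstruction.

    OSEM accelerates this by cycling the same multiplicative update over
    subsets of the projection rows, which is why it works with fewer
    projections per update.
    """
    A = np.asarray(system_matrix, dtype=float)
    x = np.ones(A.shape[1])                  # uniform initial image
    sensitivity = A.sum(axis=0) + 1e-12      # back-projection of ones
    for _ in range(n_iters):
        forward = A @ x + 1e-12              # forward projection of estimate
        x *= (A.T @ (projections / forward)) / sensitivity
    return x
```

Both the forward projection and the back-projection are independent per row/voxel, so each maps naturally onto one GPU thread per element.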
2015-01-01
Purpose: The aim of this study was to validate a computational fluid dynamics (CFD) simulation of flow-diverter treatment through Doppler ultrasonography measurements in patient-specific models of intracranial bifurcation and side-wall aneurysms. Methods: Computational and physical models of patient-specific bifurcation and sidewall aneurysms were constructed from computed tomography angiography with use of stereolithography, a three-dimensional printing technology. Flow dynamics parameters before and after flow-diverter treatment were measured with pulse-wave and color Doppler ultrasonography, and then compared with CFD simulations. Results: CFD simulations showed drastic flow reduction after flow-diverter treatment in both aneurysms. The mean volume flow rate decreased by 90% and 85% for the bifurcation aneurysm and the side-wall aneurysm, respectively. Velocity contour plots from computer simulations before and after flow diversion closely resembled the patterns obtained by color Doppler ultrasonography. Conclusion: The CFD estimation of flow reduction in aneurysms treated with a flow-diverting stent was verified by Doppler ultrasonography in patient-specific phantom models of bifurcation and side-wall aneurysms. The combination of CFD and ultrasonography may constitute a feasible and reliable technique in studying the treatment of intracranial aneurysms with flow-diverting stents. PMID:25754367
Solar and Heliospheric Observatory (SOHO) Flight Dynamics Simulations Using MATLAB (R)
NASA Technical Reports Server (NTRS)
Headrick, R. D.; Rowe, J. N.
1996-01-01
This paper describes a study to verify onboard attitude control laws in the coarse Sun-pointing (CSP) mode by simulation and to develop procedures for operational support for the Solar and Heliospheric Observatory (SOHO) mission. SOHO was launched on December 2, 1995, and the predictions of the simulation were verified with the flight data. This study used a commercial off-the-shelf product, MATLAB™, to do the following: Develop procedures for computing the parasitic torques for orbital maneuvers; Simulate onboard attitude control of roll, pitch, and yaw during orbital maneuvers; Develop procedures for predicting firing time for both on- and off-modulated thrusters during orbital maneuvers; Investigate the use of feed forward or pre-bias torques to reduce the attitude handoff during orbit maneuvers - in particular, determine how to use the flight data to improve the feed forward torque estimates for use on future maneuvers. The study verified the stability of the attitude control during orbital maneuvers and the proposed use of feed forward torques to compensate for the attitude handoff. Comparison of the simulations with flight data showed: Parasitic torques provided a good estimate of the on- and off-modulation for attitude control; The feed forward torque compensation scheme worked well to reduce attitude handoff during the orbital maneuvers. The work has been extended to prototype calibration of thrusters from observed firing time and observed reaction wheel speed changes.
Surgical robot setup simulation with consistent kinematics and haptics for abdominal surgery.
Hayashibe, Mitsuhiro; Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Konishi, Kozo; Kakeji, Yoshihiro; Hashizume, Makoto
2005-01-01
Preoperative simulation and planning of surgical robot setup should accompany advanced robotic surgery if their advantages are to be further pursued. Feedback from the planning system will play an essential role in computer-aided robotic surgery, in addition to preoperative detailed geometric information from patient CT/MRI images. Surgical robot setup simulation systems for appropriate trocar site placement have been developed, especially for abdominal surgery. The motion of the surgical robot can be simulated and rehearsed with kinematic constraints at the trocar site and the inverse kinematics of the robot. Results from simulation using clinical patient data verify the effectiveness of the proposed system.
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of the numerical control (NC) machine tool are selected to avoid installation problems. A mathematical model was proposed to estimate the cutting error by computing the relative position of the cutting point and tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. In order to verify the effectiveness of the proposed model, it was simulated and tested experimentally in a gear generating grinding process. The cutting error of the gear was estimated and the factors that induce cutting error were analyzed. The simulation and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the work-piece during the machining process.
Third-order accurate conservative method on unstructured meshes for gasdynamic simulations
NASA Astrophysics Data System (ADS)
Shirobokov, D. A.
2017-04-01
A third-order accurate finite-volume method on unstructured meshes is proposed for solving viscous gasdynamic problems. The method is described as applied to the advection equation. The accuracy of the method is verified by computing the evolution of a vortex on meshes of various degrees of detail with variously shaped cells. Additionally, unsteady flows around a cylinder and a symmetric airfoil are computed. The numerical results are presented in the form of plots and tables.
1985-04-01
and equipment whose operation can be verified with a visual or aural check. The sequence of outputs shall be cyclic, with provisions to stop the...private memory. The decision to provide spare, expansion capability, or a combination of both shall be based on life cycle cost (to the best extent...Computational System should be determined in conjunction with a computer expert (if possible). In any event, it is best to postpone completing - this
GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.
Liu, Yangchuan; Tang, Yuguo; Gao, Xin
2017-12-01
The GATE Monte Carlo simulation platform has good application prospects in treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server, and the test of Hadoop's fault tolerance showed that the simulation correctness is not affected by the failure of some worker nodes. The cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, the simulation time decreased by factors of 41 and 32 compared to the single-worker-node case and the single-threaded case, respectively. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
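The split-and-aggregate structure described above (self-contained sub-macros run in Map tasks, merged in Reduce tasks) can be sketched without Hadoop: split the history count, run the parts independently, and sum the partial dose maps. The toy dose tally below is a stand-in for a GATE run, not GATE itself:

```python
from functools import reduce
import numpy as np

def split_macro(total_primaries, n_workers):
    """Split a simulation into self-contained sub-jobs (the 'sub-macros')."""
    base, extra = divmod(total_primaries, n_workers)
    return [base + (1 if i < extra else 0) for i in range(n_workers)]

def run_sub_simulation(n_primaries, seed):
    """Stand-in for one GATE sub-macro: tally each history into an 8-voxel dose map."""
    rng = np.random.default_rng(seed)           # independent RNG stream per worker
    voxels = rng.integers(0, 8, size=n_primaries)
    return np.bincount(voxels, minlength=8).astype(float)

# Map phase: run sub-macros independently; Reduce phase: sum partial dose maps.
chunks = split_macro(1000, 4)
partials = [run_sub_simulation(n, seed=i) for i, n in enumerate(chunks)]
total_dose = reduce(np.add, partials)
```

Because dose tallies are additive and each sub-job carries its own seed, a failed worker can simply be re-run without affecting the merged result, which mirrors the fault tolerance observed in the study.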
Rotating shell eggs immersed in hot water for the purpose of pasteurization
USDA-ARS?s Scientific Manuscript database
Pasteurization of shell eggs for inactivation of Salmonella using hot water immersion can be used to improve their safety. The rotation of a shell egg immersed in hot water has previously been simulated by computational fluid dynamics (CFD); however, experimental data to verify the results do not ex...
Capturing, using, and managing quality assurance knowledge for shuttle post-MECO flight design
NASA Technical Reports Server (NTRS)
Peters, H. L.; Fussell, L. R.; Goodwin, M. A.; Schultz, Roger D.
1991-01-01
Ascent initialization values used by the Shuttle's onboard computer for nominal and abort mission scenarios are verified by a six-degree-of-freedom computer simulation. The procedure that the Ascent Post Main Engine Cutoff (Post-MECO) group uses to perform quality assurance (QA) of the simulation is time consuming. Also, the QA data, checklists, and associated rationale, though known by the group members, are not sufficiently documented, hindering transfer of knowledge and problem resolution. A new QA procedure which retains the current high level of integrity while reducing the time required to perform QA is needed to support the increasing Shuttle flight rate. Documenting the knowledge is also needed to increase its availability for training and problem resolution. To meet these needs, a knowledge capture process, embedded into the group activities, was initiated to verify the existing QA checks, define new ones, and document all rationale. The resulting checks were automated in a conventional software program to achieve the desired standardization, integrity, and time reduction. A prototype electronic knowledge base was developed with Macintosh's HyperCard to serve as a knowledge capture tool and data storage.
Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method
NASA Technical Reports Server (NTRS)
Boyd, Iain D.
1991-01-01
A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator, and the transition rate is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
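The Landau-Teller relaxation underlying such models drives the vibrational energy toward its local equilibrium value at the translational temperature. A sketch for a harmonic oscillator (the characteristic temperature defaults to roughly N2's 3371 K, and the relaxation time is a free input here rather than the experimental correlation):

```python
import math

def landau_teller_relax(T_trans, T_vib0, tau, dt, steps, theta_v=3371.0):
    """Relax vibrational energy toward equilibrium: de/dt = (e_eq - e) / tau.

    Returns the final vibrational temperature. theta_v is the characteristic
    vibrational temperature (~N2); tau is an assumed constant relaxation time.
    """
    def e_vib(T):  # harmonic-oscillator vibrational energy (per unit gas constant)
        return theta_v / (math.exp(theta_v / T) - 1.0)

    e_eq = e_vib(T_trans)
    e = e_vib(T_vib0)
    for _ in range(steps):                 # explicit Euler; assumes dt << tau
        e += dt * (e_eq - e) / tau
    return theta_v / math.log(1.0 + theta_v / e)  # invert e(T) back to T_vib
```

In a heat-bath test like the one cited, the vibrational temperature should approach the translational temperature exponentially with time constant tau.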
NASA Technical Reports Server (NTRS)
Matsuda, Y.
1974-01-01
A low-noise plasma simulation model is developed and applied to a series of linear and nonlinear problems associated with electrostatic wave propagation in a one-dimensional, collisionless, Maxwellian plasma, in the absence of magnetic field. It is demonstrated that use of the hybrid simulation model allows economical studies to be carried out in both the linear and nonlinear regimes with better quantitative results, for comparable computing time, than can be obtained by conventional particle simulation models, or direct solution of the Vlasov equation. The characteristics of the hybrid simulation model itself are first investigated, and it is shown to be capable of verifying the theoretical linear dispersion relation at wave energy levels as low as 10^-6 of the plasma thermal energy. Having established the validity of the hybrid simulation model, it is then used to study the nonlinear dynamics of a monochromatic wave, the sideband instability due to trapped particles, and satellite growth.
Putzer, David; Moctezuma, Jose Luis; Nogler, Michael
2017-11-01
An increasing number of orthopaedic surgeons are using computer-aided planning tools for bone removal applications. The aim of the study was to consolidate a set of generic functions to be used for 3D computer-assisted planning or simulation. A limited subset of 30 surgical procedures was analyzed and verified in 243 surgical procedures of a surgical atlas. Fourteen generic functions to be used in 3D computer-assisted planning and simulations were extracted. Our results showed that the average procedure comprises 14 ± 10 (SD) steps, with ten different generic planning steps and four generic bone removal steps. In conclusion, the study shows that with a limited number of 14 planning functions it is possible to perform 243 surgical procedures out of Campbell's Operative Orthopaedics atlas. The results may be used as a basis for versatile generic intraoperative planning software.
An immersed boundary method for modeling a dirty geometry data
NASA Astrophysics Data System (ADS)
Onishi, Keiji; Tsubokura, Makoto
2017-11-01
We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Reynolds-number flow around highly complex geometries. The method is achieved by dispersing the momentum via an axial linear projection and by an approximate domain assumption that satisfies mass conservation around the cells containing the wall. This methodology has been verified against analytical theory and wind tunnel experiment data. Next, we simulate the flow around a rotating object and demonstrate the applicability of this methodology to moving-geometry problems. This methodology shows promise for obtaining quick solutions on next-generation large-scale supercomputers. This research was supported by MEXT as ``Priority Issue on Post-K computer'' (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.
Simulation of charge exchange plasma propagation near an ion thruster propelled spacecraft
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Kaufman, H. R.; Winder, D. R.
1981-01-01
A model describing the charge exchange plasma and its propagation is discussed, along with a computer code based on the model. The geometry of an idealized spacecraft having an ion thruster is outlined, with attention given to the assumptions used in modeling the ion beam. Also presented is the distribution function describing charge exchange production. The barometric equation is used in relating the variation in plasma potential to the variation in plasma density. The numerical methods and approximations employed in the calculations are discussed, and comparisons are made between the computer simulation and experimental data. An analytical solution of a simple configuration is also used in verifying the model.
Implementation of a wireless communication system for a VGA capsule endoscope.
Moon, Yeon-Kwan; Lee, Jyung Hyun; Park, Hee-Joon; Cho, Jin-Ho; Choi, Hyun-Chul
2014-01-01
Recently, several medical devices that use wireless communication have been under development. In this paper, a small frequency shift keying (FSK) transmitter and a monofilar antenna are proposed for the capsule endoscope, enabling the medical device to transmit VGA-size images of the intestine. To verify the functionality of the proposed wireless communication system, computer simulations and animal experiments were performed with the implemented capsule endoscope that includes the proposed wireless communication system. Several fundamental experiments were carried out using the implemented transmitter and antenna, and in-vivo animal experiments were performed to verify VGA image transmission.
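As a rough illustration of the FSK signaling such a transmitter uses, a continuous-phase binary FSK waveform can be generated as follows (all names, frequencies, and rates are illustrative, not the paper's design values):

```python
import math

def fsk_samples(bits, f0, f1, fs, samples_per_bit):
    """Generate a continuous-phase binary FSK waveform: each bit selects
    one of two tone frequencies, and the phase is accumulated across bit
    boundaries so the carrier never jumps discontinuously."""
    phase = 0.0
    out = []
    for b in bits:
        f = f1 if b else f0
        for _ in range(samples_per_bit):
            out.append(math.cos(phase))
            phase += 2.0 * math.pi * f / fs
    return out

# Three bits at 20 samples per bit, sampled at 20 MHz.
wave = fsk_samples([1, 0, 1], f0=1.0e6, f1=2.0e6, fs=2.0e7, samples_per_bit=20)
```

Continuous phase keeps the spectrum compact, which matters for a low-power in-body transmitter.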
Simulation of quantum dynamics based on the quantum stochastic differential equation.
Li, Ming
2013-01-01
The quantum stochastic differential equation derived from the Lindblad form quantum master equation is investigated. The general formulation in terms of environment operators representing the quantum state diffusion is given. The numerical simulation algorithm of stochastic process of direct photodetection of a driven two-level system for the predictions of the dynamical behavior is proposed. The effectiveness and superiority of the algorithm are verified by the performance analysis of the accuracy and the computational cost in comparison with the classical Runge-Kutta algorithm.
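The direct-photodetection unravelling described above can be sketched as a Monte Carlo wave-function loop; this is a generic textbook version with a first-order Euler step, not the paper's algorithm, and all parameter values are illustrative:

```python
import math, random

def trajectory(Omega, gamma, dt, steps, seed=0):
    """Monte Carlo wave-function unravelling of a driven two-level atom
    under direct photodetection: deterministic non-Hermitian evolution
    interrupted by quantum jumps (photon detections) to the ground state."""
    random.seed(seed)
    g, e = 1.0 + 0j, 0.0 + 0j          # ground / excited amplitudes
    jumps = 0
    for _ in range(steps):
        pe = abs(e) ** 2
        if random.random() < gamma * pe * dt:   # photon detected: collapse
            g, e = 1.0 + 0j, 0.0 + 0j
            jumps += 1
        else:
            # H_eff = (Omega/2) sigma_x - i (gamma/2) |e><e|
            dg = -1j * (Omega / 2) * e
            de = -1j * (Omega / 2) * g - (gamma / 2) * e
            g, e = g + dt * dg, e + dt * de
            norm = math.sqrt(abs(g) ** 2 + abs(e) ** 2)
            g, e = g / norm, e / norm   # renormalize after no-jump evolution
    return jumps

n = trajectory(Omega=1.0, gamma=1.0, dt=0.01, steps=5000)
```

Averaging the photodetection record over many trajectories recovers the Lindblad master-equation dynamics; the paper's contribution concerns the accuracy and cost of this kind of scheme versus classical Runge-Kutta integration.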
Proton Straggling in Thick Silicon Detectors
NASA Technical Reports Server (NTRS)
Selesnick, R. S.; Baker, D. N.; Kanekal, S. G.
2017-01-01
Straggling functions for protons in thick silicon radiation detectors are computed by Monte Carlo simulation. Mean energy loss is constrained by the silicon stopping power, providing higher straggling at low energy and probabilities for stopping within the detector volume. By matching the first four moments of simulated energy-loss distributions, straggling functions are approximated by a log-normal distribution that is accurate for Vavilov kappa greater than or equal to 0.3. They are verified by comparison to experimental proton data from a charged particle telescope.
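Moment matching to a log-normal can be illustrated in its simplest form. The paper matches four moments; the two-moment version below (mean and variance only) is a simplified sketch with illustrative names and values:

```python
import math

def lognormal_params(mean, var):
    """Fit a two-parameter log-normal to a target mean and variance.
    (The paper matches four moments of the straggling distribution;
    this two-moment version is a simplified sketch of the idea.)"""
    sigma2 = math.log(1.0 + var / mean ** 2)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

mu, sigma = lognormal_params(mean=5.0, var=1.0)
# Round trip: the fitted parameters reproduce the requested mean.
back = math.exp(mu + 0.5 * sigma ** 2)
```

Matching higher moments (skewness, kurtosis) requires a shifted or generalized family, which is where the Vavilov-kappa validity limit enters.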
Numerical Field Model Simulation of Full Scale Fire Tests in a Closed Spherical/Cylindrical Vessel.
1987-12-01
...the behavior of an actual fire on board a ship. The computer model will be verified by the experimental data obtained in Fire-I. ...behavior in simulations where convection is important. The upwind differencing scheme takes into account the unsymmetrical phenomenon of convection...
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
NASA Technical Reports Server (NTRS)
1999-01-01
The Hubble Space Telescope (HST) team is preparing for NASA's third scheduled service call to Hubble. This mission, STS-103, will launch from Kennedy Space Center aboard the Space Shuttle Discovery. The seven flight crew members are Commander Curtis L. Brown, Pilot Scott J. Kelly, European Space Agency (ESA) astronaut Jean-Francois Clervoy who will join space walkers Steven L. Smith, C. Michael Foale, John M. Grunsfeld, and ESA astronaut Claude Nicollier. The objectives of the HST Third Servicing Mission (SM3A) are to replace the telescope's six gyroscopes, a Fine-Guidance Sensor, an S-Band Single Access Transmitter, a spare solid-state recorder and a high-voltage/temperature kit for protecting the batteries from overheating. In addition, the crew plans to install an advanced computer that is 20 times faster and has six times the memory of the current Hubble Space Telescope computer. To prepare for these extravehicular activities (EVAs), the SM3A astronauts participated in Crew Familiarization sessions with the actual SM3A flight hardware. During these sessions the crew spent long hours rehearsing their space walks in the Guidance Navigation Simulator and NBL (Neutral Buoyancy Laboratory). Using space gloves, flight Space Support Equipment (SSE), and Crew Aids and Tools (CATs), the astronauts trained with and verified flight orbital replacement unit (ORU) hardware. The crew worked with a number of trainers and simulators, such as the High Fidelity Mechanical Simulator, Guidance Navigation Simulator, System Engineering Simulator, the Aft Shroud Door Trainer, the Forward Shell/Light Shield Simulator, and the Support Systems Module Bay Doors Simulator. They also trained and verified the flight Orbital Replacement Unit Carrier (ORUC) and its ancillary hardware. Discovery's planned 10-day flight is scheduled to end with a night landing at Kennedy.
NASA Technical Reports Server (NTRS)
Aretskin-Hariton, Eliot D.; Zinnecker, Alicia Mae; Culley, Dennis E.
2014-01-01
Distributed Engine Control (DEC) is an enabling technology that has the potential to advance the state-of-the-art in gas turbine engine control. To analyze the capabilities that DEC offers, a Hardware-In-the-Loop (HIL) test bed is being developed at NASA Glenn Research Center. This test bed will support a systems-level analysis of control capabilities in closed-loop engine simulations. The structure of the HIL emulates a virtual test cell by implementing the operator functions, control system, and engine on three separate computers. This implementation increases the flexibility and extensibility of the HIL. Here, a method is discussed for implementing these interfaces by connecting the three platforms over a dedicated Local Area Network (LAN). This approach is verified using the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k), which is typically implemented on one computer. There are marginal differences between the results from simulation of the typical and the three-computer implementation. Additional analysis of the LAN network, including characterization of network load, packet drop, and latency, is presented. The three-computer setup supports the incorporation of complex control models and proprietary engine models into the HIL framework.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2012-01-01
This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust, and the result is compared with the theoretical result. The present simulations are also compared with other CFD gust simulations. This paper additionally serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced-order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced-order model and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
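The one-minus-cosine gust used in the ARMA comparison is a standard profile. A minimal sketch, using the conventional symbols U, H, V for peak gust velocity, gust gradient distance, and airspeed (names chosen here, not taken from the paper):

```python
import math

def one_minus_cosine_gust(t, U, H, V):
    """Standard one-minus-cosine gust: the vertical gust velocity ramps
    from zero to the peak U and back over a gust length 2H, traversed
    at airspeed V."""
    s = V * t                      # distance travelled into the gust
    if s < 0.0 or s > 2.0 * H:
        return 0.0
    return 0.5 * U * (1.0 - math.cos(math.pi * s / H))

# Peak occurs at mid-gust (s = H): here at t = 1 s for V = 100, H = 100.
peak = one_minus_cosine_gust(t=1.0, U=10.0, H=100.0, V=100.0)
```

Feeding a sequence of such profiles through both the full CFD code and the ARMA model is the comparison the abstract describes.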
Chiastra, Claudio; Wu, Wei; Dickerhoff, Benjamin; Aleiou, Ali; Dubini, Gabriele; Otake, Hiromasa; Migliavacca, Francesco; LaDisa, John F
2016-07-26
The optimal stenting technique for coronary artery bifurcations is still debated. With additional advances, computational simulations can soon be used to compare stent designs or strategies, based on verified structural and hemodynamic results, in order to identify the optimal solution for each individual's anatomy. In this study, patient-specific simulations of stent deployment were performed for 2 cases to replicate the complete procedure conducted by interventional cardiologists. Subsequent computational fluid dynamics (CFD) analyses were conducted to quantify hemodynamic quantities linked to restenosis. Patient-specific pre-operative models of coronary bifurcations were reconstructed from CT angiography and optical coherence tomography (OCT). Plaque location and composition were estimated from OCT and assigned to the models, and structural simulations were performed in Abaqus. Artery geometries after virtual stent expansion of Xience Prime or Nobori stents created in SolidWorks were compared to post-operative geometry from OCT and CT before being extracted and used for CFD simulations in SimVascular. Inflow boundary conditions were based on body surface area, and downstream vascular resistances and capacitances were applied at branches to mimic physiology. Artery geometries obtained after virtual expansion were in good agreement with those reconstructed from patient images. Quantitative comparison of the distance between reconstructed and post-stent geometries revealed a maximum difference in area of 20.4%. Adverse indices of wall shear stress were more pronounced for thicker Nobori stents in both patients. These findings verify structural analyses of stent expansion, introduce a workflow to combine software packages for solid and fluid mechanics analysis, and underscore important stent design features from prior idealized studies. The proposed approach may ultimately be useful in determining an optimal choice of stent and position for each patient.
A computer tool to support in design of industrial Ethernet.
Lugli, Alexandre Baratella; Santos, Max Mauro Dias; Franco, Lucia Regina Horta Rodrigues
2009-04-01
This paper presents a computer tool to support the design and development of an industrial Ethernet network, verifying the physical layer (cable resistance and capacitance, scan time, network power supply including the "Power over Ethernet" (POE) concept, and wireless) and the occupation rate (amount of information transmitted on the network versus the controller network scan time). These functions are accomplished without a single physical element installed in the network, using only simulation. The tool's software presents a detailed view of the network to the user, flags possible network problems, and provides an extremely friendly environment.
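The occupation-rate check described above amounts to comparing the total frame transmission time against the controller scan time. A minimal sketch under that interpretation (function name, frame sizes, and rates are illustrative, not the tool's actual model):

```python
def occupation_rate(frame_bits, bit_rate_bps, scan_time_s):
    """Fraction of the controller's scan time consumed by transmitting
    the listed frames: total transmission time divided by scan time.
    Values near or above 1.0 indicate the network cannot keep up."""
    tx_time = sum(bits / bit_rate_bps for bits in frame_bits)
    return tx_time / scan_time_s

# Three frames on 100 Mbit/s Ethernet against a 1 ms controller scan time.
rho = occupation_rate([8000, 8000, 4000], bit_rate_bps=100e6, scan_time_s=1e-3)
```

A real tool would also account for inter-frame gaps, preamble overhead, and switch latency, which this sketch omits.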
GPU-based prompt gamma ray imaging from boron neutron capture therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Do-Kun; Jung, Joo-Young; Suk Suh, Tae, E-mail: suhsanta@catholic.ac.kr
Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusions: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.
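The ordered-subset EM update at the heart of such a reconstruction can be sketched on a toy system (plain Python, no GPU; the paper's modified OSEM and its system model are not reproduced here, and all values are illustrative):

```python
def osem(A, y, subsets, iters, n_pix):
    """Ordered-subset expectation maximization: each sub-iteration updates
    the image using only one subset of the projection rows, which is what
    gives OSEM its speed-up over plain MLEM."""
    x = [1.0] * n_pix
    for _ in range(iters):
        for sub in subsets:
            # Forward-project the current image over this subset of rows.
            fp = {i: sum(A[i][j] * x[j] for j in range(n_pix)) for i in sub}
            for j in range(n_pix):
                sens = sum(A[i][j] for i in sub)   # subset sensitivity
                if sens > 0.0:
                    x[j] *= sum(A[i][j] * y[i] / fp[i]
                                for i in sub if fp[i] > 0.0) / sens
    return x

# Toy 4-projection / 2-pixel problem split into two subsets.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
true = [2.0, 3.0]
y = [sum(A[i][j] * true[j] for j in range(2)) for i in range(4)]
x = osem(A, y, subsets=[[0, 1], [2, 3]], iters=20, n_pix=2)
```

On a GPU, the forward and back projections become large parallel matrix operations, which is where the reported 196x speed-up comes from.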
TU-FG-BRB-07: GPU-Based Prompt Gamma Ray Imaging From Boron Neutron Capture Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S; Suh, T; Yoon, D
Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusion: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray reconstruction using the GPU computation for BNCT simulations.
NASA Technical Reports Server (NTRS)
Manshadi, F.
1986-01-01
A low-loss bandstop filter designed and developed for the Deep Space Network's 34-meter high-efficiency antennas is described. The filter is used for protection of the X-band traveling wave masers from the 20-kW transmitter signal. A combination of empirical and theoretical techniques was employed as well as computer simulation to verify the design before fabrication.
Kuniansky, E.L.
1990-01-01
A computer program based on the Galerkin finite-element method was developed to simulate two-dimensional steady-state ground-water flow in either isotropic or anisotropic confined aquifers. The program may also be used for unconfined aquifers of constant saturated thickness. Constant head, constant flux, and head-dependent flux boundary conditions can be specified in order to approximate a variety of natural conditions, such as a river or lake boundary, and pumping wells. The computer program was developed for the preliminary simulation of ground-water flow in the Edwards-Trinity Regional aquifer system as part of the Regional Aquifer-Systems Analysis Program. Results of the program compare well to analytical solutions and simulations from published finite-difference models. A concise discussion of the Galerkin method is presented along with a description of the program. Provided in the Supplemental Data section are a listing of the computer program, definitions of selected program variables, and several examples of data input and output used in verifying the accuracy of the program.
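The report's program uses Galerkin finite elements; as an illustration of the kind of steady-state confined-flow problem such programs are verified against, here is a minimal finite-difference analogue (not the program itself; all names and values are illustrative):

```python
def solve_head(nx, ny, left, right, sweeps=2000):
    """Steady-state head in a homogeneous confined aquifer (Laplace
    equation) on a rectangle, with fixed heads on the left/right edges
    and no-flow on top/bottom, solved by Gauss-Seidel sweeps. A minimal
    finite-difference analogue of the verification problems the Galerkin
    program was compared against."""
    h = [[0.5 * (left + right)] * nx for _ in range(ny)]
    for row in h:
        row[0], row[-1] = left, right
    for _ in range(sweeps):
        for i in range(ny):
            for j in range(1, nx - 1):
                up = h[i - 1][j] if i > 0 else h[i + 1][j]      # no-flow mirror
                dn = h[i + 1][j] if i < ny - 1 else h[i - 1][j]
                h[i][j] = 0.25 * (h[i][j - 1] + h[i][j + 1] + up + dn)
    return h

h = solve_head(nx=11, ny=5, left=100.0, right=90.0)
# With no-flow top/bottom, the head varies linearly between the fixed edges.
```

The analytical solution here is a linear head gradient, which makes this a convenient accuracy check for either discretization.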
Quantum rewinding via phase estimation
NASA Astrophysics Data System (ADS)
Tabia, Gelo Noel
2015-03-01
In cryptography, the notion of a zero-knowledge proof was introduced by Goldwasser, Micali, and Rackoff. An interactive proof system is said to be zero-knowledge if any verifier interacting with an honest prover learns nothing beyond the validity of the statement being proven. With recent advances in quantum information technologies, it has become interesting to ask if classical zero-knowledge proof systems remain secure against adversaries with quantum computers. The standard approach to show the zero-knowledge property involves constructing a simulator for a malicious verifier that can be rewound to a previous step when the simulation fails. In the quantum setting, the simulator can be described by a quantum circuit that takes an arbitrary quantum state as auxiliary input, but rewinding becomes a nontrivial issue. Watrous proposed a quantum rewinding technique for the case where the simulation's success probability is independent of the auxiliary input. Here I present a more general quantum rewinding scheme that employs the quantum phase estimation algorithm. This work was funded by institutional research grant IUT2-1 from the Estonian Research Council and by the European Union through the European Regional Development Fund.
Tree-Structured Digital Organisms Model
NASA Astrophysics Data System (ADS)
Suzuki, Teruhiko; Nobesawa, Shiho; Tahara, Ikuo
Tierra and Avida are well-known models of digital organisms. They describe a life process as a sequence of computation codes. A linear sequence model may not be the only way to describe a digital organism, though it is very simple for a computer-based model. Thus we propose a new digital organism model based on a tree structure, which is rather similar to genetic programming. In our model, a life process is a combination of various functions, as life in the real world is. This implies that our model can easily describe the hierarchical structure of life, and it can simulate evolutionary computation through the mutual interaction of functions. We verified by simulation that our model can be regarded as a digital organism model according to its definitions. Our model even succeeded in creating species such as viruses and parasites.
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
A suite of exercises for verifying dynamic earthquake rupture codes
Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis
2018-01-01
We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.
Generalized simulation technique for turbojet engine system analysis
NASA Technical Reports Server (NTRS)
Seldner, K.; Mihaloew, J. R.; Blaha, R. J.
1972-01-01
A nonlinear analog simulation of a turbojet engine was developed. The purpose of the study was to establish simulation techniques applicable to propulsion system dynamics and controls research. A schematic model was derived from a physical description of a J85-13 turbojet engine. Basic conservation equations were applied to each component along with their individual performance characteristics to derive a mathematical representation. The simulation was mechanized on an analog computer. The simulation was verified in both steady-state and dynamic modes by comparing analytical results with experimental data obtained from tests performed at the Lewis Research Center with a J85-13 engine. In addition, comparison was also made with performance data obtained from the engine manufacturer. The comparisons established the validity of the simulation technique.
A system for automatic evaluation of simulation software
NASA Technical Reports Server (NTRS)
Ryan, J. P.; Hodges, B. C.
1976-01-01
Within the field of computer software, simulation and verification are complementary processes. Simulation methods can be used to verify software by performing variable range analysis. More general verification procedures, such as those described in this paper, can be implicitly viewed as attempts at modeling the end-product software. From software requirement methodology, each component of the verification system has some element of simulation in it. Conversely, general verification procedures can be used to analyze simulation software. A dynamic analyzer is described which can be used to obtain properly scaled variables for an analog simulation, which is first digitally simulated. In a similar way, it is thought that the other system components, and indeed the whole system itself, have the potential of being effectively used in a simulation environment.
NASA Astrophysics Data System (ADS)
Zhuo, Congshan; Zhong, Chengwen
2016-11-01
In this paper, a three-dimensional filter-matrix lattice Boltzmann (FMLB) model based on large eddy simulation (LES) was verified for simulating wall-bounded turbulent flows. The Vreman subgrid-scale model was employed in the present FMLB-LES framework, which has been shown to predict the turbulent near-wall region accurately. Fully developed turbulent channel flows were computed at a friction Reynolds number Reτ of 180. The turbulence statistics computed from the present FMLB-LES simulations, including the mean streamwise velocity profile, Reynolds stress profile, and root-mean-square velocity fluctuations, agreed well with the LES results of the multiple-relaxation-time (MRT) LB model, and some discrepancies relative to the direct numerical simulation (DNS) data of Kim et al. were also observed due to the relatively low grid resolution. Moreover, to investigate the influence of grid resolution on the present LES simulation, a DNS simulation on a finer grid was also carried out with the present FMLB-D3Q19 model. Detailed comparisons of the computed turbulence statistics with available DNS benchmark data showed quite good agreement.
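The Vreman subgrid-scale model used in the FMLB-LES framework computes an eddy viscosity directly from the velocity-gradient tensor. The sketch below transcribes the published formula; the model constant c ≈ 0.07 is the commonly quoted value, not necessarily the one used in the paper:

```python
import math

def vreman_nu_t(grad, delta, c=0.07):
    """Vreman (2004) subgrid eddy viscosity from the velocity-gradient
    tensor grad[i][j] = du_j/dx_i and a uniform filter width delta.
    The viscosity vanishes identically in laminar pure shear, which is
    why the model behaves well in the near-wall region."""
    a = grad
    aa = sum(a[i][j] ** 2 for i in range(3) for j in range(3))
    if aa == 0.0:
        return 0.0
    b = [[delta ** 2 * sum(a[m][i] * a[m][j] for m in range(3))
          for j in range(3)] for i in range(3)]
    Bb = (b[0][0] * b[1][1] - b[0][1] ** 2
          + b[0][0] * b[2][2] - b[0][2] ** 2
          + b[1][1] * b[2][2] - b[1][2] ** 2)
    return c * math.sqrt(max(Bb, 0.0) / aa)

# Pure shear du/dy = 1: the Vreman viscosity is exactly zero.
shear = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
nu = vreman_nu_t(shear, delta=0.1)
```

In an LES code this evaluation runs per cell per time step, with the gradients supplied by the discretization (here, the filter-matrix LB scheme).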
NASA Astrophysics Data System (ADS)
Barlow, Steven J.
1986-09-01
The Air Force needs a better method of designing new and retrofit heating, ventilating and air conditioning (HVAC) control systems. Air Force engineers currently use manual design/predict/verify procedures taught at the Air Force Institute of Technology, School of Civil Engineering, HVAC Control Systems course. These existing manual procedures are iterative and time-consuming. The objectives of this research were to: (1) Locate and, if necessary, modify an existing computer-based method for designing and analyzing HVAC control systems that is compatible with the HVAC Control Systems manual procedures, or (2) Develop a new computer-based method of designing and analyzing HVAC control systems that is compatible with the existing manual procedures. Five existing computer packages were investigated in accordance with the first objective: MODSIM (for modular simulation), HVACSIM (for HVAC simulation), TRNSYS (for transient system simulation), BLAST (for building load and system thermodynamics) and Elite Building Energy Analysis Program. None were found to be compatible or adaptable to the existing manual procedures, and consequently, a prototype of a new computer method was developed in accordance with the second research objective.
The GOCE end-to-end system simulator
NASA Astrophysics Data System (ADS)
Catastini, G.; Cesare, S.; de Sanctis, S.; Detoma, E.; Dumontel, M.; Floberghagen, R.; Parisch, M.; Sechi, G.; Anselmi, A.
2003-04-01
The idea of an end-to-end simulator was conceived in the early stages of the GOCE programme as an essential tool for assessing the satellite system performance, which cannot be fully tested on the ground. The simulator in its present form is under development at Alenia Spazio for ESA since the beginning of Phase B and is being used for checking the consistency of the spacecraft and of the payload specifications with the overall system requirements, supporting trade-off, sensitivity and worst-case analyses, and preparing and testing the on-ground and in-flight calibration concepts. The software simulates the GOCE flight along an orbit resulting from the application of Earth's gravity field, non-conservative environmental disturbances (atmospheric drag, coupling with Earth's magnetic field, etc.) and control forces/torques. The drag-free control forces as well as the attitude control torques are generated by the current design of the dedicated algorithms. Realistic sensor models (star tracker, GPS receiver and gravity gradiometer) feed the control algorithms and the commanded forces are applied through realistic thruster models. The output of this stage of the simulator is a time series of Level-0 data, namely the gradiometer raw measurements and spacecraft ancillary data.
The next stage of the simulator transforms Level-0 data into Level-1b (gravity gradient tensor) data by implementing the following steps:
- transformation of the raw measurements of each pair of accelerometers into common and differential accelerations
- calibration of the common and differential accelerations
- application of the post-facto algorithm to rectify the phase of the accelerations and to estimate the GOCE angular velocity and attitude
- computation of the Level-1b gravity gradient tensor from the calibrated accelerations and estimated angular velocity in different reference frames (orbital, inertial, earth-fixed); computation of the spectral density of the error of the tensor diagonal components (measured gravity gradient minus input gravity gradient) in order to verify the requirement on the error of gravity gradient of 4 mE/sqrt(Hz) within the gradiometer measurement bandwidth (5 to 100 mHz); computation of the spectral density of the tensor trace in order to verify the requirement of 4 sqrt(3) mE/sqrt(Hz) within the measurement bandwidth
- processing of GPS observations for orbit reconstruction within the required 10 m accuracy and for gradiometer measurement geolocation
The current version of the end-to-end simulator, essentially focusing on the gradiometer payload, is undergoing detailed testing based on a time span of 10 days of simulated flight. This testing phase, ending in January 2003, will verify the current implementation and conclude the assessment of numerical stability and precision. Following that, the exercise will be repeated on a longer-duration simulated flight and the lessons learnt so far will be exploited to further improve the simulator's fidelity. The paper will describe the simulator's current status and will illustrate its capabilities for supporting the assessment of the quality of the scientific products resulting from the current spacecraft and payload design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwan, T.J.T.; Moir, D.C.; Snell, C.M.
In high resolution flash x-ray imaging technology the electric field developed between the electron beam and the converter target is large enough to draw ions from the target surface. The ions provide fractional neutralization and cause the electron beam to focus radially inward, and the focal point subsequently moves upstream due to the expansion of the ion column. A self-bias target concept is proposed, and computer simulation verifies that the electron charge deposited on the target can generate an electric potential that effectively limits the ion motion and thereby stabilizes the growth of the spot size. A target chamber using the self-bias target concept was designed and tested in the Integrated Test Stand (ITS). The authors have obtained good agreement between computer simulation and experiment.
Response of an all digital phase-locked loop
NASA Technical Reports Server (NTRS)
Garodnick, J.; Greco, J.; Schilling, D. L.
1974-01-01
An all digital phase-locked loop (DPLL) is designed, analyzed, and tested. Three specific configurations are considered, generating first, second, and third order DPLL's; and it is found, using a computer simulation of a noise spike, and verified experimentally, that of these configurations the second-order system is optimum from the standpoint of threshold extension. This substantiates results obtained for analog PLL's.
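A second-order DPLL of the kind found optimal above combines a phase detector, a proportional-plus-integral loop filter, and a numerically controlled oscillator. A minimal noiseless sketch (gains, rates, and names are illustrative, not the paper's design):

```python
import math

def dpll_track(f_in, fs, n, kp=0.1, ki=0.01):
    """Second-order all-digital phase-locked loop: a phase detector,
    a proportional-plus-integral loop filter (the integrator makes the
    loop second order, so it tracks a constant frequency offset with
    zero steady-state phase error), and an NCO.  Returns the final
    phase error."""
    phase_nco = 0.0
    integ = 0.0
    err = 0.0
    for k in range(n):
        phase_in = 2.0 * math.pi * f_in * k / fs
        err = math.atan2(math.sin(phase_in - phase_nco),
                         math.cos(phase_in - phase_nco))  # wrapped error
        integ += ki * err
        phase_nco += integ + kp * err      # NCO phase step = loop-filter output
    return err

# Track a 1 kHz input sampled at 48 kHz; the residual error decays to ~0.
residual = dpll_track(f_in=1000.0, fs=48000.0, n=4000)
```

A first-order loop (ki = 0) would instead settle to a constant nonzero phase error proportional to the frequency offset, which is one way to see why the second-order configuration wins on threshold behavior.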
NASA Astrophysics Data System (ADS)
Yun, Lingtong; Zhao, Hongzhong; Du, Mengyuan
2018-04-01
Quadrature error and multi-channel amplitude-phase error must be compensated in I/Q quadrature sampling and in signals passing through multiple channels. This paper presents a new method that requires neither a filter nor a standard signal and that can jointly estimate the quadrature and multi-channel amplitude-phase errors. The method uses the cross-correlation and the amplitude ratio between the signals to estimate the two amplitude-phase errors simply and effectively. The advantages of the method are verified by computer simulation. Finally, its superiority is also verified with measured data from outfield experiments.
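The cross-correlation and amplitude-ratio estimates the method relies on can be sketched for a single channel pair (complex baseband signals assumed; names and test values are illustrative):

```python
import cmath, math

def channel_mismatch(ref, ch):
    """Estimate the amplitude ratio and phase offset of one complex
    baseband channel relative to a reference: the angle of the
    cross-correlation gives the phase error, and the power ratio gives
    the amplitude error."""
    xcorr = sum(r.conjugate() * c for r, c in zip(ref, ch))
    phase = cmath.phase(xcorr)
    amp = math.sqrt(sum(abs(c) ** 2 for c in ch) / sum(abs(r) ** 2 for r in ref))
    return amp, phase

# Reference tone and a copy scaled by 1.2 and rotated by 0.3 rad.
ref = [cmath.exp(1j * 0.1 * k) for k in range(256)]
ch = [1.2 * cmath.exp(1j * 0.3) * r for r in ref]
amp, phase = channel_mismatch(ref, ch)
```

Applying the estimated inverse gain and rotation to each channel is the compensation step; the paper's contribution is doing this jointly across channels without a calibration signal.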
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth; Jang, Jiann-Woei; McCants, Edward; Omohundro, Zachary; Ring, Tom; Templeton, Jeremy; Zoss, Jeremy; Wallace, Jonathan; Ziegler, Philip
2011-01-01
Draper Station Analysis Tool (DSAT) is a computer program, built on commercially available software, for simulating and analyzing complex dynamic systems. Heretofore used in designing and verifying guidance, navigation, and control systems of the International Space Station, DSAT has a modular architecture that lends itself to modification for application to spacecraft or terrestrial systems. DSAT consists of user-interface, data-structures, simulation-generation, analysis, plotting, documentation, and help components. DSAT automates the construction of simulations and the process of analysis. DSAT provides a graphical user interface (GUI), plus a Web-enabled interface, similar to the GUI, that enables a remotely located user to gain access to the full capabilities of DSAT via the Internet and Web-browser software. Data structures are used to define the GUI, the Web-enabled interface, simulations, and analyses. Three data structures define the type of analysis to be performed: closed-loop simulation, frequency response, and/or stability margins. DSAT can be executed on almost any workstation, desktop, or laptop computer. DSAT provides better than an order of magnitude improvement in cost, schedule, and risk assessment for simulation-based design and verification of complex dynamic systems.
Kernodle, John Michael
1981-01-01
A two-dimensional ground-water flow model of the Eutaw-McShan and Gordo aquifers in the area of Lee County, Miss., was successfully calibrated and verified using data from six long-term observation wells and two intensive studies of areal water levels. The water levels computed by the model were found to be most sensitive to changes in simulated aquifer hydraulic conductivity and to changes in head in the overlying Coffee Sand aquifer. The two-dimensional model performed reasonably well in simulating the aquifer system except possibly in southern Lee County and southward where a clay bed at the top of the Gordo Formation partially isolated the Gordo from the overlying Eutaw-McShan aquifer. The verified model was used to determine theoretical aquifer response to increased ground-water withdrawal to the year 2000. Two estimated rates of increase and five possible well field locations were examined. (USGS)
Simulation of Local Blood Flow in Human Brain under Altered Gravity
NASA Technical Reports Server (NTRS)
Kim, Chang Sung; Kiris, Cetin; Kwak, Dochan
2003-01-01
In addition to altered gravitational forces, the specific shapes and connections of arteries in the brain vary across the human population (Cebral et al., 2000; Ferrandez et al., 2002). Given these geometric variations, pulsatile unsteadiness, and moving walls, a computational approach to analyzing altered blood circulation offers an economical alternative to experiments. This paper presents a computational approach for modeling local blood flow through the human brain under altered gravity. The approach has been verified against steady and unsteady experimental measurements and then applied to unsteady blood flow through a carotid bifurcation model and an idealized Circle of Willis (COW) configuration under altered gravity conditions.
Kinetics of the electric double layer formation modelled by the finite difference method
NASA Astrophysics Data System (ADS)
Valent, Ivan
2017-11-01
The dynamics of electric double layer formation in a 100 mM NaCl solution for sudden potential steps of 10 and 20 mV were simulated using the Poisson-Nernst-Planck theory and the VLUGR2 solver for partial differential equations. The approach was verified by comparing the obtained steady-state solution with the available exact solution. The simulations allowed a detailed analysis of the relaxation processes of the individual ions and of the electric potential. Some computational aspects of the problem are also discussed.
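As a sanity check on such simulations, the equilibrium double-layer thickness for a 1:1 electrolyte is set by the Debye length, which for 100 mM NaCl at room temperature is just under 1 nm. The standard formula (textbook expression and CODATA constants; not taken from the paper) is easy to evaluate:

```python
import math

def debye_length(c_molar, T=298.15, eps_r=78.5):
    """Debye screening length (m) for a 1:1 electrolyte of molarity c_molar."""
    e = 1.602176634e-19        # elementary charge, C
    kB = 1.380649e-23          # Boltzmann constant, J/K
    NA = 6.02214076e23         # Avogadro constant, 1/mol
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
    n = c_molar * 1000.0 * NA  # ion number density per species, 1/m^3
    return math.sqrt(eps0 * eps_r * kB * T / (2.0 * n * e ** 2))
```

For c = 0.1 M this gives roughly 0.96 nm, so the relaxation processes the paper resolves occur on sub-nanometre length scales.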
Pressure profiles of the BRing based on the simulation used in the CSRm
NASA Astrophysics Data System (ADS)
Wang, J. C.; Li, P.; Yang, J. C.; Yuan, Y. J.; Wu, B.; Chai, Z.; Luo, C.; Dong, Z. Q.; Zheng, W. H.; Zhao, H.; Ruan, S.; Wang, G.; Liu, J.; Chen, X.; Wang, K. D.; Qin, Z. M.; Yin, B.
2017-07-01
HIAF-BRing, a new multipurpose accelerator facility of the High Intensity heavy-ion Accelerator Facility project, requires an extremely high vacuum, lower than 10⁻¹¹ mbar, to fulfill the requirements of radioactive beam physics and high energy density physics. To achieve the required process pressure, the benchmarked codes VAKTRAK and Molflow+ are used to simulate the pressure profiles of the BRing system. To ensure the accuracy of the VAKTRAK implementation, the computational results are verified against measured pressure data and compared with a new simulation code, BOLIDE, on the current synchrotron CSRm. With VAKTRAK thus verified, the pressure profiles of the BRing are calculated with different parameters such as conductance, out-gassing rates and pumping speeds. According to the computational results, the optimal parameters are selected to achieve the required pressure for the BRing.
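For a uniformly outgassing beam tube pumped at both ends, the steady-state pressure profile has a simple closed form that codes like VAKTRAK generalize. A sketch under textbook assumptions (uniform specific conductance c, outgassing rate q per unit length, pump speed S at each end; all parameter values in the test are illustrative, not BRing's):

```python
import numpy as np

def pressure_profile(x, L, c, q, S):
    """Steady-state pressure along a tube of length L (m), pumped at both
    ends with speed S (l/s), specific conductance c (l*m/s), and uniform
    outgassing q (mbar*l/s per metre).  Solves c*p'' = -q with symmetric
    pump boundary conditions (textbook 1-D model)."""
    return q * (L * x - x ** 2) / (2.0 * c) + q * L / (2.0 * S)
```

The profile is a parabola peaking midway between pumps, with the pump-port pressure fixed by half the total gas load divided by the pump speed.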
NASA Technical Reports Server (NTRS)
Glaese, John R.; Tobbe, Patrick A.
1986-01-01
The Space Station Mechanism Test Bed consists of a hydraulically driven, computer controlled six degree of freedom (DOF) motion system with which docking, berthing, and other mechanisms can be evaluated. Measured contact forces and moments are provided to the simulation host computer to enable representation of orbital contact dynamics. This report describes the development of a generalized math model which represents the relative motion between two rigid orbiting vehicles. The model allows motion in six DOF for each body, with no vehicle size limitation. The rotational and translational equations of motion are derived. The method used to transform the forces and moments from the sensor location to the vehicles' centers of mass is also explained. Two math models of docking mechanisms, a simple translational spring and the Remote Manipulator System end effector, are presented along with simulation results. The translational spring model is used in an attempt to verify the simulation with compensated hardware in the loop results.
Analytical solutions for coagulation and condensation kinetics of composite particles
NASA Astrophysics Data System (ADS)
Piskunov, Vladimir N.
2013-04-01
The processes of composite particles formation consisting of a mixture of different materials are essential for many practical problems: for analysis of the consequences of accidental releases in atmosphere; for simulation of precipitation formation in clouds; for description of multi-phase processes in chemical reactors and industrial facilities. Computer codes developed for numerical simulation of these processes require optimization of computational methods and verification of numerical programs. Kinetic equations of composite particle formation are given in this work in a concise form (impurity integrated). Coagulation, condensation and external sources associated with nucleation are taken into account. Analytical solutions were obtained in a number of model cases. The general laws for fraction redistribution of impurities were defined. The results can be applied to develop numerical algorithms considerably reducing the simulation effort, as well as to verify the numerical programs for calculation of the formation kinetics of composite particles in the problems of practical importance.
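One of the classic analytic solutions usable for such verification is the Smoluchowski coagulation equation with a constant kernel K, for which the total particle number decays as n(t) = n0 / (1 + K n0 t / 2). A sketch comparing a naive numerical integration against that closed form (a standard benchmark case, not one of the paper's specific solutions):

```python
def total_number_numeric(n0, K, t_end, steps=100000):
    """Forward-Euler integration of dn/dt = -K * n^2 / 2."""
    n, dt = n0, t_end / steps
    for _ in range(steps):
        n -= 0.5 * K * n * n * dt
    return n

def total_number_exact(n0, K, t):
    """Closed-form total number for the constant-kernel Smoluchowski equation."""
    return n0 / (1.0 + 0.5 * K * n0 * t)
```

Agreement between the two is exactly the kind of check the paper's analytical solutions enable for more elaborate composite-particle kinetics.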
Park, Hyun June; Park, Kyungmoon; Kim, Yong Hwan; Yoo, Young Je
2014-12-20
Candida antarctica lipase B (CalB) is one of the most useful enzymes for various reactions and bioconversions, and enhancing its thermostability is required for industrial applications. In this study, we propose a computational design strategy to improve the thermostability of CalB. Molecular dynamics simulations at various temperatures were used to identify the common fluctuation sites in CalB, which are considered thermally weak points. The RosettaDesign algorithm was used to redesign the selected residues. The redesigned CalB was simulated to verify both the enhancement of intramolecular interactions and the lowering of the overall root-mean-square deviation (RMSD) values. The A251E mutant designed using this strategy showed a 2.5-fold higher thermostability than wild-type CalB. This strategy could be applied to other industrially relevant enzymes.
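The "common fluctuation sites" step can be illustrated with a per-residue RMSF computation over an MD trajectory: residues whose fluctuation exceeds a cutoff become redesign candidates. A generic sketch on synthetic coordinates (not the authors' pipeline or force field):

```python
import numpy as np

def rmsf(traj):
    """Per-residue RMSF from a trajectory of shape (frames, residues, 3)."""
    mean_pos = traj.mean(axis=0)                   # time-averaged structure
    disp2 = ((traj - mean_pos) ** 2).sum(axis=-1)  # squared displacement per frame
    return np.sqrt(disp2.mean(axis=0))             # RMS over frames

def flexible_sites(traj, cutoff):
    """Indices of residues fluctuating beyond the cutoff (candidate weak points)."""
    return np.nonzero(rmsf(traj) > cutoff)[0]
```

In a real workflow the trajectory would come from an MD engine and the cutoff from comparing runs at several temperatures.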
Optimization analysis of thermal management system for electric vehicle battery pack
NASA Astrophysics Data System (ADS)
Gong, Huiqi; Zheng, Minxin; Jin, Peng; Feng, Dong
2018-04-01
Temperature rise in an electric vehicle battery pack can degrade the power battery system's cycle life, charging capability, power, energy, safety, and reliability. Computational Fluid Dynamics (CFD) simulation and experiments on the charging and discharging process of the battery pack were carried out for the pack's thermal management system under continuous charging. The simulation results and the experimental data were used to verify the validity of the CFD calculation model. In view of the large temperature differences across the battery module in a high-temperature environment, three optimizations of the existing thermal management system were put forward: adjusting the installation position of the fan, optimizing the arrangement of the battery pack, and reducing the fan's opening temperature threshold. The feasibility of these optimizations is demonstrated by simulation and experiment on the optimized thermal management system.
Data mining through simulation.
Lytton, William W; Stewart, Mark
2007-01-01
Data integration is particularly difficult in neuroscience; we must organize vast amounts of data around only a few fragmentary functional hypotheses. It has often been noted that computer simulation, by providing explicit hypotheses for a particular system and bridging across different levels of organization, can provide an organizational focus, which can be leveraged to form substantive hypotheses. Simulations lend meaning to data and can be updated and adapted as further data come in. The use of simulation in this context suggests the need for simulator adjuncts to manage and evaluate data. We have developed a neural query system (NQS) within the NEURON simulator, providing a relational database system, a query function, and basic data-mining tools. NQS is used within the simulation context to manage, verify, and evaluate model parameterizations. More importantly, it is used for data mining of simulation data and comparison with neurophysiology.
Hafnium transistor process design for neural interfacing.
Parent, David W; Basham, Eric J
2009-01-01
A design methodology is presented that uses 1-D process simulations of Metal Insulator Semiconductor (MIS) structures to design the threshold voltage of hafnium oxide based transistors used for neural recording. The methodology comprises 1-D analytical equations for threshold voltage specification and doping profiles, and 1-D MIS Technology Computer Aided Design (TCAD) to design a process that implements a specific threshold voltage, which minimized simulation time. The process was then verified with a 2-D process/electrical TCAD simulation. Hafnium oxide films (HfO) were grown and characterized for dielectric constant and fixed oxide charge at various annealing temperatures, two important design variables in threshold voltage design.
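The 1-D design equation behind such a flow is the standard long-channel MIS expression VT = VFB + 2φF + sqrt(2 q εs NA · 2φF) / Cox, with the high-κ film entering through Cox. A sketch with illustrative values (κ ≈ 20 is a commonly quoted hafnium-oxide value; the doping and thickness in the test are made-up design inputs, not the paper's):

```python
import math

def threshold_voltage(na_cm3, t_ox_nm, eps_ox_r=20.0, v_fb=0.0, T=300.0):
    """1-D long-channel MIS threshold voltage (V) for a p-type Si substrate."""
    q = 1.602176634e-19
    kT = 1.380649e-23 * T
    eps0 = 8.8541878128e-12
    eps_si = 11.7 * eps0
    ni = 1.0e10                                # cm^-3, Si intrinsic density at 300 K (approx.)
    phi_f = (kT / q) * math.log(na_cm3 / ni)   # Fermi potential
    c_ox = eps_ox_r * eps0 / (t_ox_nm * 1e-9)  # oxide capacitance per area, F/m^2
    na_m3 = na_cm3 * 1e6
    q_dep = math.sqrt(2.0 * q * eps_si * na_m3 * 2.0 * phi_f)  # depletion charge
    return v_fb + 2.0 * phi_f + q_dep / c_ox
```

The high-κ dielectric raises Cox for a given physical thickness, shrinking the depletion-charge term — the lever such a process design exploits.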
Towards constructing multi-bit binary adder based on Belousov-Zhabotinsky reaction
NASA Astrophysics Data System (ADS)
Zhang, Guo-Mao; Wong, Ieong; Chou, Meng-Ta; Zhao, Xin
2012-04-01
It has been proposed that the spatial excitable media can perform a wide range of computational operations, from image processing, to path planning, to logical and arithmetic computations. The realizations in the field of chemical logical and arithmetic computations are mainly concerned with single simple logical functions in experiments. In this study, based on Belousov-Zhabotinsky reaction, we performed simulations toward the realization of a more complex operation, the binary adder. Combining with some of the existing functional structures that have been verified experimentally, we designed a planar geometrical binary adder chemical device. Through numerical simulations, we first demonstrated that the device can implement the function of a single-bit full binary adder. Then we show that the binary adder units can be further extended in plane, and coupled together to realize a two-bit, or even multi-bit binary adder. The realization of chemical adders can guide the constructions of other sophisticated arithmetic functions, ultimately leading to the implementation of chemical computer and other intelligent systems.
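The Boolean function the chemical device realizes is the standard full adder; in conventional code the single-bit cell and its multi-bit chaining are a few lines, which is what makes the chemical-wave implementation a meaningful benchmark (generic logic, not the paper's geometrical design):

```python
def full_adder(a, b, cin):
    """Single-bit full adder: returns (sum, carry-out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add(x_bits, y_bits):
    """Add two equal-length little-endian bit lists by chaining adder cells,
    mirroring how the planar chemical units are coupled in sequence."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out
```

Extending the adder in the plane corresponds exactly to appending cells to this chain.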
NASA Astrophysics Data System (ADS)
Saghafian, Amirreza; Pitsch, Heinz
2012-11-01
A compressible flamelet/progress variable approach (CFPV) has been devised for high-speed flows. Temperature is computed from the transported total energy and tabulated species mass fractions and the source term of the progress variable is rescaled with pressure and temperature. The combustion is thus modeled by three additional scalar equations and a chemistry table that is computed in a pre-processing step. Three-dimensional direct numerical simulation (DNS) databases of reacting supersonic turbulent mixing layer with detailed chemistry are analyzed to assess the underlying assumptions of CFPV. Large eddy simulations (LES) of the same configuration using the CFPV method have been performed and compared with the DNS results. The LES computations are based on the presumed subgrid PDFs of mixture fraction and progress variable, beta function and delta function respectively, which are assessed using DNS databases. The flamelet equation budget is also computed to verify the validity of CFPV method for high-speed flows.
Fick, Lambert H.; Merzari, Elia; Hassan, Yassin A.
2017-02-20
Computational analyses of fluid flow through packed pebble bed domains using the Reynolds-averaged Navier-Stokes framework have had limited success in the past. Because of a lack of high-fidelity experimental or computational data, optimization of Reynolds-averaged closure models for these geometries has not been extensively developed. In the present study, direct numerical simulation was employed to develop a high-fidelity database that can be used for optimizing Reynolds-averaged closure models for pebble bed flows. A face-centered cubic domain with periodic boundaries was used. Flow was simulated at a Reynolds number of 9308 and cross-verified by using available quasi-DNS data. During the simulations, low-frequency instability modes were observed that affected the stationary solution. Furthermore, these instabilities were investigated by using the method of proper orthogonal decomposition, and a correlation was found between the time-dependent asymmetry of the averaged velocity profile data and the behavior of the highest energy eigenmodes.
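Proper orthogonal decomposition of snapshot data reduces to an SVD of the mean-subtracted snapshot matrix; the squared singular values rank the modes by energy, which is how "highest energy eigenmodes" are identified. A minimal sketch of the SVD form of the method (data in the test are synthetic, not the DNS database):

```python
import numpy as np

def pod(snapshots):
    """POD of a (n_points, n_snapshots) data matrix.

    Returns spatial modes, temporal coefficients, and each mode's fraction
    of the total fluctuation energy (descending)."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)  # remove mean field
    u, s, vt = np.linalg.svd(fluct, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)
    return u, s[:, None] * vt, energy
```

Inspecting the leading columns of `u` and rows of the coefficient matrix is what reveals low-frequency structures like the instability modes reported above.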
ModeLang: a new approach for experts-friendly viral infections modeling.
Wasik, Szymon; Prejzendanc, Tomasz; Blazewicz, Jacek
2013-01-01
Computational modeling is an important element of systems biology. One of its important applications is modeling complex, dynamical, and biological systems, including viral infections. This type of modeling usually requires close cooperation between biologists and mathematicians. However, such cooperation often faces communication problems because biologists do not have sufficient knowledge to understand mathematical description of the models, and mathematicians do not have sufficient knowledge to define and verify these models. In many areas of systems biology, this problem has already been solved; however, in some of these areas there are still certain problematic aspects. The goal of the presented research was to facilitate this cooperation by designing seminatural formal language for describing viral infection models that will be easy to understand for biologists and easy to use by mathematicians and computer scientists. The ModeLang language was designed in cooperation with biologists and its computer implementation was prepared. Tests proved that it can be successfully used to describe commonly used viral infection models and then to simulate and verify them. As a result, it can make cooperation between biologists and mathematicians modeling viral infections much easier, speeding up computational verification of formulated hypotheses.
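The kind of model such a language needs to express is typically the basic target-cell-limited system dT/dt = -βTV, dI/dt = βTV - δI, dV/dt = pI - cV. A plain numerical sketch of that standard model (forward Euler; every parameter value is invented for illustration, not from ModeLang's examples):

```python
def simulate_infection(T0=1e6, I0=0.0, V0=10.0,
                       beta=2e-7, delta=1.0, p=100.0, c=5.0,
                       t_end=20.0, dt=1e-3):
    """Forward-Euler integration of the basic target-cell-limited model.

    T: uninfected target cells, I: infected cells, V: free virus."""
    T, I, V = T0, I0, V0
    for _ in range(int(t_end / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dT * dt
        I += dI * dt
        V += dV * dt
    return T, I, V
```

A seminatural description language sits one level above this: the biologist names the compartments and interactions, and code like the above is generated and verified automatically.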
NASA Technical Reports Server (NTRS)
Schroeder, Lyle C.; Bailey, M. C.; Mitchell, John L.
1992-01-01
Methods for increasing the electromagnetic (EM) performance of reflectors with rough surfaces were tested and evaluated. First, one quadrant of the 15-meter hoop-column antenna was retrofitted with computer-driven and controlled motors to allow automated adjustment of the reflector surface. The surface errors, measured with metric photogrammetry, were used in a previously verified computer code to calculate control motor adjustments. With this system, a rough antenna surface (rms of approximately 0.180 inch) was corrected in two iterations to approximately the structural surface smoothness limit of 0.060 inch rms. The antenna pattern and gain improved significantly as a result of these surface adjustments. The EM performance was evaluated with a computer program for distorted reflector antennas which had been previously verified with experimental data. Next, the effects of the surface distortions were compensated for in computer simulations by superimposing excitation from an array feed to maximize antenna performance relative to an undistorted reflector. Results showed that a 61-element array could produce EM performance improvements equal to surface adjustments. When both mechanical surface adjustment and feed compensation techniques were applied, the equivalent operating frequency increased from approximately 6 to 18 GHz.
Algorithms and architecture for multiprocessor based circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, J.T.
Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.
Atmospheric cloud physics thermal systems analysis
NASA Technical Reports Server (NTRS)
1977-01-01
Engineering analyses performed on the Atmospheric Cloud Physics (ACPL) Science Simulator expansion chamber and associated thermal control/conditioning system are reported. Analyses were made to develop a verified thermal model and to perform parametric thermal investigations to evaluate systems performance characteristics. Thermal network representations of solid components and the complete fluid conditioning system were solved simultaneously using the Systems Improved Numerical Differencing Analyzer (SINDA) computer program.
On the effects of phase jitter on QPSK lock detection
NASA Technical Reports Server (NTRS)
Mileant, A.; Hinedi, S.
1993-01-01
The performance of a QPSK (quadrature phase-shift keying) lock detector is described, taking into account the degradation due to carrier phase jitter. Such an analysis is necessary for accurate performance prediction purposes in scenarios where both the loop SNR is low and the estimation period is short. The derived formulas are applicable to several QPSK loops and are verified using computer simulations.
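The degradation mechanism can be reproduced in a few lines: a common QPSK lock statistic is built on the fourth power of the complex baseband samples, which strips the modulation but multiplies the phase jitter by four, so jitter directly suppresses the detector mean (a generic illustration, not the paper's exact detector):

```python
import numpy as np

def lock_metric(jitter_std, n=50000, seed=0):
    """|E[s^4]| for unit-energy QPSK symbols with Gaussian phase jitter."""
    rng = np.random.default_rng(seed)
    # QPSK constellation points exp(j*(pi/4 + k*pi/2)); s^4 = -exp(j*4*theta)
    symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
    jitter = rng.normal(0.0, jitter_std, n)
    return np.abs(np.mean((symbols * np.exp(1j * jitter)) ** 4))
```

For Gaussian jitter of standard deviation sigma the metric falls off roughly as exp(-8 sigma^2), which is why low-loop-SNR, short-estimation-period scenarios need the careful analysis the paper provides.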
Gupta, Jasmine; Nunes, Cletus; Vyas, Shyam; Jonnalagadda, Sriramakamal
2011-03-10
The objectives of this study were (i) to develop a computational model based on molecular dynamics technique to predict the miscibility of indomethacin in carriers (polyethylene oxide, glucose, and sucrose) and (ii) to experimentally verify the in silico predictions by characterizing the drug-carrier mixtures using thermoanalytical techniques. Molecular dynamics (MD) simulations were performed using the COMPASS force field, and the cohesive energy density and the solubility parameters were determined for the model compounds. The magnitude of difference in the solubility parameters of drug and carrier is indicative of their miscibility. The MD simulations predicted indomethacin to be miscible with polyethylene oxide and to be borderline miscible with sucrose and immiscible with glucose. The solubility parameter values obtained using the MD simulations were in reasonable agreement with those calculated using group contribution methods. Differential scanning calorimetry showed melting point depression of polyethylene oxide with increasing levels of indomethacin accompanied by peak broadening, confirming miscibility. In contrast, thermal analysis of blends of indomethacin with sucrose and glucose verified general immiscibility. The findings demonstrate that molecular modeling is a powerful technique for determining the solubility parameters and predicting miscibility of pharmaceutical compounds.
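The miscibility screen rests on the Hildebrand solubility parameter delta = sqrt(E_coh / V), with a commonly used heuristic that a drug-carrier difference below roughly 7 MPa^0.5 suggests miscibility. A sketch of that screening logic (the cutoff and the numbers in the test are the common literature heuristic and invented values, not results from this paper):

```python
import math

def solubility_parameter(cohesive_energy_j_mol, molar_volume_cm3_mol):
    """Hildebrand solubility parameter in MPa**0.5 from cohesive energy (J/mol)
    and molar volume (cm^3/mol)."""
    ced = cohesive_energy_j_mol / (molar_volume_cm3_mol * 1e-6)  # J/m^3 = Pa
    return math.sqrt(ced) / 1000.0                               # Pa^0.5 -> MPa^0.5

def likely_miscible(delta_drug, delta_carrier, cutoff=7.0):
    """Common rule of thumb: small solubility-parameter difference => miscible."""
    return abs(delta_drug - delta_carrier) < cutoff
```

In the paper's workflow, the cohesive energy density comes from the MD simulation rather than being supplied by hand.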
Unsteady flow simulations around complex geometries using stationary or rotating unstructured grids
NASA Astrophysics Data System (ADS)
Sezer-Uzol, Nilay
In this research, the computational analysis of three-dimensional, unsteady, separated, vortical flows around complex geometries is studied by using stationary or moving unstructured grids. Two main engineering problems are investigated. The first problem is the unsteady simulation of a ship airwake, where helicopter operations become even more challenging, by using stationary unstructured grids. The second problem is the unsteady simulation of wind turbine rotor flow fields by using moving unstructured grids which are rotating with the whole three-dimensional rigid rotor geometry. The three dimensional, unsteady, parallel, unstructured, finite volume flow solver, PUMA2, is used for the computational fluid dynamics (CFD) simulations considered in this research. The code is modified to have a moving grid capability to perform three-dimensional, time-dependent rotor simulations. An instantaneous log-law wall model for Large Eddy Simulations is also implemented in PUMA2 to investigate the very large Reynolds number flow fields of rotating blades. To verify the code modifications, several sample test cases are also considered. In addition, interdisciplinary studies, which are aiming to provide new tools and insights to the aerospace and wind energy scientific communities, are done during this research by focusing on the coupling of ship airwake CFD simulations with the helicopter flight dynamics and control analysis, the coupling of wind turbine rotor CFD simulations with the aeroacoustic analysis, and the analysis of these time-dependent and large-scale CFD simulations with the help of a computational monitoring, steering and visualization tool, POSSE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Michael K.; Davidson, Megan
As part of Sandia’s nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification and Sandia is using high performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal is dependent upon electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer generated models were used to assess the timing between the impact sensor’s response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance, but to also validate their computational models. The reverse ballistic test conducted at Sandia’s sled test facility sent a rocket sled with a representative target into a stationary B61-12 (NBSA) to characterize the nose crush and functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the model’s ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every single impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia’s HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail, and creating simulations that can provide insight to support evaluation of requirements and performance margins. As computing resources continue to improve, researchers at Sandia are hoping to improve these simulations so they provide increasingly credible analysis of the system response and performance over the full range of conditions.
Provable classically intractable sampling with measurement-based computation in constant time
NASA Astrophysics Data System (ADS)
Sanders, Stephen; Miller, Jacob; Miyake, Akimasa
We present a constant-time measurement-based quantum computation (MQC) protocol to perform a classically intractable sampling problem. We sample from the output probability distribution of a subclass of the instantaneous quantum polynomial time circuits introduced by Bremner, Montanaro and Shepherd. In contrast with the usual circuit model, our MQC implementation includes additional randomness due to byproduct operators associated with the computation. Despite this additional randomness we show that our sampling task cannot be efficiently simulated by a classical computer. We extend previous results to verify the quantum supremacy of our sampling protocol efficiently using only single-qubit Pauli measurements. Center for Quantum Information and Control, Department of Physics and Astronomy, University of New Mexico, Albuquerque, NM 87131, USA.
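The circuits being sampled have the IQP form H^n · D · H^n with D diagonal in the computational basis. For a handful of qubits the output distribution can be computed directly by brute force; the sketch below takes an arbitrary diagonal phase vector (a superset of the IQP phase polynomials, used purely for illustration, and nothing like the scales where supremacy arguments apply):

```python
import numpy as np

def iqp_distribution(phases):
    """Output probabilities of an n-qubit circuit H^n . diag(e^{i*phase}) . H^n
    applied to |0...0>.  phases: length-2^n array of diagonal phase angles."""
    phases = np.asarray(phases, dtype=float)
    n = int(np.log2(len(phases)))
    h = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    hn = h
    for _ in range(n - 1):
        hn = np.kron(hn, h)            # n-fold Hadamard
    state = hn[:, 0]                   # H^n |0...0> = uniform superposition
    state = np.exp(1j * phases) * state
    state = hn @ state                 # hn is real symmetric and self-inverse
    return np.abs(state) ** 2
```

The exponential cost of this direct simulation (the statevector has 2^n entries) is precisely what the hardness argument leverages at large n.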
A dynamic motion simulator for future European docking systems
NASA Technical Reports Server (NTRS)
Brondino, G.; Marchal, PH.; Grimbert, D.; Noirault, P.
1990-01-01
Europe's first confrontation with docking in space will require extensive testing to verify design and performance and to qualify hardware. For this purpose, a Docking Dynamics Test Facility (DDTF) was developed. It allows reproduction on the ground of the same impact loads and relative motion dynamics which would occur in space during docking. It uses a 9 degree of freedom servo-motion system, controlled by a real time computer, which simulates the docking spacecraft in a zero-g environment. The test technique involves an active loop based on six-axis force and torque detection, a mathematical simulation of individual spacecraft dynamics, and a 9 degree of freedom servo-motion system, of which 3 DOFs allow extension of the kinematic range to 5 m. The configuration was checked out by closed loop tests involving spacecraft control models and real sensor hardware. The test facility at present has an extensive configuration that allows evaluation of both proximity control and docking systems. It provides a versatile tool to verify system design, hardware items and performance capabilities in the ongoing HERMES and COLUMBUS programs. The test system is described and its capabilities are summarized.
Baldwin, Mark A; Clary, Chadd; Maletsky, Lorin P; Rullkoetter, Paul J
2009-10-16
Verified computational models represent an efficient method for studying the relationship between articular geometry, soft-tissue constraint, and patellofemoral (PF) mechanics. The current study was performed to evaluate an explicit finite element (FE) modeling approach for predicting PF kinematics in the natural and implanted knee. Experimental three-dimensional kinematic data were collected on four healthy cadaver specimens in their natural state and after total knee replacement in the Kansas knee simulator during a simulated deep knee bend activity. Specimen-specific FE models were created from medical images and CAD implant geometry, and included soft-tissue structures representing medial-lateral PF ligaments and the quadriceps tendon. Measured quadriceps loads and prescribed tibiofemoral kinematics were used to predict dynamic kinematics of an isolated PF joint between 10 degrees and 110 degrees femoral flexion. Model sensitivity analyses were performed to determine the effect of rigid or deformable patellar representations and perturbed PF ligament mechanical properties (pre-tension and stiffness) on model predictions and computational efficiency. Predicted PF kinematics from the deformable analyses showed average root mean square (RMS) differences for the natural and implanted states of less than 3.1 degrees and 1.7 mm for all rotations and translations. Kinematic predictions with rigid bodies increased average RMS values slightly to 3.7 degrees and 1.9 mm with a five-fold decrease in computational time. Two-fold increases and decreases in PF ligament initial strain and linear stiffness were found to most adversely affect kinematic predictions for flexion, internal-external tilt and inferior-superior translation in both natural and implanted states. The verified models could be used to further investigate the effects of component alignment or soft-tissue variability on natural and implant PF mechanics.
NASA Astrophysics Data System (ADS)
Lin, Mingpei; Xu, Ming; Fu, Xiaoyu
2017-05-01
Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on Tesla C2075 of NVIDIA. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of an orbital computation.
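The batch-decomposition idea above can be sketched in plain Python. This is a hedged illustration, not the paper's CUDA implementation: objects here move on straight-line tracks (real propagation would use orbital dynamics), each batch of pair checks stands in for one GPU kernel launch, and all function names and thresholds are invented for the example.

```python
# Hypothetical sketch: batched close-approach screening between debris objects.
# Objects are (position, velocity) tuples on straight-line tracks; pairs whose
# minimum separation over [0, t_max] drops below a threshold are flagged.

def min_separation(p1, v1, p2, v2, t_max):
    """Minimum distance between two linearly moving points over [0, t_max]."""
    dp = [a - b for a, b in zip(p1, p2)]
    dv = [a - b for a, b in zip(v1, v2)]
    dv2 = sum(c * c for c in dv)
    # Time of closest approach, clamped to the time window.
    t = 0.0 if dv2 == 0 else max(0.0, min(t_max, -sum(a * b for a, b in zip(dp, dv)) / dv2))
    d = [a + t * b for a, b in zip(dp, dv)]
    return sum(c * c for c in d) ** 0.5

def screen_in_batches(objects, t_max, threshold, batch=64):
    """Process pair checks batch-by-batch, mimicking the block decomposition
    a GPU kernel would use (each batch maps to one kernel launch)."""
    hits = []
    n = len(objects)
    for start in range(0, n, batch):
        for i in range(start, min(start + batch, n)):
            for j in range(i + 1, n):
                p1, v1 = objects[i]
                p2, v2 = objects[j]
                if min_separation(p1, v1, p2, v2, t_max) < threshold:
                    hits.append((i, j))
    return hits
```

On a GPU, the inner pair loop would be mapped to threads, with data transfer for batch k+1 overlapped against computation for batch k.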
The mathematics of virus shell assembly. Progress report 1995--1996
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berger, B.
1996-08-01
This research focuses on applying computational and mathematical techniques to problems in biology, and more specifically to problems in protein folding. Significant progress has been made in the following areas relating to virus shell assembly: the local rules theory has been further developed; development has begun on a second-generation simulator which provides a more physically realistic model of assembly; collaborative efforts have continued with an experimental biologist to verify and inspire the local rules theory; an investigation has been initiated into the mechanics of virus shell assembly; and laboratory experiments have been conducted on bacteriophage T4 which verify that the previously believed structure for the core may be incorrect.
Nonvolatile “AND,” “OR,” and “NOT” Boolean logic gates based on phase-change memory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y.; Zhong, Y. P.; Deng, Y. F.
2013-12-21
Electronic devices or circuits that can implement both logic and memory functions are regarded as the building blocks for future massively parallel computing beyond the von Neumann architecture. Here we propose phase-change memory (PCM)-based nonvolatile logic gates capable of AND, OR, and NOT Boolean logic operations, verified in SPICE simulations and circuit experiments. The logic operations are performed in parallel, and the results can be stored directly in the states of the logic gates, facilitating the combination of computing and memory in the same circuit. These results are encouraging for ultralow-power and high-speed nonvolatile logic circuit design based on novel memory devices.
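The resistance-state idea behind such gates can be illustrated with a toy model. This is an assumption-laden sketch, not the paper's SPICE circuits: each PCM cell stores a bit as a low (SET) or high (RESET) resistance, and a gate output is inferred by thresholding the read current through cells wired in series (AND) or parallel (OR). All numeric values are invented for illustration.

```python
# Illustrative sketch (not the paper's circuits): PCM cells store logic values
# as resistance states; gate outputs are inferred from the read current.
R_SET, R_RESET = 1e3, 1e6   # low/high resistance states (ohms), assumed values
V_READ = 0.2                # read voltage (V), assumed
I_TH = 1e-5                 # current threshold separating logic 1 from 0 (A)

def r(bit):
    """Map a stored bit to a cell resistance."""
    return R_SET if bit else R_RESET

def and_gate(a, b):
    """Two cells in series: read current is high only if both are SET."""
    return V_READ / (r(a) + r(b)) > I_TH

def or_gate(a, b):
    """Two cells in parallel: a single SET cell suffices for high current."""
    g = 1.0 / r(a) + 1.0 / r(b)
    return V_READ * g > I_TH

def not_gate(a):
    """Complement; in hardware this corresponds to writing the opposite state."""
    return not a
```

Because the output is itself a resistance state, the result persists without power, which is the nonvolatile computing-in-memory property the abstract highlights.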
NASA Astrophysics Data System (ADS)
Xiao, Dan; Li, Xiaowei; Liu, Su-Juan; Wang, Qiong-Hua
2018-03-01
In this paper, a new scheme for multiple-image encryption and display based on computer-generated holography (CGH) and maximum length cellular automata (MLCA) is presented. In this scheme, the computer-generated hologram, which carries the information of the three primitive images, is first generated by a modified Gerchberg-Saxton (GS) iterative algorithm using three different fractional orders in the fractional Fourier domain. The hologram is then encrypted using an MLCA mask. The ciphertext can be decrypted given the fractional orders and the MLCA rules. Numerical simulations and experimental display results verify the validity and feasibility of the proposed scheme.
Fast Decentralized Averaging via Multi-scale Gossip
NASA Astrophysics Data System (ADS)
Tsianos, Konstantinos I.; Rabbat, Michael G.
We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has communication cost of O(n log log n · log(1/ε)) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
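The baseline that Multi-scale Gossip accelerates is plain randomized pairwise gossip, which can be sketched in a few lines. This is a minimal illustration of the consensus primitive, not the paper's hierarchical algorithm: at each step a random linked pair replaces both values with their average, so the sum is preserved and every node converges to the global mean.

```python
import random

# Plain pairwise gossip averaging on a graph given as an edge list.
# Each step averages the values of one randomly chosen linked pair.
def gossip_average(values, edges, steps, seed=0):
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x
```

On an 8-node ring with values 0..7 (mean 3.5), a few thousand steps drive every node to the mean; the hierarchical decomposition in the paper reduces the number of transmissions needed for this convergence.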
Full-color large-scaled computer-generated holograms using RGB color filters.
Tsuchiyama, Yasuhiro; Matsushima, Kyoji
2017-02-06
A technique using RGB color filters is proposed for creating high-quality full-color computer-generated holograms (CGHs). The fringe of these CGHs is composed of more than a billion pixels. The CGHs reconstruct full-parallax three-dimensional color images with a deep sensation of depth caused by natural motion parallax. The simulation technique as well as the principle and challenges of high-quality full-color reconstruction are presented to address the design of filter properties suitable for large-scaled CGHs. Optical reconstructions of actual fabricated full-color CGHs are demonstrated in order to verify the proposed techniques.
A General Simulation Method for Multiple Bodies in Proximate Flight
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
2003-01-01
Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods are verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.
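The contact-detection step can be illustrated for the simplest possible case. This is a hedged sketch under strong assumptions, not the paper's general method: two rigid spheres, with the contact point taken on the line of centers when the surfaces touch or overlap.

```python
# Sketch: contact detection between two spheres, the degenerate case of the
# general inter-body collision detection described in the abstract.
def sphere_contact(c1, r1, c2, r2):
    """Return the contact point if the spheres touch/overlap, else None."""
    d = [b - a for a, b in zip(c1, c2)]
    dist = sum(x * x for x in d) ** 0.5
    if dist > r1 + r2:
        return None                       # surfaces are separated: no contact
    # Contact point lies on the line of centers, a distance r1 from c1.
    return tuple(a + x * (r1 / dist) for a, x in zip(c1, d))
```

A general multi-body method must do this efficiently for arbitrary geometries and many bodies at once, which is where the novel detection scheme in the paper comes in.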
Pilot self-coding applied in optical OFDM systems
NASA Astrophysics Data System (ADS)
Li, Changping; Yi, Ying; Lee, Kyesan
2015-04-01
This paper studies a frequency offset correction technique applicable to optical OFDM systems. Theoretical analysis and computer simulations show that the proposed scheme, named pilot self-coding (PSC), is effective in correcting frequency offset, mitigating the OFDM performance degradation caused by inter-carrier interference and common phase error. The main approach is to assign a pilot subcarrier before the data subcarriers and copy this subcarrier sequence to the symmetric side. The simulation results verify that the proposed PSC is indeed effective against high degrees of frequency offset.
NASA Astrophysics Data System (ADS)
Kim, Do-Bin; Kwon, Dae Woong; Kim, Seunghyun; Lee, Sang-Ho; Park, Byung-Gook
2018-02-01
To obtain high channel boosting potential and reduce program disturbance in channel stacked NAND flash memory with layer selection by multilevel (LSM) operation, a new program scheme using a boosted common source line (CSL) is proposed. The proposed scheme is achieved by applying the proper bias to each layer through its own CSL. Technology computer-aided design (TCAD) simulations are performed to verify the validity of the new method in LSM. The simulations reveal that the program disturbance characteristics are effectively improved by the proposed scheme.
NASA Technical Reports Server (NTRS)
Lightsey, W. D.; Alhorn, D. C.; Polites, M. E.
1992-01-01
An experiment designed to test the feasibility of using rotating unbalanced-mass (RUM) devices for line and raster scanning gimbaled payloads, while expending very little power is described. The experiment is configured for ground-based testing, but the scan concept is applicable to ground-based, balloon-borne, and space-based payloads, as well as free-flying spacecraft. The servos used in scanning are defined; the electronic hardware is specified; and a computer simulation model of the system is described. Simulation results are presented that predict system performance and verify the servo designs.
Particle-based membrane model for mesoscopic simulation of cellular dynamics
NASA Astrophysics Data System (ADS)
Sadeghi, Mohsen; Weikl, Thomas R.; Noé, Frank
2018-01-01
We present a simple and computationally efficient coarse-grained and solvent-free model for simulating lipid bilayer membranes. In order to be used in concert with particle-based reaction-diffusion simulations, the model is purely based on interacting and reacting particles, each representing a coarse patch of a lipid monolayer. Particle interactions include nearest-neighbor bond-stretching and angle-bending and are parameterized so as to reproduce the local membrane mechanics given by the Helfrich energy density over a range of relevant curvatures. In-plane fluidity is implemented with Monte Carlo bond-flipping moves. The physical accuracy of the model is verified by five tests: (i) Power spectrum analysis of equilibrium thermal undulations is used to verify that the particle-based representation correctly captures the dynamics predicted by the continuum model of fluid membranes. (ii) It is verified that the input bending stiffness, against which the potential parameters are optimized, is accurately recovered. (iii) Isothermal area compressibility modulus of the membrane is calculated and is shown to be tunable to reproduce available values for different lipid bilayers, independent of the bending rigidity. (iv) Simulation of two-dimensional shear flow under a gravity force is employed to measure the effective in-plane viscosity of the membrane model and show the possibility of modeling membranes with specified viscosities. (v) Interaction of the bilayer membrane with a spherical nanoparticle is modeled as a test case for large membrane deformations and budding involved in cellular processes such as endocytosis. The results are shown to coincide well with the predicted behavior of continuum models, and the membrane model successfully mimics the expected budding behavior. We expect our model to be of high practical usability for ultra coarse-grained molecular dynamics or particle-based reaction-diffusion simulations of biological systems.
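Verification test (i) rests on the continuum prediction for thermal undulations of a tensionless Helfrich membrane, ⟨|h_q|²⟩ = k_BT/(κ A q⁴). The short sketch below evaluates that formula and checks its q⁻⁴ scaling; the parameter values are generic illustrations (κ ≈ 20 k_BT is typical for lipid bilayers), not the paper's.

```python
# Continuum prediction behind test (i): equilibrium undulation spectrum of a
# tensionless Helfrich membrane, <|h_q|^2> = kT / (kappa * A * q^4).
# All values below are illustrative assumptions, not the paper's parameters.
kT = 4.1e-21        # thermal energy at ~300 K (J)
kappa = 20 * kT     # bending rigidity; ~20 kT is typical for lipid bilayers
A = 1e-14           # projected membrane area (m^2)

def spectrum(q):
    """Predicted undulation power at wavenumber q (1/m)."""
    return kT / (kappa * A * q ** 4)

# q^-4 scaling: doubling q reduces the spectral amplitude 16-fold.
ratio = spectrum(1e8) / spectrum(2e8)
```

In the paper's test, the measured spectrum of the particle model is compared against this curve to confirm that the discretization reproduces continuum membrane mechanics.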
TEM verification of the <111>-type 4-arm multi-junction in [001]-Mo single crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsiung, L
2005-03-14
This work investigates and verifies the formation of the <111>-type 4-arm multi-junction by the dislocation reaction 1/2[111] (b1) + 1/2[1̄11̄] (b2) + 1/2[1̄1̄1] (b3) = 1/2[1̄11] (b4), which was recently discovered through computer simulations conducted by Vasily Bulatov and his colleagues.
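A quick arithmetic check confirms that the quoted reaction conserves the Burgers vector, as any valid dislocation reaction must: the three incoming 1/2<111> vectors sum componentwise to the junction vector.

```python
# Burgers vector conservation for the quoted reaction, components in units
# of a/2 (the common 1/2 prefactor is factored out).
b1 = ( 1,  1,  1)   # 1/2[111]
b2 = (-1,  1, -1)   # 1/2[-1 1 -1]
b3 = (-1, -1,  1)   # 1/2[-1 -1 1]
b4 = (-1,  1,  1)   # 1/2[-1 1 1], the 4th junction arm
total = tuple(x + y + z for x, y, z in zip(b1, b2, b3))
```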
NASA Technical Reports Server (NTRS)
Nguyen, Louis H.; Ramakrishnan, Jayant; Granda, Jose J.
2006-01-01
The assembly and operation of the International Space Station (ISS) require extensive testing and engineering analysis to verify that the Space Station system of systems would work together without any adverse interactions. Since the dynamic behavior of an entire Space Station cannot be tested on earth, math models of the Space Station structures and mechanical systems have to be built and integrated in computer simulations and analysis tools to analyze and predict what will happen in space. The ISS Centrifuge Rotor (CR) is one of many mechanical systems that need to be modeled and analyzed to verify the ISS integrated system performance on-orbit. This study investigates using Bond Graph modeling techniques as quick and simplified ways to generate models of the ISS Centrifuge Rotor. This paper outlines the steps used to generate simple and more complex models of the CR using Bond Graph Computer Aided Modeling Program with Graphical Input (CAMP-G). Comparisons of the Bond Graph CR models with those derived from Euler-Lagrange equations in MATLAB and those developed using multibody dynamic simulation at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) are presented to demonstrate the usefulness of the Bond Graph modeling approach for aeronautics and space applications.
NASA Technical Reports Server (NTRS)
Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je
2010-01-01
The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operations that can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanisms using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. 
Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.
Resin Film Infusion (RFI) Process Modeling for Large Transport Aircraft Wing Structures
NASA Technical Reports Server (NTRS)
Loos, Alfred C.; Caba, Aaron C.; Furrow, Keith W.
2000-01-01
This investigation completed the verification of a three-dimensional resin transfer molding/resin film infusion (RTM/RFI) process simulation model. The model incorporates resin flow through an anisotropic carbon fiber preform, cure kinetics of the resin, and heat transfer within the preform/tool assembly. The computer model can predict the flow front location, resin pressure distribution, and thermal profiles in the modeled part. The formulation for the flow model is given using the finite element/control volume (FE/CV) technique based on Darcy's Law of creeping flow through a porous media. The FE/CV technique is a numerically efficient method for finding the flow front location and the fluid pressure. The heat transfer model is based on the three-dimensional, transient heat conduction equation, including heat generation. Boundary conditions include specified temperature and convection. The code was designed with a modular approach so the flow and/or the thermal module may be turned on or off as desired. Both models are solved sequentially in a quasi-steady state fashion. A mesh refinement study was completed on a one-element thick model to determine the recommended size of elements that would result in a converged model for a typical RFI analysis. Guidelines are established for checking the convergence of a model, and the recommended element sizes are listed. Several experiments were conducted and computer simulations of the experiments were run to verify the simulation model. Isothermal, non-reacting flow in a T-stiffened section was simulated to verify the flow module. Predicted infiltration times were within 12-20% of measured times. The predicted pressures were approximately 50% of the measured pressures. A study was performed to attempt to explain the difference in pressures. Non-isothermal experiments with a reactive resin were modeled to verify the thermal module and the resin model. Two panels were manufactured using the RFI process. 
One was a stepped panel and the other was a panel with two 'T' stiffeners. The difference between the predicted infiltration times and the experimental times was 4% to 23%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Wei; Sevilla, Thomas Alonso; Zuo, Wangda
Historically, multizone models are widely used in building airflow and energy performance simulations due to their fast computing speed. However, multizone models assume that the air in a room is well mixed, consequently limiting their application. In specific rooms where this assumption fails, the use of computational fluid dynamics (CFD) models may be an alternative option. Previous research has mainly focused on coupling CFD models and multizone models to study airflow in large spaces. While significant, most of these analyses did not consider the coupled simulation of the building airflow with the building's Heating, Ventilation, and Air-Conditioning (HVAC) systems. This paper tries to fill the gap by integrating models of HVAC systems with coupled multizone and CFD simulations of airflow, using the Modelica simulation platform. To improve the computational efficiency, we incorporated a simplified CFD model named fast fluid dynamics (FFD). We first introduce the data synchronization strategy and implementation in Modelica. Then, we verify the implementation using two case studies involving an isothermal and a non-isothermal flow by comparing model simulations to experiment data. Afterward, we study another three cases that are deemed more realistic. This is done by attaching a variable air volume (VAV) terminal box and a VAV system to the previous flows to assess the capability of the models in studying the dynamic control of HVAC systems. Finally, we discuss further research needs on coupled simulation using these models.
Foundations for computer simulation of a low pressure oil flooded single screw air compressor
NASA Astrophysics Data System (ADS)
Bein, T. W.
1981-12-01
The necessary logic to construct a computer model to predict the performance of an oil flooded, single screw air compressor is developed. The geometric variables and relationships used to describe the general single screw mechanism are developed. The governing equations to describe the processes are developed from their primary relationships. The assumptions used in the development are also defined and justified. The computer model predicts the internal pressure, temperature, and flowrates through the leakage paths throughout the compression cycle of the single screw compressor. The model uses empirical external values as the basis for the internal predictions. The computer values are compared to the empirical values, and conclusions are drawn based on the results. Recommendations are made for future efforts to improve the computer model and to verify some of the conclusions that are drawn.
Explicitly represented polygon wall boundary model for the explicit MPS method
NASA Astrophysics Data System (ADS)
Mitsume, Naoto; Yoshimura, Shinobu; Murotani, Kohei; Yamada, Tomonori
2015-05-01
This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong-form partial differential equations. The ERP model expresses wall boundaries as polygons, which are represented explicitly without using a distance function. The polygons are formulated so that, for viscous fluids and at low computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, with results obtained by other models, and with experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the ERP model within the E-MPS method.
Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.
2009-01-01
An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.
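The complex-variable verification mentioned above is the classic complex-step derivative trick, which can be shown on a scalar function. This is a generic sketch of the technique, not the paper's flow-solver implementation: perturbing the input by a tiny imaginary step gives a derivative with no subtractive cancellation, so the step can be made extremely small.

```python
import cmath
import math

# Complex-step differentiation: d f/dx ≈ Im(f(x + i*h)) / h.
# Because no difference of nearly equal numbers is taken, h can be tiny
# (here 1e-30) and the result is accurate to machine precision.
def complex_step_derivative(f, x, h=1e-30):
    return f(complex(x, h)).imag / h

f = lambda x: cmath.exp(x) * cmath.sin(x)                 # sample function
exact = math.exp(1.0) * (math.sin(1.0) + math.cos(1.0))   # analytic d/dx at x = 1
approx = complex_step_derivative(f, 1.0)
```

In an adjoint verification, the same idea is applied to the full discretized solver: one design variable gets an imaginary perturbation, and the resulting sensitivity is compared against the adjoint-computed derivative.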
Simplified energy-balance model for pragmatic multi-dimensional device simulation
NASA Astrophysics Data System (ADS)
Chang, Duckhyun; Fossum, Jerry G.
1997-11-01
To pragmatically account for non-local carrier heating and hot-carrier effects such as velocity overshoot and impact ionization in multi-dimensional numerical device simulation, a new simplified energy-balance (SEB) model is developed and implemented in FLOODS[16] as a pragmatic option. In the SEB model, the energy-relaxation length is estimated from a pre-process drift-diffusion simulation using the carrier-velocity distribution predicted throughout the device domain, and is used without change in a subsequent simpler hydrodynamic (SHD) simulation. The new SEB model was verified by comparison of two-dimensional SHD and full HD DC simulations of a submicron MOSFET. The SHD simulations yield detailed distributions of carrier temperature, carrier velocity, and impact-ionization rate, which agree well with the full HD simulation results obtained with FLOODS. The most noteworthy feature of the new SEB/SHD model is its computational efficiency, which results from reduced Newton iteration counts caused by the enhanced linearity. Relative to full HD, SHD simulation times can be shorter by as much as an order of magnitude since larger voltage steps for DC sweeps and larger time steps for transient simulations can be used. The improved computational efficiency can enable pragmatic three-dimensional SHD device simulation as well, for which the SEB implementation would be straightforward as it is in FLOODS or any robust HD simulator.
A flow-simulation model of the tidal Potomac River
Schaffranek, Raymond W.
1987-01-01
A one-dimensional model capable of simulating flow in a network of interconnected channels has been applied to the tidal Potomac River, including its major tributaries and embayments between Washington, D.C., and Indian Head, Md. The model can be used to compute water-surface elevations and flow discharges at any of 66 predetermined locations or at any alternative river cross sections definable within the network of channels. In addition, the model can be used to provide tidal-interchange flow volumes and to evaluate tidal excursions and the flushing properties of the riverine system. Comparisons of model-computed results with measured water-surface elevations and discharges demonstrate the validity and accuracy of the model. Tidal-cycle flow volumes computed by the calibrated model have been verified to be within an accuracy of ±10 percent. Quantitative characteristics of the hydrodynamics of the tidal river are identified and discussed. The comprehensive flow data provided by the model can be used to better understand the geochemical, biological, and other processes affecting the river's water quality.
Computer-aided auscultation learning system for nursing technique instruction.
Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih
2008-01-01
Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a mannequin equipped with a sound simulator is used to teach auscultation techniques to groups via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. Advances in electronic and digital signal processing technologies facilitate simulating this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. The system provides teachers with signal recording and processing of lung sounds and immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated to verify the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that the auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.
Computational Modeling of Meteor-Generated Ground Pressure Signatures
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.; Brown, Peter G.
2017-01-01
We present a thorough validation of a computational approach to predict infrasonic signatures of centimeter-sized meteoroids. We assume that the energy deposition along the meteor trail is dominated by atmospheric drag and simulate the steady, inviscid flow of air in thermochemical equilibrium to compute the meteoroid's near-body pressure signature. This signature is then propagated through a stratified and windy atmosphere to the ground using a methodology adapted from aircraft sonic-boom analysis. An assessment of the numerical accuracy of the near field and the far field solver is presented. The results show that when the source of the signature is the cylindrical Mach-cone, the simulations closely match the observations. The prediction of the shock rise-time, the zero-peak amplitude of the waveform, and the duration of the positive pressure phase are consistently within 10% of the measurements. Uncertainty in the shape of the meteoroid results in a poorer prediction of the trailing part of the waveform. Overall, our results independently verify energy deposition estimates deduced from optical observations.
Computational Models of the Cardiovascular System and Its Response to Microgravity
NASA Technical Reports Server (NTRS)
Kamm, Roger D.
1999-01-01
Computational models of the cardiovascular system are powerful adjuncts to ground-based and in-flight experiments. We will provide NSBRI with a model capable of simulating the short-term effects of gravity on cardiovascular function. The model from this project will: (1) provide a rational framework which quantitatively defines interactions among complex cardiovascular parameters and which supports the critical interpretation of experimental results and testing of hypotheses. (2) permit predictions of the impact of specific countermeasures in the context of various hypothetical cardiovascular abnormalities induced by microgravity. Major progress has been made during the first 18 months of the program: (1) We have developed an operational first-order computer model capable of simulating the cardiovascular response to orthostatic stress. The model consists of a lumped parameter hemodynamic model and a complete reflex control system. The latter includes cardiopulmonary and carotid sinus reflex limbs and interactions between the two. (2) We have modeled the physiologic stress of tilt table experiments and lower body negative pressure procedures (LBNP). We have verified our model's predictions by comparing them with experimental findings from the literature. (3) We have established collaborative efforts with leading investigators interested in experimental studies of orthostatic intolerance, cardiovascular control, and physiologic responses to space flight. (4) We have established a standardized method of transferring data to our laboratory from the ongoing NSBRI bedrest studies. We use this data to estimate input parameters to our model and compare our model predictions to actual data to further verify our model. (5) We are in the process of systematically simulating current hypotheses concerning the mechanism underlying orthostatic intolerance by matching our simulations to stand test data from astronauts pre- and post-flight. 
(6) We are in the process of developing a JAVA version of the simulator which will be distributed amongst the cardiovascular team members. Future work on this project involves modifications of the model to represent a rodent (rat) model, further evaluation of the bedrest astronaut and animal data, and systematic investigation of specific countermeasures.
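The flavor of such a lumped-parameter hemodynamic model can be conveyed with the simplest example. This is an illustrative sketch only, far simpler than the project's model with reflex control: a two-element Windkessel, dP/dt = (Q_in − P/R)/C, integrated with forward Euler. The parameter values are generic assumptions, not the project's.

```python
# Two-element Windkessel: arterial compliance C charged by inflow Q and
# drained through peripheral resistance R.  dP/dt = (Q - P/R) / C.
# All values are illustrative, not taken from the project's model.
R = 1.0    # peripheral resistance (mmHg·s/mL)
C = 1.5    # arterial compliance (mL/mmHg)
Q = 90.0   # constant inflow (mL/s)

def simulate(p0=0.0, dt=0.001, t_end=30.0):
    """Forward-Euler integration of the pressure ODE from p0 to t_end."""
    p = p0
    for _ in range(int(t_end / dt)):
        p += dt * (Q - p / R) / C
    return p

steady = simulate()   # with constant inflow, pressure settles at Q * R
```

With constant inflow the steady-state pressure is Q·R = 90 mmHg, approached with time constant R·C = 1.5 s; pulsatile inflow and baroreflex feedback are what the full model layers on top of this core.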
NASA Astrophysics Data System (ADS)
Clarke, Peter; Varghese, Philip; Goldstein, David
2018-01-01
A discrete velocity method is developed for gas mixtures of diatomic molecules with both rotational and vibrational energy states. A full quantized model is described, and rotation-translation and vibration-translation energy exchanges are simulated using a Larsen-Borgnakke exchange model. Elastic and inelastic molecular interactions are modeled during every simulated collision to help produce smooth internal energy distributions. The method is verified by comparing simulations of homogeneous relaxation by our discrete velocity method to numerical solutions of the Jeans and Landau-Teller equations, and to direct simulation Monte Carlo. We compute the structure of a 1D shock using this method, and determine how the rotational energy distribution varies with spatial location in the shock and with position in velocity space.
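The homogeneous-relaxation verification compares simulated internal energy against the Landau-Teller equation, dE/dt = (E_eq − E)/τ, whose solution is an exponential approach to equilibrium. The sketch below integrates that ODE and checks it against the analytic solution; the constants are illustrative, not the paper's.

```python
import math

# Landau-Teller relaxation of vibrational energy toward equilibrium:
#   dE/dt = (E_eq - E) / tau,  with solution
#   E(t) = E_eq + (E0 - E_eq) * exp(-t / tau).
# Values are illustrative (arbitrary units), not the paper's.
E_eq, E0, tau = 1.0, 0.2, 2.0

def euler_relax(dt=1e-4, t_end=5.0):
    """Forward-Euler integration of the Landau-Teller equation."""
    e = E0
    for _ in range(int(t_end / dt)):
        e += dt * (E_eq - e) / tau
    return e

numeric = euler_relax()
analytic = E_eq + (E0 - E_eq) * math.exp(-5.0 / tau)
```

In the paper, the corresponding comparison is between the discrete velocity method's relaxation curve and this analytic benchmark (and likewise the Jeans equation for rotation).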
Three-dimensional digital-computer model of the Ferron sandstone aquifer near Emery, Utah
Morrissey, Daniel J.; Lines, Gregory C.; Bartholoma, Scott D.
1980-01-01
A three-dimensional finite-difference computer model of the Ferron sandstone aquifer was used to simulate groundwater flow in the Emery coal field in east-central Utah. The model also was used to predict the effects of proposed surface mining and the resulting mine dewatering on potentiometric surfaces of the aquifer. The model was calibrated in a steady-state simulation using water levels and manmade discharges from the aquifer that were observed during 1979. Too few data were available to verify the calibrated model in a transient-state simulation with historical aquifer response to manmade discharges. Predictions made with the model are considered to be semiquantitative. Discharge from the proposed surface mine was predicted to average 0.3 cubic foot per second through 15 years of operation. Drawdowns of 5 feet in the potentiometric surface of the aquifer were predicted to extend as much as 3 miles from the proposed mine after 15 years of operation. (USGS)
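A steady-state finite-difference groundwater calculation of the kind described can be sketched in miniature (this is a generic illustration, not the USGS Ferron model): hydraulic head in a uniform confined aquifer obeys Laplace's equation, solved here by Gauss-Seidel iteration with fixed heads on the left/right boundaries and no-flow boundaries on top/bottom. Grid size and head values are assumed.

```python
import numpy as np

# Illustrative 2D steady-state head solution (assumed geometry and heads).
nx, ny = 20, 20
h = np.full((ny, nx), 95.0)
h[:, 0] = 100.0   # fixed head, e.g. recharge boundary (feet)
h[:, -1] = 90.0   # fixed head, e.g. discharge boundary (feet)

for _ in range(3000):
    h[0, 1:-1] = h[1, 1:-1]    # no-flow condition at the top row
    h[-1, 1:-1] = h[-2, 1:-1]  # no-flow condition at the bottom row
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            # Five-point stencil: head is the average of its four neighbours.
            h[j, i] = 0.25 * (h[j, i-1] + h[j, i+1] + h[j-1, i] + h[j+1, i])
```

For this configuration the converged solution is a linear head gradient between the two fixed boundaries; mine dewatering would enter as an additional sink term at the mined cells.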
Comparison of Computed and Measured Vortex Evolution for a UH-60A Rotor in Forward Flight
NASA Technical Reports Server (NTRS)
Ahmad, Jasim Uddin; Yamauchi, Gloria K.; Kao, David L.
2013-01-01
A Computational Fluid Dynamics (CFD) simulation using the Navier-Stokes equations was performed to determine the evolutionary and dynamical characteristics of the vortex flowfield for a highly flexible aeroelastic UH-60A rotor in forward flight. The experimental wake data were acquired using Particle Image Velocimetry (PIV) during a test of the fullscale UH-60A rotor in the National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel. The PIV measurements were made in a stationary cross-flow plane at 90 deg rotor azimuth. The CFD simulation was performed using the OVERFLOW CFD solver loosely coupled with the rotorcraft comprehensive code CAMRAD II. Characteristics of vortices captured in the PIV plane from different blades are compared with CFD calculations. The blade airloads were calculated using two different turbulence models. A limited spatial, temporal, and CFD/comprehensive-code coupling sensitivity analysis was performed in order to verify the unsteady helicopter simulations with a moving rotor grid system.
Simulation and Optimization of an Airfoil with Leading Edge Slat
NASA Astrophysics Data System (ADS)
Schramm, Matthias; Stoevesandt, Bernhard; Peinke, Joachim
2016-09-01
A gradient-based optimization is used to improve the shape of a leading edge slat upstream of a DU 91-W2-250 airfoil. The simulations are performed by solving the Reynolds-Averaged Navier-Stokes (RANS) equations using the open source CFD code OpenFOAM. Gradients are computed via the adjoint approach, which can handle many design parameters while keeping the computational costs low. The implementation is verified by comparing the gradients from the adjoint method with gradients obtained by finite differences for a NACA 0012 airfoil. The simulations of the leading edge slat are validated against measurements from the acoustic wind tunnel of Oldenburg University at a Reynolds number of Re = 6 × 10^5. The shape of the slat is optimized using the adjoint approach, resulting in a drag reduction of 2%. Although the optimization is done for Re = 6 × 10^5, the improvements also hold at a higher Reynolds number of Re = 7.9 × 10^6, which is more realistic for modern wind turbines.
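The verification step described above, checking adjoint gradients against finite differences, can be sketched on a toy objective with a known analytic gradient (standing in for the adjoint-computed one); the function and values are hypothetical.

```python
import numpy as np

# Gradient-verification sketch: central differences should agree with the
# analytic (adjoint-style) gradient to within discretization and roundoff error.
def J(x):
    return np.sum(x**2) + np.prod(np.sin(x))

def grad_J(x):
    # Analytic gradient of J (stands in for the adjoint-computed gradient):
    # dJ/dx_k = 2 x_k + cos(x_k) * prod_{j != k} sin(x_j)
    g = 2.0 * x
    for k in range(len(x)):
        g[k] += np.prod(np.sin(np.delete(x, k))) * np.cos(x[k])
    return g

x = np.array([0.3, -0.7, 1.1])
eps = 1e-5
fd = np.zeros_like(x)
for k in range(len(x)):
    e = np.zeros_like(x); e[k] = eps
    fd[k] = (J(x + e) - J(x - e)) / (2.0 * eps)  # central difference

max_err = np.max(np.abs(fd - grad_J(x)))
```

In the adjoint setting the payoff is that `grad_J` costs one extra (adjoint) solve regardless of the number of design parameters, whereas the finite-difference loop costs two solves per parameter.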
NASA Astrophysics Data System (ADS)
Maślak, Mariusz; Pazdanowski, Michał; Woźniczka, Piotr
2018-01-01
Validation of the fire resistance of the same steel frame bearing structure is performed here using three different numerical models: a bar model prepared in the SAFIR environment, and two 3D models, one developed in Autodesk Simulation Mechanical (ASM) and an alternative one developed in the Abaqus code. The results of the computer simulations are compared with experimental results obtained previously in a laboratory fire test on a structure with the same characteristics, subjected to the same heating regimen. Comparison of the experimental and numerically determined displacement evolution paths for selected nodes of the frame during the simulated fire exposure is the basic criterion used to evaluate the validity of the numerical results. The experimental and numerically determined estimates of the critical temperature of the frame, related to the limit state of bearing capacity in fire, have been verified as well.
Extension of a coarse grained particle method to simulate heat transfer in fluidized beds
Lu, Liqiang; Morris, Aaron; Li, Tingwen; ...
2017-04-18
The heat transfer in a gas-solids fluidized bed is simulated with the computational fluid dynamics-discrete element method (CFD-DEM) and the coarse grained particle method (CGPM). In CGPM, fewer numerical particles and collisions are tracked by lumping several real particles into a computational parcel. Here, the assumption is that the real particles inside a coarse grained particle (CGP) are of the same species and share identical physical properties, including density, diameter, and temperature. The parcel-fluid convection term in CGPM is calculated using the same method as in DEM. For all other heat transfer mechanisms, we derive in this study mathematical expressions that relate the new heat transfer terms for CGPM to those traditionally derived in DEM. This newly derived CGPM model is verified and validated by comparing its results with CFD-DEM simulation results and experimental data. The numerical results compare well with experimental data for both hydrodynamics and temperature profiles. Finally, the proposed CGPM model can be used for fast and accurate simulations of heat transfer in large-scale gas-solids fluidized beds.
Effect of bed characters on the direct synthesis of dimethyldichlorosilane in fluidized bed reactor.
Zhang, Pan; Duan, Ji H; Chen, Guang H; Wang, Wei W
2015-03-06
This paper presents a numerical investigation of the effects of general bed characteristics, such as superficial gas velocity, bed temperature, bed height, and particle size, on the direct synthesis in a 3D fluidized bed reactor. A 3D model for the gas flow, heat transfer, and mass transfer was coupled to a direct synthesis reaction mechanism verified in the literature. The model was verified by comparing the simulated reaction rate and dimethyldichlorosilane (M2) selectivity with experimental data in the open literature and real production data. Computed results indicate that superficial gas velocity, bed temperature, bed height, and particle size have a vital effect on the reaction rates and/or M2 selectivity.
Kosmidis, Kosmas; Argyrakis, Panos; Macheras, Panos
2003-07-01
To verify the Higuchi law and study drug release from cylindrical and spherical matrices by means of Monte Carlo computer simulation. A one-dimensional matrix, based on the theoretical assumptions of the derivation of the Higuchi law, was simulated and its time evolution was monitored. Cylindrical and spherical three-dimensional lattices were simulated, with sites at the boundary of the lattice denoted as leak sites. Particles were allowed to move inside the lattice using the random walk model. Excluded volume interactions between the particles were assumed. We monitored the system's time evolution for different lattice sizes and different initial particle concentrations. The Higuchi law was verified using the Monte Carlo technique in a one-dimensional lattice. It was found that Fickian drug release from cylindrical matrices can be approximated nicely with the Weibull function. A simple linear relation between the Weibull function parameters and the specific surface of the system was found. Drug release from a matrix, as a result of a diffusion process assuming excluded volume interactions between the drug molecules, can be described using a Weibull function. This model, although approximate and semiempirical, has the benefit of providing a simple physical connection between the model parameters and the system geometry, which was missing from other semiempirical models.
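The one-dimensional setup described can be sketched directly: particles random-walk on a lattice with excluded volume, the left boundary acts as the leak site, and the cumulative release is recorded. Lattice size, step count, and seed are illustrative choices, not the authors' exact simulation.

```python
import random

# 1D Monte Carlo sketch of the Higuchi setup (illustrative parameters):
# fully loaded lattice, leak site at the left boundary, excluded volume.
random.seed(1)
L = 100                    # lattice sites
occupied = set(range(L))   # initially fully loaded matrix
released = 0
release_curve = []

for step in range(2000):
    for site in list(occupied):
        move = site + random.choice((-1, 1))
        if move < 0:                                  # left boundary leaks
            occupied.discard(site)
            released += 1
        elif 0 <= move < L and move not in occupied:  # excluded volume
            occupied.discard(site)
            occupied.add(move)
    release_curve.append(released)
```

At early times the cumulative release grows roughly as the square root of time, which is the Higuchi behavior the paper verifies before fitting the 3D results with a Weibull function.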
Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit
NASA Astrophysics Data System (ADS)
Vittaldev, Vivek; Russell, Ryan P.
2017-09-01
Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge-Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
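The core MC estimator can be sketched in serial form: sample relative positions at closest approach from a Gaussian uncertainty and count samples inside the combined hard-body radius. The miss vector, covariance, and radius below are illustrative values, not data from the paper, and the GPU parallelization is omitted.

```python
import numpy as np

# Serial Monte Carlo collision-probability sketch (illustrative values).
rng = np.random.default_rng(0)
n = 200_000
mean_miss = np.array([120.0, 0.0, 0.0])   # nominal miss vector at TCA, metres
cov = np.diag([80.0, 60.0, 40.0]) ** 2    # combined position covariance, m^2
combined_radius = 50.0                     # sum of the two collision radii, m

samples = rng.multivariate_normal(mean_miss, cov, size=n)
miss_distances = np.linalg.norm(samples, axis=1)
p_collision = np.count_nonzero(miss_distances < combined_radius) / n
```

Because each sample is independent, this loop maps naturally onto GPU threads, which is the source of the two-orders-of-magnitude speedup reported.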
Projection matrix acquisition for cone-beam computed tomography iterative reconstruction
NASA Astrophysics Data System (ADS)
Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Shi, Wenlong; Zhang, Caixin; Gao, Zongzhao
2017-02-01
The projection matrix is an essential and time-consuming part of computed tomography (CT) iterative reconstruction. In this article a novel calculation algorithm for the three-dimensional (3D) projection matrix is proposed to quickly acquire the matrix for cone-beam CT (CBCT). The CT volume to be reconstructed is treated as three orthogonal sets of equally spaced, parallel planes rather than as individual voxels. After the intersections of the rays with the voxel surfaces are computed, the intersection coordinates are compared with the voxel vertices to obtain the indices of the voxels each ray traverses. Instead of tracking the ray slope through each voxel, the algorithm only needs to compare the positions of two points. Finally, computer simulation is used to verify the effectiveness of the algorithm.
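The plane-intersection idea can be sketched with a Siddon-style traversal: intersect a ray with the three sets of parallel planes, sort the parametric crossing points, and read off the traversed voxel index from each segment midpoint. The function name and grid parameters are hypothetical, and the details are illustrative rather than the authors' exact algorithm.

```python
import numpy as np

# Siddon-style sketch: recover the voxels a ray traverses from its crossings
# with three orthogonal sets of equally spaced parallel planes.
def traversed_voxels(p0, p1, n=(8, 8, 8), spacing=1.0):
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    alphas = [0.0, 1.0]
    for axis in range(3):
        if abs(d[axis]) > 1e-12:
            # Parametric crossings with the plane set perpendicular to `axis`.
            planes = np.arange(n[axis] + 1) * spacing
            alphas.extend((planes - p0[axis]) / d[axis])
    alphas = np.unique(np.clip(alphas, 0.0, 1.0))  # sorted, deduplicated
    voxels = []
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d             # midpoint of each segment
        idx = np.floor(mid / spacing).astype(int)
        if np.all(idx >= 0) and np.all(idx < n):
            voxels.append(tuple(idx))
    return voxels

path = traversed_voxels((0.5, 0.5, 0.5), (7.5, 7.5, 0.5))
```

Each segment length, paired with its voxel index, would give one nonzero entry of the projection matrix row for that ray.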
Maestro: an orchestration framework for large-scale WSN simulations.
Riliskis, Laurynas; Osipov, Evgeny
2014-03-18
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
NASA Astrophysics Data System (ADS)
Zagorska, A.; Bliznakova, K.; Buchakliev, Z.
2015-09-01
In 2012, the International Commission on Radiological Protection recommended a reduction of the occupational dose limits for the eye lens. Recent studies have shown that these limits can be reached in interventional rooms, especially when protective equipment is not used. The aim of this study was to calculate the scattered energy spectra distribution at the level of the operator's head. For this purpose, an in-house developed Monte Carlo-based computer application was used to design computational phantoms (patient and operator) and the acquisition geometry, as well as to simulate photon transport through the designed system. The initial spectra for a 70 kV tube voltage and 8 different filtrations were calculated according to IPEM Report 78. An experimental study was carried out to verify the simulation results. The calculated scattered radiation distributions were compared to the initial spectra incident on the patient. Results showed no large difference between the effective energies of the scattered spectra registered in front of the operator's head obtained from simulations of all 8 incident spectra. The results of the experimental study also agreed well with the simulations.
SPLASH program for three dimensional fluid dynamics with free surface boundaries
NASA Astrophysics Data System (ADS)
Yamaguchi, A.
1996-05-01
This paper describes a three dimensional computer program, SPLASH, that solves the Navier-Stokes equations based on the Arbitrary Lagrangian Eulerian (ALE) finite element method. SPLASH has been developed for application to fluid dynamics problems involving moving boundaries in a liquid metal cooled Fast Breeder Reactor (FBR). To apply the SPLASH code to free surface behavior analysis, a capillary model using a cubic spline function has been developed. Several sample problems, e.g., free surface oscillation, vortex shedding development, and capillary tube phenomena, are solved to verify the computer program. In these analyses, the numerical results are in good agreement with theoretical values or experimental observations. The SPLASH code has also been applied to the analysis of a free surface sloshing experiment coupled with forced circulation flow in a rectangular tank, a simplified representation of the flow field in an FBR reactor vessel. The computational simulation predicts the general behavior of the internal fluid flow and the free surface well. The analytical capability of the SPLASH code has been verified in this study, and application to more practical problems such as FBR design and safety analysis is under way.
Finite element analyses of a linear-accelerator electron gun
NASA Astrophysics Data System (ADS)
Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.
2014-02-01
Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator, electron gun, were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun is operating continuously since commissioning without any thermal induced failures for the BEPCII linear accelerator.
Boeing's Dart and Starliner Parachute System Test
2018-02-22
Boeing conducted the first in a series of reliability tests of its CST-100 Starliner flight drogue and main parachute system by releasing a long, dart-shaped test vehicle from a C-17 aircraft over Yuma, Arizona. Two more tests are planned using the dart module, as well as three similar reliability tests using a high-fidelity capsule simulator designed to match the CST-100 Starliner capsule's exact shape and mass. In both the dart and capsule simulator tests, the test spacecraft are released at various altitudes to test the parachute system at different deployment speeds, aerodynamic loads, and/or weight demands. Data collected from each test are fed into computer models to more accurately predict parachute performance and to verify consistency from test to test.
NASA Astrophysics Data System (ADS)
Yongjie, Ding; Wuji, Peng; Liqiu, Wei; Guoshun, Sun; Hong, Li; Daren, Yu
2016-11-01
A type of Hall thruster without wall losses is designed by adding two permanent magnet rings to the magnetic circuit. The maximum strength of the magnetic field is set outside the channel. Discharge without wall losses is achieved by pushing the magnetic field downstream and adjusting the channel accordingly. The feasibility of the Hall thruster without wall losses is verified via numerical simulation. The simulation results show that the ionization region is located in the discharge channel while the acceleration region is outside the channel, which decreases the energy and flux of the ions and electrons impinging on the wall. The power deposition on the channel walls can be reduced by a factor of approximately 30.
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Liou, Meng-Sing; Povinelli, Louis A.; Arnone, Andrea
1993-01-01
This paper reports the results of numerical simulations of steady, laminar flow over a backward-facing step. The governing equations used in the simulations are the full 'compressible' Navier-Stokes equations, solutions to which were computed by using a cell-centered, finite volume discretization. The convection terms of the governing equations were discretized by using the Advection Upwind Splitting Method (AUSM), whereas the diffusion terms were discretized using central differencing formulas. The validity and accuracy of the numerical solutions were verified by comparing the results to existing experimental data for flow at identical Reynolds numbers in the same back step geometry. The paper focuses attention on the details of the flow field near the side wall of the geometry.
NASA Technical Reports Server (NTRS)
Abedin, M. N.; Prabhu, D. R.; Winfree, W. P.; Johnston, P. H.
1992-01-01
The effect on the system acoustic response of variations in the adhesive thickness, coupling thickness, and paint thickness is considered. Both simulations and experimental measurements are used to characterize and classify A-scans from test regions, and to study the effects of various parameters such as paint thickness and epoxy thickness on the variations in the reflected signals. A 1D model of sound propagation in multilayered structures is used to verify the validity of the measured signals, and is also used to computationally generate signals for a class of test locations with gradually varying parameters. This approach exploits the ability of numerical simulations to provide a good understanding of the ultrasonic pulses reflected at disbonds.
Simulations of Low Power DIII-D Helicon Antenna Coupling
NASA Astrophysics Data System (ADS)
Smithe, David; Jenkins, Thomas
2017-10-01
We present an overview and initial progress for a new project to model coupling of the DIII-D Helicon Antenna. We lay the necessary computational groundwork for the modeling of both low-power and high power helicon antenna operation, by constructing numerical representations for both the antenna hardware and the DIII-D plasma. CAD files containing the detailed geometry of the low power antenna hardware are imported into the VSim software's FDTD plasma model. The plasma can be represented numerically by importing EQDSK or EFIT files. In addition, approximate analytic forms for the ensuing profiles and fields are constructed to facilitate parameter scans in the various regimes of anticipated antenna operation. To verify the accuracy of the numerical plasma and antenna representations, we will then run baseline simulations of low-power antenna operation, and verify that the predictions for loading, linear coupling, and mode partitioning (i.e. into helicon and slow modes) are consistent with the measurements from the low power helicon antenna experimental campaign, as well as with other independent models. Progress on these baseline simulations will be presented, and any inconsistencies and issues that arise during this process will be identified. Support provided by DOE Grant DE-SC0017843.
Lystrom, David J.
1972-01-01
Various methods of verifying real-time streamflow data are outlined in part II. Relatively large errors (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller errors can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-time data users can choose a suitable level of verification.
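A basic verification routine of the kind outlined can be sketched as follows: compare each reported discharge against an expected (e.g. simulated) value, flag relative errors above the 20-30 percent band noted above, and substitute the simulated value for flagged or missing readings. The function name, threshold, and data are illustrative assumptions.

```python
# Sketch of a basic statistical-comparison verification routine
# (hypothetical helper; threshold and data are illustrative).
def verify_discharge(observed, simulated, threshold=0.25):
    cleaned, flags = [], []
    for obs, sim in zip(observed, simulated):
        if obs is None:                          # missing reading
            cleaned.append(sim); flags.append("missing")
        elif abs(obs - sim) / sim > threshold:   # relative error too large
            cleaned.append(sim); flags.append("suspect")
        else:
            cleaned.append(obs); flags.append("ok")
    return cleaned, flags

obs = [102.0, None, 55.0, 98.0]   # reported discharges, cfs
sim = [100.0, 97.0, 95.0, 99.0]   # simulated discharges, cfs
cleaned, flags = verify_discharge(obs, sim)
```

Smaller errors, as the abstract notes, would still slip through such a screen and require discharge measurements and field observations to detect.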
Towards practical multiscale approach for analysis of reinforced concrete structures
NASA Astrophysics Data System (ADS)
Moyeda, Arturo; Fish, Jacob
2017-12-01
We present a novel multiscale approach for analysis of reinforced concrete structural elements that overcomes two major hurdles in utilization of multiscale technologies in practice: (1) coupling between material and structural scales due to consideration of large representative volume elements (RVE), and (2) computational complexity of solving complex nonlinear multiscale problems. The former is accomplished using a variant of computational continua framework that accounts for sizeable reinforced concrete RVEs by adjusting the location of quadrature points. The latter is accomplished by means of reduced order homogenization customized for structural elements. The proposed multiscale approach has been verified against direct numerical simulations and validated against experimental results.
Image sensor for testing refractive error of eyes
NASA Astrophysics Data System (ADS)
Li, Xiangning; Chen, Jiabi; Xu, Longyun
2000-05-01
It is difficult to detect ametropia and anisometropia in children. An image sensor for testing the refractive error of the eye does not require the child's cooperation and can be used for general surveys of ametropia and anisometropia in children. In our study, photographs are recorded by a CCD element in digital form, which can be processed directly by a computer. In order to process the image accurately by digital techniques, a formula accounting for the effects of an extended light source and the size of the lens aperture has been derived, which is more reliable in practice. A computer simulation of the image sensing is performed to verify the accuracy of the results.
Strictly stable high order difference approximations for computational aeroacoustics
NASA Astrophysics Data System (ADS)
Müller, Bernhard; Johansson, Stefan
2005-09-01
High order finite difference approximations with improved accuracy and stability properties have been developed for computational aeroacoustics (CAA). One of our new difference operators corresponds to Tam and Webb's DRP scheme in the interior, but is modified near the boundaries to be strictly stable. A unified formulation of the nonlinear and linearized Euler equations is used, which can be extended to the Navier-Stokes equations. The approach has been verified for 1D, 2D and axisymmetric test problems. We have simulated the sound propagation from a rocket launch before lift-off. To cite this article: B. Müller, S. Johansson, C. R. Mecanique 333 (2005).
Computation of shear viscosity of colloidal suspensions by SRD-MD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laganapan, A. M. K.; Videcoq, A., E-mail: arnaud.videcoq@unilim.fr; Bienia, M.
2015-04-14
The behaviour of sheared colloidal suspensions with full hydrodynamic interactions (HIs) is numerically studied. To this end, we use the hybrid stochastic rotation dynamics-molecular dynamics (SRD-MD) method. The shear viscosity of colloidal suspensions is computed for different volume fractions, both for dilute and concentrated cases. We verify that HIs help in the collisions and the streaming of colloidal particles, thereby increasing the overall shear viscosity of the suspension. Our results show a good agreement with known experimental, theoretical, and numerical studies. This work demonstrates the ability of SRD-MD to successfully simulate transport coefficients that require correct modelling of HIs.
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
An analytical technique was developed to predict the behavior of a rotor system subjected to sudden unbalance. The technique is implemented in the Turbine Engine Transient Rotor Analysis (TETRA) computer program using the component element method. The analysis was particularly aimed toward blade-loss phenomena in gas turbine engines. A dual-rotor, casing, and pylon structure can be modeled by the computer program. Blade tip rubs, Coriolis forces, and mechanical clearances are included. The analytical system was verified by modeling and simulating actual test conditions for a rig test as well as a full-engine, blade-release demonstration.
A Computational Model of Human Table Tennis for Robot Application
NASA Astrophysics Data System (ADS)
Mülling, Katharina; Peters, Jan
Table tennis is a difficult motor skill which requires all basic components of a general motor skill learning system. In order to get a step closer to such a generic approach to the automatic acquisition and refinement of table tennis, we study table tennis from a human motor control point of view. We make use of the basic models of discrete human movement phases, virtual hitting points, and the operational timing hypothesis. Using these components, we create a computational model which is aimed at reproducing human-like behavior. We verify the functionality of this model in a physically realistic simulation of a Barrett WAM.
Nomura, Shunsuke; Hayashi, Motohiro; Ishikawa, Tatsuya; Yamaguchi, Koji; Kawamata, Takakazu
2018-05-19
Vascular and osteological parameters, such as the heights of the carotid bifurcation and distal end of the plaque, are important preoperative considerations for patients undergoing carotid stenosis procedures such as carotid endarterectomy. However, for patients with contrast contraindications such as allergies or nephropathies, three-dimensional computed tomography angiography (3D-CTA) is unavailable, and preoperative evaluation remains challenging. In the present study, we aimed to develop a preoperative simulation for use in patients with contrast-contraindicated carotid stenosis. Images from non-contrast neck CT and magnetic resonance imaging obtained without the Leksell stereotactic frame were uploaded to GammaPlan. Following delineation of various structures, we performed preoperative simulations to determine the relationships between vascular and osteological structures. We applied this technique in 10 patients with carotid stenosis to verify the accuracy of the simulation. In all patients, the GammaPlan simulation successfully visualized the heights of the carotid bifurcation and distal end of the plaque without the use of contrast medium. Furthermore, information regarding the location of internal arterial structures, such as calcifications and unstable plaques, could be incorporated into GammaPlan images. Thereafter, we verified simulation accuracy by comparing the simulation results with 3D-CTA and operative findings. Simulations created using GammaPlan can be used to obtain accurate vascular and osteological information regarding the heights of the carotid bifurcation and distal end of the plaque, without the use of contrast medium. The reconstruction of delineated structures using this technique may be effective for preoperative evaluation in patients with contrast-contraindicated carotid stenosis.
Optical 1's and 2's complement devices using lithium-niobate-based waveguide
NASA Astrophysics Data System (ADS)
Pal, Amrindra; Kumar, Santosh; Sharma, Sandeep
2016-12-01
Optical 1's and 2's complement devices are proposed using lithium-niobate-based Mach-Zehnder interferometers. Each interferometer can switch an optical signal from one port to the other under an electrical control signal. The paper describes the optical conversion scheme using sets of such optical switches. The 2's complement representation is common in computer systems, where it is used in binary subtraction and logical manipulation. The operation of the circuits is studied theoretically and analyzed through numerical simulations, and the truth tables of both complement methods are verified against beam propagation method and MATLAB® simulation results.
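As a plain-software analogue of the optical truth-table verification, a minimal sketch of 1's- and 2's-complement computation on bit strings (function names are illustrative, not from the paper):

```python
def ones_complement(bits: str) -> str:
    """One's complement: flip every bit of a binary string."""
    return ''.join('1' if b == '0' else '0' for b in bits)

def twos_complement(bits: str) -> str:
    """Two's complement: one's complement plus one, modulo 2**n."""
    n = len(bits)
    value = (int(ones_complement(bits), 2) + 1) % (1 << n)
    return format(value, f'0{n}b')

# Truth table for all 2-bit inputs
for bits in ['00', '01', '10', '11']:
    print(bits, ones_complement(bits), twos_complement(bits))
```

This reproduces the standard truth table the optical devices are checked against; the optical switches themselves implement the bit-flip and carry logic.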
Implementation of LSCMA adaptive array terminal for mobile satellite communications
NASA Astrophysics Data System (ADS)
Zhou, Shun; Wang, Huali; Xu, Zhijun
2007-11-01
This paper considers the application of an adaptive array antenna based on the least squares constant modulus algorithm (LSCMA) for interference rejection in mobile SATCOM terminals. A two-element adaptive array scheme is implemented with a combination of ADI TS201S DSP chips and an Altera Stratix II FPGA device, which together perform the adaptive beamforming computation. Its interference-suppression performance is verified via Matlab simulations. A digital hardware system is implemented to execute the LSCMA beamforming algorithm, which is represented by an algorithm flowchart. The simulation and test results indicate that this scheme can improve the anti-jamming performance of terminals.
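The block LSCMA update can be sketched as follows: hard-limit the beamformer output onto the unit-modulus signal set, then solve a least-squares problem for the weights. This is a generic textbook form of the algorithm, not the authors' DSP/FPGA implementation; all names and parameters are illustrative.

```python
import numpy as np

def lscma(X, w0, iters=20):
    """Block least-squares CMA for an array data matrix X (elements x snapshots).
    Generic form of the algorithm, not the paper's hardware implementation."""
    w = w0.astype(complex)
    for _ in range(iters):
        y = w.conj().T @ X            # beamformer output for all snapshots
        d = y / np.abs(y)             # hard-limited, constant-modulus reference
        # choose w minimizing ||w^H X - d||^2, i.e. solve X^H w = d^H
        w, *_ = np.linalg.lstsq(X.conj().T, d.conj(), rcond=None)
    return w
```

With the weights initialized toward the desired signal, the iteration drives the array output toward constant modulus, which for a two-element array suffices to null one constant-envelope interferer.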
Ballistic missile precession frequency extraction by spectrogram's texture
NASA Astrophysics Data System (ADS)
Wu, Longlong; Xu, Shiyou; Li, Gang; Chen, Zengping
2013-10-01
To extract the precession frequency, a crucial parameter in ballistic target recognition that reflects kinematic characteristics as well as structural and mass-distribution features, we developed a dynamic RCS signal model for a conical ballistic missile warhead with log-normal multiplicative noise, substituting for the familiar additive noise. We derived formulas for the micro-Doppler induced by precession motion, analyzed time-varying micro-Doppler features using time-frequency transforms, extracted the precession frequency by measuring the spectrogram's texture, and verified the approach by computer simulation studies. Simulation demonstrates the excellent performance of the proposed method in extracting the precession frequency, especially at low SNR.
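A toy version of spectrogram-based precession-frequency extraction, assuming a sinusoidal micro-Doppler law (all parameters are invented for illustration; the paper's texture measure is not reproduced here, only the idea of reading a periodicity off the spectrogram ridge):

```python
import numpy as np

# Toy micro-Doppler signal: instantaneous frequency fd*sin(2*pi*fp*t),
# where fp is the precession frequency to be recovered (values invented).
fs, fp, fd = 1000.0, 4.0, 100.0
t = np.arange(0, 4, 1 / fs)
phase = -fd / (2 * np.pi * fp) * np.cos(2 * np.pi * fp * t)  # integral of the IF, in cycles
s = np.exp(2j * np.pi * phase)

# Crude spectrogram ridge: peak frequency of short windowed FFTs
win = 64
frames = len(t) // win
peak_f = np.array([np.fft.fftfreq(win, 1 / fs)[np.argmax(np.abs(np.fft.fft(s[k * win:(k + 1) * win])))]
                   for k in range(frames)])

# The ridge oscillates at the precession frequency; read it off the ridge's spectrum
spec = np.abs(np.fft.rfft(peak_f - peak_f.mean()))
spec[0] = 0.0
frame_rate = fs / win
est = np.fft.rfftfreq(frames, 1 / frame_rate)[np.argmax(spec)]
print(round(est, 2))  # close to fp = 4.0
```

The estimate is quantized by the frame rate and FFT length; a real extractor would interpolate the ridge and use a finer texture statistic.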
Stochastic Investigation of Natural Frequency for Functionally Graded Plates
NASA Astrophysics Data System (ADS)
Karsh, P. K.; Mukhopadhyay, T.; Dey, S.
2018-03-01
This paper presents a stochastic natural frequency analysis of functionally graded material (FGM) plates using an artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified against the original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as elastic modulus, shear modulus, Poisson's ratio, and mass density is considered. A power law is applied to distribute the material properties across the thickness. The present ANN model reduces the required sample size and is found to be computationally efficient compared to conventional Monte Carlo simulation.
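Latin hypercube sampling of the kind used to train the ANN can be sketched in a few lines: one stratified sample per equal-probability bin in every dimension, with independent column permutations (a generic implementation, not the authors' code):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin hypercube design on [0, 1)^n_dims: each column contains exactly
    one point in each of the n_samples equal-width bins."""
    # stratified draws: row i lies in bin [i/n, (i+1)/n)
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    # shuffle each column independently to decorrelate the dimensions
    for j in range(n_dims):
        u[:, j] = rng.permutation(u[:, j])
    return u
```

The resulting design points would then be mapped onto the physical ranges of the stochastic inputs (elastic modulus, shear modulus, etc.) before running the finite element solver to build the training set.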
Finite Larmor radius effects on the (m = 2, n = 1) cylindrical tearing mode
NASA Astrophysics Data System (ADS)
Chen, Y.; Chowdhury, J.; Parker, S. E.; Wan, W.
2015-04-01
New field solvers are developed in the gyrokinetic code GEM [Chen and Parker, J. Comput. Phys. 220, 839 (2007)] to simulate low-n modes. A novel discretization is developed for the ion polarization term in the gyrokinetic vorticity equation. An eigenmode analysis with finite Larmor radius effects is developed to study the linear resistive tearing mode. The mode growth rate is shown to scale with resistivity as γ ~ η^(1/3), the same as the semi-collisional regime in previous kinetic treatments [Drake and Lee, Phys. Fluids 20, 1341 (1977)]. Tearing mode simulations with gyrokinetic ions are verified with the eigenmode calculation.
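The reported scaling can be checked by a log-log fit, the standard way to recover a power-law exponent from growth-rate data (the numbers below are synthetic, not the paper's):

```python
import numpy as np

# Synthetic growth rates obeying gamma = C * eta**(1/3); a linear fit in
# log-log coordinates recovers the exponent as the slope.
eta = np.logspace(-8, -4, 9)
gamma = 2.5 * eta ** (1.0 / 3.0)
slope, _ = np.polyfit(np.log(eta), np.log(gamma), 1)
print(slope)  # ~ 0.333
```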
The Use of Air Injection Nozzles for the Forced Excitation of Axial Compressor Blades
NASA Astrophysics Data System (ADS)
Raubenheimer, G. A.; van der Spuy, S. J.; von Backström, T. W.
2013-03-01
Turbomachines are exposed to many factors that may cause failure of their components. One of these, high cycle fatigue, can be caused by blade flutter. This paper evaluates the use of an air injection nozzle as a means of exciting vibrations on the first-stage rotor blades of a rotating axial compressor. Unsteady simulations of the excitation velocity perturbations were performed using the Computational Fluid Dynamics (CFD) software Numeca FINE™/Turbo. Experimental testing on a three-stage, low Mach number axial flow compressor provided data that were used to set boundary conditions and to verify certain aspects of the unsteady simulation results.
Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers
NASA Technical Reports Server (NTRS)
Skiles, J. W.; Schulbach, C. H.
1994-01-01
Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer, since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We describe the Cray's shared-memory vector architecture and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Rij, Jennifer A; Yu, Yi-Hsiang; Guo, Yi
This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although there are many computational fluid dynamics, structural analyses and fluid-structure-interaction (FSI) codes available, the application of these codes is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process. The objective of this study is to verify the generalized body-modes approach in comparison to high-fidelity FSI simulations to accurately predict structural deflections and stress loads in a WEC. Two verification cases are considered, a free-floating barge and a fixed-bottom column. Details for both the generalized body-modes models and FSI models are first provided. Results for each of the models are then compared and discussed. Finally, based on the verification results obtained, future plans for incorporating the generalized body-modes method into the WEC simulation tool, WEC-Sim, and the overall WEC design process are discussed.
Concept verification of three dimensional free motion simulator for space robot
NASA Technical Reports Server (NTRS)
Okamoto, Osamu; Nakaya, Teruomi; Pokines, Brett
1994-01-01
In the development of automatic assembly technologies for space structures, it is indispensable to investigate and simulate the movements of robot satellites involved in mission operations. Such investigation and simulation can be effectively realized on the ground by a free motion simulator. Various types of ground systems for simulating free motion have been proposed and utilized, including neutral buoyancy systems, air or magnetic suspension systems, passive suspension balance systems, and free-flying aircraft or drop tower systems; free motion can also be simulated by computers using an analytical model. Each of these methods has limitations and well-known problems: disturbance by water viscosity, a limited number of degrees of freedom, complex dynamics induced by the attachment of the simulation system, short experiment times, and the lack of high-speed supercomputer simulation systems, respectively. The basic idea presented here is to realize 3-dimensional free motion by combining a spherical air bearing, a cylindrical air bearing, and a flat air bearing. A conventional air bearing system has difficulty realizing free vertical motion suspension; here, a cylindrical air bearing and a counterbalance weight realize vertical free motion. This paper presents the design concept, configuration, and basic performance characteristics of an innovative free motion simulator. A prototype simulator verifies the feasibility of 3-dimensional free motion simulation.
NASA Technical Reports Server (NTRS)
Minnetyan, Levon; Chamis, Christos C. (Technical Monitor)
2003-01-01
Computational simulation results can give the prediction of damage growth and progression and fracture toughness of composite structures. Experimental data from the literature provide environmental effects on the fracture behavior of metallic or fiber composite structures. However, the traditional experimental methods to analyze the influence of the imposed conditions are expensive and time consuming. This research used the CODSTRAN code to model the effects of temperature, scaling, and loading on the damage initiation and energy release rates of fiber/braided composite specimens with and without fiber-optic sensors. The load-displacement relationship and the fracture toughness assessment approach are compared with test results from the literature, and it is verified that the computational simulation, with the use of established material modeling and finite element modules, adequately tracks the changes of fracture toughness and subsequent fracture propagation for any fiber/braided composite structure due to changes of fiber orientation, the presence of large-diameter optical fibers, and any loading conditions.
Exploring biological interaction networks with tailored weighted quasi-bicliques
2012-01-01
Background: Biological networks provide fundamental insights into the functional characterization of genes and their products, the characterization of DNA-protein interactions, the identification of regulatory mechanisms, and other biological tasks. Due to their experimental and biological complexity, their computational exploitation faces many algorithmic challenges. Results: We introduce novel weighted quasi-biclique problems to identify functional modules in biological networks when represented by bipartite graphs. In contrast to previous quasi-biclique problems, we account for biological interaction levels by using edge-weighted quasi-bicliques. While we prove that our problems are NP-hard, we also describe IP formulations to compute exact solutions for moderately sized networks. Conclusions: We verify the effectiveness of our IP solutions using both simulation and empirical data. The simulation shows high quasi-biclique recall rates, and the empirical data corroborate the abilities of our weighted quasi-bicliques in extracting features and recovering missing interactions from biological networks. PMID:22759421
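The paper's IP formulations are not reproduced in the abstract; as a sketch of the underlying optimization problem, here is a brute-force search for a maximum-weight quasi-biclique in a tiny edge-weighted bipartite graph. This is only practical for toy sizes, and `gamma` (a minimum average edge weight) is an assumed density criterion, not necessarily the paper's:

```python
from itertools import combinations

def best_quasi_biclique(weights, gamma):
    """Exhaustive search over all (U, V) vertex-subset pairs of a bipartite
    graph given as a weight matrix; return the pair maximizing total edge
    weight subject to average edge weight >= gamma."""
    rows, cols = len(weights), len(weights[0])
    best, best_w = None, -1.0
    for ru in range(1, rows + 1):
        for U in combinations(range(rows), ru):
            for rv in range(1, cols + 1):
                for V in combinations(range(cols), rv):
                    total = sum(weights[i][j] for i in U for j in V)
                    if total / (len(U) * len(V)) >= gamma and total > best_w:
                        best_w, best = total, (U, V)
    return best, best_w
```

An IP formulation replaces this exponential enumeration with binary selection variables per vertex and linearized density constraints, which is what makes moderately sized networks tractable.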
Automatic mathematical modeling for space application
NASA Technical Reports Server (NTRS)
Wang, Caroline K.
1987-01-01
A methodology for automatic mathematical modeling is described. The major objective is to create a very friendly environment for engineers to design, maintain and verify their model and also automatically convert the mathematical model into FORTRAN code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation mathematical model called Propulsion System Automatic Modeling (PSAM). PSAM provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. PSAM contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. PSAM is then able to automatically generate the model and the FORTRAN code. A future goal is to download the FORTRAN code to the VAX/VMS system for conventional computation.
Al-Sadoon, Mohammed A. G.; Zuid, Abdulkareim; Jones, Stephen M. R.; Noras, James M.
2017-01-01
This paper proposes a new low complexity angle of arrival (AOA) method for signal direction estimation in multi-element smart wireless communication systems. The new method estimates the AOAs of the received signals directly from the received signals with significantly reduced complexity since it does not need to construct the correlation matrix, invert the matrix or apply eigen-decomposition, which are computationally expensive. A mathematical model of the proposed method is illustrated and then verified using extensive computer simulations. Both linear and circular sensors arrays are studied using various numerical examples. The method is systematically compared with other common and recently introduced AOA methods over a wide range of scenarios. The simulated results show that the new method has several advantages in terms of reduced complexity and improved accuracy under the assumptions of correlated signals and limited numbers of snapshots. PMID:29140313
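The abstract does not specify the new estimator itself, so as context here is the classical Bartlett (conventional beam-scan) AOA estimate for a uniform linear array, one of the common baselines such reduced-complexity methods are compared against (all parameters are illustrative):

```python
import numpy as np

def bartlett_aoa(X, d=0.5, grid=np.linspace(-90, 90, 361)):
    """Conventional beam-scan AOA over a half-wavelength-spaced ULA.
    X is the (elements x snapshots) data matrix; returns the scan angle
    (degrees) with maximum average output power.  This is the classical
    baseline, not the paper's reduced-complexity method."""
    M = X.shape[0]
    powers = []
    for theta in grid:
        # steering vector for a plane wave from angle theta
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta)))
        powers.append(np.mean(np.abs(a.conj() @ X) ** 2))
    return grid[int(np.argmax(powers))]
```

Note that even this baseline avoids eigen-decomposition; subspace methods such as MUSIC add the correlation matrix and eigen-analysis costs that the proposed method is designed to avoid.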
Advanced control schemes and kinematic analysis for a kinematically redundant 7 DOF manipulator
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Zhou, Zhen-Lei
1990-01-01
The kinematic analysis and control of a kinematically redundant manipulator are addressed. The manipulator is the slave arm of a telerobot system recently built at Goddard Space Flight Center (GSFC) to serve as a testbed for investigating research issues in telerobotics. A forward kinematic transformation is developed in its most simplified form, suitable for real-time control applications, and the manipulator Jacobian is derived using the vector cross product method. Using the developed forward kinematic transformation and the quaternion representation of orientation matrices, we perform computer simulations to evaluate the efficiency of the Jacobian in converting joint velocities into Cartesian velocities and to investigate the accuracy of the Jacobian pseudo-inverse for various sampling times. The equivalence between Cartesian velocities and quaternion rates is also verified using computer simulation. Three control schemes for controlling the motion of the slave arm end-effector are proposed and discussed.
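The vector cross product method for building a manipulator Jacobian can be sketched per column: for a revolute joint with axis z_i at position p_i and end-effector at p_e, the linear-velocity part is z_i x (p_e - p_i) and the angular part is z_i itself (a generic robotics formula, not the GSFC slave-arm code):

```python
import numpy as np

def jacobian_column(z_i, p_i, p_e):
    """Geometric Jacobian column for a revolute joint: the top three rows map
    joint rate to end-effector linear velocity, the bottom three to angular
    velocity."""
    return np.concatenate([np.cross(z_i, p_e - p_i), z_i])
```

Stacking one such column per joint gives the 6 x n Jacobian whose pseudo-inverse resolves the redundancy of a 7-DOF arm.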
Octree-based Global Earthquake Simulations
NASA Astrophysics Data System (ADS)
Ramirez-Guzman, L.; Juarez, A.; Bielak, J.; Salazar Monroy, E. F.
2017-12-01
Seismological research has motivated recent efforts to construct more accurate three-dimensional (3D) velocity models of the Earth, to perform global simulations of wave propagation to validate those models, and to study the interaction of seismic fields with 3D structures. However, seismogram computation at global scales has been limited by computational resources, relying primarily on methods such as normal-mode summation or two-dimensional numerical schemes. We present an octree-based-mesh finite element implementation to perform global earthquake simulations with 3D models, using topography and bathymetry with a staircase approximation, as modeled by the Carnegie Mellon Finite Element Toolchain Hercules (Tu et al., 2006). To verify the implementation, we compared synthetic seismograms computed in a spherical earth against waveforms calculated using normal-mode summation for the Preliminary Reference Earth Model (PREM), for a point-source representation of the 2014 Mw 7.3 Papanoa, Mexico earthquake. We considered a 3 km-thick ocean layer for stations with predominantly oceanic paths. Eigenfrequencies and eigenfunctions were computed for toroidal, radial, and spheroidal oscillations in the first 20 branches. Simulations are valid at frequencies up to 0.05 Hz. Agreement between the waveforms computed by the two approaches, especially for long-period surface waves, is excellent. Additionally, we modeled the Mw 9.0 Tohoku-Oki earthquake using the USGS finite-fault inversion. Topography and bathymetry from ETOPO1 are included in a mesh with more than 3 billion elements, constrained by the available computational resources. We compared estimated velocity and GPS synthetics against observations at regional and teleseismic stations of the Global Seismographic Network and discuss the differences between observations and synthetics, revealing that heterogeneity, particularly in the crust, needs to be considered.
Apollo experience report: Real-time auxiliary computing facility development
NASA Technical Reports Server (NTRS)
Allday, C. E.
1972-01-01
The Apollo real-time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full-scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real-time program's flexibility was increased and verified sufficiently to eliminate the need for redundant computations. An evaluation of the facility and function and recommendations for future programs are discussed in this report.
MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow
NASA Astrophysics Data System (ADS)
Samani, N.; Kompani-Zare, M.; Barry, D. A.
2004-01-01
Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.
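The idea of mapping the radial equation onto Cartesian coordinates can be illustrated with steady flow to a fully penetrating well: substituting x = ln(r) turns d/dr(r dh/dr) = 0 into h''(x) = 0, which a uniform Cartesian grid solves exactly. This is a toy sketch of the coordinate-mapping idea only, not the MODFLOW implementation, and the numbers are invented:

```python
import numpy as np

# Steady radial flow to a well obeys d/dr (r dh/dr) = 0.  Substituting
# x = ln(r) yields the Cartesian equation d2h/dx2 = 0, so a uniform grid
# in x resolves the steep head gradient near the well.
rw, R = 0.1, 100.0      # well radius and outer radius (m), illustrative
hw, hR = 5.0, 10.0      # prescribed heads at rw and R (m), illustrative
n = 51
x = np.linspace(np.log(rw), np.log(R), n)

# Standard three-point finite-difference solve of h''(x) = 0, Dirichlet ends
A = (np.diag(-2.0 * np.ones(n - 2))
     + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1))
b = np.zeros(n - 2)
b[0] -= hw
b[-1] -= hR
h = np.concatenate([[hw], np.linalg.solve(A, b), [hR]])

# Thiem solution: head varies linearly in ln(r)
exact = hw + (hR - hw) * (x - x[0]) / (x[-1] - x[0])
print(float(np.max(np.abs(h - exact))))  # ~ machine precision
```

On a uniform grid in r, the same solver would badly under-resolve the logarithmic drawdown cone near r = rw, which is precisely the difficulty the scaling method addresses.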
A Multiphysics Finite Element and Peridynamics Model of Dielectric Breakdown
2017-09-01
A method for simulating dielectric breakdown in solid materials is presented that couples electro-quasi-statics, the adiabatic heat equation, and ... temperatures or high strains. The Kelvin force computation used in the method is verified against a 1-D solution, and the linearization scheme used to treat the ... plane problems, a 2-D composite capacitor with a conductive flaw, and a 3-D point-plane problem. The results show that the method is capable of ...
Creating executable architectures using Visual Simulation Objects (VSO)
NASA Astrophysics Data System (ADS)
Woodring, John W.; Comiskey, John B.; Petrov, Orlin M.; Woodring, Brian L.
2005-05-01
Investigations have been performed to identify a methodology for creating executable models of architectures, and simulations of architectures, that lead to an understanding of their dynamic properties. Colored Petri Nets (CPNs) are used to describe architectures because of their strong mathematical foundations, the existence of techniques for their verification, and graph theory's well-established history of success in modern science. CPNs have been extended to interoperate with legacy simulations via a High Level Architecture (HLA) compliant interface. It has also been demonstrated that an architecture created as a CPN can be integrated with Department of Defense Architecture Framework products to ensure consistency between static and dynamic descriptions. A computer-aided tool, Visual Simulation Objects (VSO), which aids analysts in specifying, composing, and executing architectures, has been developed to verify the methodology and to serve as a prototype commercial product.
Simulation of existing gas-fuelled conventional steam power plant using Cycle Tempo
NASA Astrophysics Data System (ADS)
Jamel, M. S.; Abd Rahman, A.; Shamsuddin, A. H.
2013-06-01
A simulation of a 200 MW gas-fuelled conventional steam power plant located in Basra, Iraq was carried out, and the thermodynamic performance of the plant was estimated by system simulation. The flow-sheet computer program "Cycle-Tempo" was used for the study. The plant components and piping systems were considered and described in detail. The simulation results were verified against data gathered from the station's log sheets during operating hours, and good agreement was obtained. Operational factors such as the stack exhaust temperature and excess air percentage were studied and discussed, as were environmental factors such as ambient air temperature and inlet water temperature. In addition, detailed exergy losses were illustrated, along with temperature profiles for the main plant components. The results prompted several suggestions for improving the plant's performance.
Design and testing of a magnetic suspension and damping system for a space telescope
NASA Technical Reports Server (NTRS)
Ockman, N. J.
1972-01-01
The basic equations of motion are derived for a two dimensional, three degree of freedom simulation of a space telescope coupled to a spacecraft by means of a magnetic suspension and isolation system. The system consists of paramagnetic or ferromagnetic discs confined to the magnetic field between two Helmholtz coils. Damping is introduced by varying the magnetic field in proportion to a velocity signal derived from the telescope. The equations of motion are nonlinear, similar in behavior to the one-dimensional Van der Pol equation. The computer simulation was verified by testing a 264-kilogram air bearing platform which simulates the telescope in a frictionless environment. The simulation demonstrated effective isolation capabilities for disturbance frequencies above resonance. Damping in the system improved the response near resonance and prevented the build-up of large oscillatory amplitudes.
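The Van der Pol-type behavior the equations of motion are compared to can be reproduced with a short fixed-step RK4 integration (a generic sketch of the one-dimensional Van der Pol equation; mu, the step size, and the initial conditions are illustrative, not the telescope model's values):

```python
import numpy as np

def van_der_pol(mu, x0, v0, dt, steps):
    """RK4 integration of the Van der Pol equation
    x'' - mu*(1 - x**2)*x' + x = 0, whose nonlinear damping limits the
    growth of oscillations -- the behavior the suspension is compared to."""
    def f(s):
        x, v = s
        return np.array([v, mu * (1 - x * x) * v - x])
    s = np.array([x0, v0], float)
    out = [s.copy()]
    for _ in range(steps):
        k1 = f(s); k2 = f(s + dt / 2 * k1); k3 = f(s + dt / 2 * k2); k4 = f(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(s.copy())
    return np.array(out)
```

Starting from a small perturbation, the solution grows onto a limit cycle of amplitude close to 2, rather than diverging: the signature that distinguishes Van der Pol-type damping from a linear system.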
Investigation, Development, and Evaluation of Performance Proving for Fault-tolerant Computers
NASA Technical Reports Server (NTRS)
Levitt, K. N.; Schwartz, R.; Hare, D.; Moore, J. S.; Melliar-Smith, P. M.; Shostak, R. E.; Boyer, R. S.; Green, M. W.; Elliott, W. D.
1983-01-01
A number of methodologies for verifying systems, together with computer-based tools that assist users in verifying their systems, were developed. These tools were applied to verify, in part, the SIFT ultrareliable aircraft computer. Topics covered include: the STP theorem prover; design verification of SIFT; high-level language code verification; assembly-language-level verification; numerical algorithm verification; verification of flight control programs; and verification of hardware logic.
A Survey of the Isentropic Euler Vortex Problem Using High-Order Methods
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.
2015-01-01
The flux reconstruction (FR) method offers a simple, efficient, and easy-to-implement approach, and it has been shown to equate to a differential formulation of discontinuous Galerkin (DG) methods. The FR method is also accurate to an arbitrary order, and the isentropic Euler vortex problem is used here to verify this claim empirically. This problem is widely used in computational fluid dynamics (CFD) to verify the accuracy of a given numerical method because of its simplicity and known exact solution at any given time. While we were verifying our FR solver, multiple obstacles emerged that prevented us from achieving the expected order of accuracy over both short and long simulation times. It was found that these complications stemmed from a few overlooked details in the original problem definition, combined with the FR and DG methods achieving high accuracy with minimal dissipation. This paper is intended to consolidate the many versions of the vortex problem found in the literature and to highlight some of the consequences if these overlooked details remain neglected.
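One widely used form of the isentropic vortex initial condition is sketched below. Conventions for the vortex strength and the perturbation differ between papers, which is exactly the inconsistency the paper addresses; this variant is one common choice, not a canonical definition:

```python
import numpy as np

def isentropic_vortex(x, y, beta=5.0, gamma=1.4, x0=0.0, y0=0.0):
    """One common variant of the isentropic vortex perturbation: velocity and
    temperature perturbations decaying with squared distance from (x0, y0).
    Returns density and the velocity perturbations (du, dv)."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    du = -beta / (2 * np.pi) * np.exp((1 - r2) / 2) * (y - y0)
    dv = beta / (2 * np.pi) * np.exp((1 - r2) / 2) * (x - x0)
    dT = -(gamma - 1) * beta ** 2 / (8 * gamma * np.pi ** 2) * np.exp(1 - r2)
    rho = (1 + dT) ** (1 / (gamma - 1))   # isentropic relation
    return rho, du, dv
```

Since the exact solution is just this profile advected by the mean flow, any deviation measures the scheme's error, which is why small definitional differences (domain size, periodicity, decay rate) matter so much for high-order, low-dissipation methods.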
Study on photon transport problem based on the platform of molecular optical simulation environment.
Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie
2010-01-01
As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since physical experiments are usually complicated and expensive, research methods based on simulation platforms have attracted extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, the simplified spherical harmonics method (SP(n)), and physical measurements to verify both the accuracy and efficiency of our method.
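The core of a Monte Carlo photon-transport step is sampling exponentially distributed free paths from the interaction coefficient. A toy example, far simpler than MOSE (no scattering, no geometry, no photon weights), checks transmission through a purely absorbing slab against the Beer-Lambert law; all parameters are invented:

```python
import numpy as np

def mc_transmission(mu_t, L, n_photons, rng):
    """Pencil beam through a purely absorbing slab of thickness L: a photon
    is transmitted if its sampled free path (exponential with mean 1/mu_t)
    exceeds L.  Beer-Lambert predicts a transmitted fraction exp(-mu_t*L)."""
    paths = rng.exponential(1.0 / mu_t, size=n_photons)
    return np.mean(paths > L)
```

A full MC tissue code adds direction sampling from a scattering phase function, absorption weighting, and boundary handling on top of this same free-path sampling loop, which is also what MOSE parallelizes across photons.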
NASA/MSFC ground experiment for large space structure control verification
NASA Technical Reports Server (NTRS)
Waites, H. B.; Seltzer, S. M.; Tollison, D. K.
1984-01-01
Marshall Space Flight Center has developed a facility in which closed-loop control of Large Space Structures (LSS) can be demonstrated and verified. The main objective of the facility is to verify LSS control system techniques so that on-orbit performance can be ensured. The facility consists of an LSS test article connected to a payload mounting system that provides control torque commands; this assembly is attached to a base excitation system that simulates the disturbances most likely to occur for Orbiter and DOD payloads. A control computer contains the calibration software, the reference system, the alignment procedures, the telemetry software, and the control algorithms. The total system is suspended in such a fashion that the LSS test article has the characteristics common to all LSS.
Di Giorgio Silva, Luiza Wanick; Aprigio, Danielle; Di Giacomo, Jesse; Gongora, Mariana; Budde, Henning; Bittencourt, Juliana; Cagy, Mauricio; Teixeira, Silmar; Ribeiro, Pedro; de Carvalho, Marcele Regine; Freire, Rafael; Nardi, Antonio Egidio; Basile, Luis Fernando; Velasques, Bruna
2017-12-01
Panic disorder (PD) is characterized by repeated and unexpected attacks of intense anxiety that are not restricted to a particular situation or circumstance. The coherence function has been used to investigate communication among brain structures through quantitative EEG (qEEG). The objective of this study is to analyze whether frontoparietal gamma coherence (GC) during the visual oddball paradigm differs between panic disorder patients (PDP) and healthy controls (HC), and to verify whether high levels of anxiety (produced by a computer simulation) affect PDP's working memory. Nine PDP (9 female, mean age 48.8, SD 11.16) and ten HC (1 male and 9 female, mean age 38.2, SD 13.69) were enrolled in this study. The subjects performed the visual oddball paradigm during EEG recording before and after the presentation of a computer simulation (CS). A two-way ANOVA was applied to the factors Group and Moment for each pair of electrodes separately, and another to the reaction-time variable. We observed increased F3-P3 GC after the CS movie, indicating left-hemisphere participation in anxiety processing. The greater GC in HC observed in the frontal and parietal areas (P3-Pz, F4-F8 and Fp2-F4) points to the involvement of these areas in the expected behavior. The greater GC in PDP for the F7-F3 and F4-P4 electrode pairs suggests a prejudicial "noise" during information processing, which can be associated with interference in the communication between frontal and parietal areas. This "noise" is related to PD symptoms, which should be better understood in order to develop effective treatment strategies.
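The coherence analysis underlying such qEEG studies reduces to estimating magnitude-squared coherence from Welch-averaged cross-spectra; a minimal sketch on synthetic "channels" (the shared 40 Hz gamma component and noise levels are invented, not study data):

```python
import numpy as np

def msc(x, y, fs, nperseg=256):
    """Magnitude-squared coherence via Welch-averaged cross-spectra."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    pxx = pyy = pxy = 0
    for start in range(0, len(x) - nperseg + 1, step):
        fx = np.fft.rfft(win * x[start:start + nperseg])
        fy = np.fft.rfft(win * y[start:start + nperseg])
        pxx += np.abs(fx) ** 2
        pyy += np.abs(fy) ** 2
        pxy += fx * np.conj(fy)
    freqs = np.fft.rfftfreq(nperseg, d=1 / fs)
    return freqs, np.abs(pxy) ** 2 / (pxx * pyy)

# Two synthetic "electrodes": a shared 40 Hz gamma component plus
# independent noise on each channel.
rng = np.random.default_rng(1)
fs, t = 256, np.arange(0, 30, 1 / 256)
gamma = np.sin(2 * np.pi * 40 * t)
ch1 = gamma + 0.5 * rng.standard_normal(t.size)
ch2 = gamma + 0.5 * rng.standard_normal(t.size)

freqs, coh = msc(ch1, ch2, fs)
print(coh[np.argmin(np.abs(freqs - 40))])  # near 1 at the shared 40 Hz component
```

Averaging over many segments is what makes the estimate meaningful; a single-segment coherence estimate is identically 1 at every frequency.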
Fast simulation of yttrium-90 bremsstrahlung photons with GATE.
Rault, Erwann; Staelens, Steven; Van Holen, Roel; De Beenhouwer, Jan; Vandenberghe, Stefaan
2010-06-01
Multiple investigators have recently reported the use of yttrium-90 (90Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging for the dosimetry of targeted radionuclide therapies. Because Monte Carlo (MC) simulations are useful for studying SPECT imaging, this study investigates the MC simulation of 90Y bremsstrahlung photons in SPECT. To overcome the computationally expensive simulation of electrons, the authors propose a fast way to simulate the emission of 90Y bremsstrahlung photons based on prerecorded bremsstrahlung photon probability density functions (PDFs). The accuracy of bremsstrahlung photon simulation is evaluated in two steps. First, the validity of the fast bremsstrahlung photon generator is checked. To that end, fast and analog simulations of photons emitted from a 90Y point source in a water phantom are compared. The same setup is then used to verify the accuracy of the bremsstrahlung photon simulations, comparing the results obtained with PDFs generated from both simulated and measured data to measurements. In both cases, the energy spectra and point spread functions of the photons detected in a scintillation camera are used. Results show that the fast simulation method is responsible for a 5% overestimation of the low-energy fluence (below 75 keV) of the bremsstrahlung photons detected using a scintillation camera. The spatial distribution of the detected photons is, however, accurately reproduced with the fast method and a computational acceleration of approximately 17-fold is achieved. When measured PDFs are used in the simulations, the simulated energy spectrum of photons emitted from a point source of 90Y in a water phantom and detected in a scintillation camera closely approximates the measured spectrum. The PSF of the photons imaged in the 50-300 keV energy window is also accurately estimated with a 12.4% underestimation of the full width at half maximum and 4.5% underestimation of the full width at tenth maximum. 
Despite its limited accuracy, the fast bremsstrahlung photon generator is well suited for the simulation of bremsstrahlung photons emitted in large homogeneous organs, such as the liver, and detected in a scintillation camera. The computational acceleration makes it very useful for future investigations of 90Y bremsstrahlung SPECT imaging.
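The fast generator's core idea, drawing photon energies from a prerecorded PDF instead of transporting electrons, reduces to inverse-CDF table sampling; a sketch with an invented spectrum shape (a real table would be prerecorded from analog simulations or measurements):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical tabulated bremsstrahlung energy PDF (keV bins). The 90Y
# beta endpoint is ~2.28 MeV; the falling-exponential shape is illustrative.
energies = np.linspace(0.0, 2280.0, 229)
pdf = np.exp(-energies / 300.0)
pdf /= pdf.sum()

# Build the CDF once, then draw photon energies by inverse-CDF lookup;
# this skips the costly electron transport entirely.
cdf = np.cumsum(pdf)
idx = np.minimum(np.searchsorted(cdf, rng.random(200_000)), energies.size - 1)
samples = energies[idx]

mean_energy = samples.mean()
print(mean_energy)  # close to the tabulated-PDF mean, (energies * pdf).sum()
```

The `np.minimum` guard protects against a floating-point CDF that tops out fractionally below 1.0.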
Progress towards computer simulation of NiH2 battery performance over life
NASA Technical Reports Server (NTRS)
Zimmerman, Albert H.; Quinzio, M. V.
1995-01-01
The long-term performance of rechargeable battery cells has traditionally been verified through life-testing, a procedure that generally requires significant commitments of funding and test resources. In the case of nickel-hydrogen battery cells, which are capable of extremely long cycle life, the time and cost required to conduct even accelerated testing have become a serious impediment to transitioning technology improvements into spacecraft applications. The use of computer simulations to indicate the changes in performance to be expected in response to design or operating changes in nickel-hydrogen cells is therefore a particularly attractive tool in advanced battery development, as well as for verifying performance in different applications. Computer-based simulations of the long-term performance of rechargeable battery cells have typically had very limited success in the past, for a number of reasons. First, and probably most important, all battery cells are relatively complex electrochemical systems in which performance is dictated by a large number of interacting physical and chemical processes. While this complexity alone is a significant part of the problem, in many instances the fundamental chemical and physical processes underlying long-term degradation and its effects on performance have not been well understood. Second, while specific chemical and physical changes within cell components have been associated with degradation, there has been no generalized simulation architecture that enables the chemical and physical structure (and changes therein) to be translated into cell performance. For the nickel-hydrogen battery cell, our knowledge of the underlying reactions that control performance has progressed to the point where it is clearly possible to model them.
The recent development of a relatively generalized cell-modeling approach provides the framework for translating the chemical and physical structure of the components inside a cell into its performance characteristics over its entire cycle life. This report describes our approach to this task in terms of defining the processes deemed critical in controlling performance over life, and the model architecture required to translate the fundamental cell processes into performance profiles.
Coupled circuit numerical analysis of eddy currents in an open MRI system.
Akram, Md Shahadat Hossain; Terada, Yasuhiko; Keiichiro, Ishi; Kose, Katsumi
2014-08-01
We performed a new coupled-circuit numerical simulation of eddy currents in an open, compact magnetic resonance imaging (MRI) system. Following the coupled-circuit approach, the conducting structures were divided into subdomains along the length (or width) and the thickness, and transient responses of eddy currents were simulated for subdomains at different locations. We implemented an eigenmatrix technique to solve the network of coupled differential equations, which speeds up the simulation program. To compute the coupling between the biplanar gradient coil and any other conducting structure, we implemented the solid-angle form of Ampere's law, calculating the solid angle in three dimensions to obtain the inductive coupling for any subdomain of the conducting structures. The temporal and spatial distributions of the eddy currents were then used in the secondary magnetic field calculation via the Biot-Savart law. On a desktop computer (programming platform: Wolfram Mathematica 8.0; processor: Intel Core 2 Duo E7500 @ 2.93 GHz; OS: Windows 7 Professional; RAM: 4.00 GB), the entire calculation of eddy currents and fields took less than 3 min, and approximately 6 min for the X-gradient coil. The results are given in the time-space domain for both the direct and the cross terms of the eddy current magnetic fields generated by the Z-gradient coil. We also conducted free induction decay (FID) experiments on the eddy fields using a nuclear magnetic resonance (NMR) probe; the simulation results were found to be in good agreement with the experimental results. We further simulated the transient and spatial responses of the secondary magnetic field induced by the X-gradient coil.
Our approach is fast and has much lower computational complexity than conventional electromagnetic numerical simulation methods.
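The eigenmatrix idea can be sketched on a toy two-subdomain circuit: after gradient switch-off the eddy currents obey L di/dt + R i = 0, and an eigendecomposition of L⁻¹R gives the decay modes in closed form, with no time stepping. All inductance, resistance, and current values below are invented for illustration, not taken from the MRI system:

```python
import numpy as np

# Toy coupled-circuit model of two conducting subdomains (illustrative values).
L = np.array([[1.0e-3, 0.2e-3],
              [0.2e-3, 1.5e-3]])   # self/mutual inductance matrix (H)
R = np.diag([0.5, 0.8])           # subdomain resistances (ohm)
i0 = np.array([1.0, -0.5])        # currents induced at switch-off (A)

# Eigendecomposition of A = L^-1 R gives the decay modes:
#   i(t) = V exp(-D t) V^-1 i0,
# so the full transient is evaluated directly at any time t.
A = np.linalg.solve(L, R)
w, V = np.linalg.eig(A)

def currents(t):
    # .real discards numerical imaginary residue (eigenvalues are real here).
    return ((V @ np.diag(np.exp(-w * t)) @ np.linalg.inv(V)) @ i0).real

print(currents(0.0))    # recovers i0
print(currents(0.05))   # decayed essentially to zero
```

The same closed-form evaluation extends to driven equations by adding a particular solution for the gradient-coil source term.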
NASA Astrophysics Data System (ADS)
Chen, Biao; Jing, Zhenxue; Smith, Andrew
2005-04-01
Contrast-enhanced digital mammography (CEDM), which is based upon the analysis of a series of x-ray projection images acquired before and after the administration of contrast agents, may provide physicians with critical physiologic and morphologic information about breast lesions for determining their malignancy. This paper proposes to combine kinetic analysis (KA) of the contrast-agent uptake/washout process and dual-energy (DE) contrast enhancement to formulate a hybrid contrast-enhanced breast-imaging framework. The quantitative characteristics of the materials and imaging components in the x-ray imaging chain, including the x-ray tube (tungsten) spectrum, filter, breast tissues/lesions, contrast agents (non-ionized iodine solution), and selenium detector, were systematically modeled. The contrast-to-noise ratio (CNR) of iodinated lesions and the mean absorbed glandular dose were estimated mathematically. X-ray technique optimization was conducted through a series of computer simulations to find the optimal tube voltage, filter thickness, and exposure levels for various breast thicknesses, breast densities, and detectable contrast-agent concentration levels in terms of detection efficiency (CNR²/dose). A phantom study was performed on a modified Selenia full-field digital mammography system to verify the simulated results. The dose level was comparable to the dose in diagnostic mode (less than 4 mGy for an average 4.2 cm compressed breast). The results from the computer simulations and phantom study are being used to optimize an ongoing clinical study.
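The CNR²/dose figure of merit used in such optimizations is simple to compute once CNR and dose estimates are in hand; the two candidate techniques below use invented signal, noise, and dose numbers purely for illustration:

```python
def cnr(signal_lesion, signal_background, noise):
    """Contrast-to-noise ratio of an iodinated lesion."""
    return abs(signal_lesion - signal_background) / noise

def detection_efficiency(cnr_value, dose_mgy):
    """Figure of merit used to rank techniques: CNR^2 per unit dose."""
    return cnr_value ** 2 / dose_mgy

# Hypothetical comparison of two tube-voltage settings (all numbers invented).
techniques = {
    "45 kVp": {"lesion": 1050.0, "background": 1000.0, "noise": 12.0, "dose": 1.2},
    "49 kVp": {"lesion": 1060.0, "background": 1000.0, "noise": 15.0, "dose": 1.5},
}
for name, t in techniques.items():
    c = cnr(t["lesion"], t["background"], t["noise"])
    print(name, detection_efficiency(c, t["dose"]))
```

Ranking by CNR²/dose rather than CNR alone rewards techniques that buy contrast cheaply in dose, which is why it is a natural objective for tube voltage and filter sweeps.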
Kilinc, Deniz; Demir, Alper
2017-08-01
The brain is extremely energy efficient and remarkably robust in what it does, despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain function, and deep understanding together with computational design tools can help in developing robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. The framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits in both the time and frequency domains. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, in which both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
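The coarse-grained SDE models mentioned above replace discrete Markov chain channel gating with a Langevin equation; a minimal Euler-Maruyama sketch for a two-state gate (the rates and channel count are invented, and this is a generic Langevin form, not the paper's framework):

```python
import numpy as np

rng = np.random.default_rng(3)

# Langevin (SDE) model of a two-state ion-channel gate:
#   dx = (alpha*(1-x) - beta*x) dt + sqrt((alpha*(1-x) + beta*x)/N) dW
# where x is the open fraction and N the channel count (illustrative values).
alpha, beta, N = 2.0, 1.0, 1000
dt, steps = 1e-3, 20_000

x = np.empty(steps)
x[0] = 0.5
for k in range(steps - 1):
    drift = alpha * (1 - x[k]) - beta * x[k]
    diff = np.sqrt(max(alpha * (1 - x[k]) + beta * x[k], 0.0) / N)
    x[k + 1] = x[k] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    x[k + 1] = min(max(x[k + 1], 0.0), 1.0)   # keep the open fraction in [0, 1]

print(x[-5000:].mean())  # fluctuates around the deterministic value alpha/(alpha+beta)
```

The diffusion term shrinks as 1/sqrt(N), which is exactly why large channel populations look nearly deterministic while small ones are visibly noisy.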
Finite element analyses of a linear-accelerator electron gun
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iqbal, M., E-mail: muniqbal.chep@pu.edu.pk, E-mail: muniqbal@ihep.ac.cn; Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049; Wasy, A.
Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in the computer-aided three-dimensional interactive application (CATIA) for finite element analyses through ANSYS Workbench. This was followed by simulations with the SLAC electron-beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results from the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement, confirming the design parameters at the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, in the BEPCII linear accelerator.
Bubbling in vibrated granular films.
Zamankhan, Piroz
2011-02-01
With the help of experiments, computer simulations, and a theoretical investigation, a general model is developed for the flow dynamics of dense granular media immersed in air in an intermediate regime where both collisional and frictional interactions may affect the flow behavior. The model is tested on a system in which bubbles and solid structures are produced in vertically shaken granular films. Both experiments and large-scale, three-dimensional simulations of this system were performed, and the experimental results were compared with the simulations to verify the validity of the model. The data show that bubbles form when the peak acceleration relative to gravity exceeds a critical value Γb. The air-grain interfaces of the bubble-like structures are found to exhibit a fractal structure with dimension D = 1.7 ± 0.05.
Influence of ionization on the Gupta and on the Park chemical models
NASA Astrophysics Data System (ADS)
Morsa, Luigi; Zuppardi, Gennaro
2014-12-01
This study extends earlier work by the authors, in which the influence of the chemical models of Gupta and of Park on thermo-fluid-dynamic parameters in the flow field, including transport coefficients, related characteristic numbers, and the heat flux on two current capsules (EXPERT and Orion), was evaluated along the high-altitude re-entry path. Those results showed that, although the models compute different air compositions in the flow field, they compute only slightly different compositions on the capsule surface, so the difference in heat flux is not very significant. In those studies, ionization was neglected because the capsule velocities (about 5000 m/s for EXPERT and about 7600 m/s for Orion) were not high enough to activate meaningful ionization. The aim of the present work is to evaluate the effect of ionization, within the Gupta and Park chemical models, on both the heat flux and the thermo-fluid-dynamic parameters. The present computations were carried out with a direct simulation Monte Carlo code (DS2V) in the velocity interval 7600-12000 m/s, considering only the Orion capsule at an altitude of 85 km. The results confirm the earlier finding that, when ionization is not considered, the two chemical models compute only slightly different gas compositions in the core of the shock wave and practically the same composition on the surface, and therefore the same heat flux. Conversely, when ionization is considered, the models compute different compositions throughout the shock layer and on the surface, and therefore different heat fluxes. The analysis relies on both a qualitative and a quantitative evaluation of the effects of ionization on the two chemical models.
The main result of the study is that, when ionization is taken into account, the Park model is more reactive than the Gupta model; consequently, the heat flux computed with Park is lower than that computed with Gupta, and the Gupta model is therefore recommended for the design of a thermal protection system.
An application of high authority/low authority control and positivity
NASA Technical Reports Server (NTRS)
Seltzer, S. M.; Irwin, D.; Tollison, D.; Waites, H. B.
1988-01-01
Control Dynamics Company (CDy), in conjunction with NASA Marshall Space Flight Center (MSFC), has supported the U.S. Air Force Wright Aeronautical Laboratory (AFWAL) in investigating the implementation of several DOD control techniques intended to provide vibration suppression and precise attitude control for flexible space structures. AFWAL issued a contract to Control Dynamics to perform this work under the Active Control Technique Evaluation for Spacecraft (ACES) Program. The High Authority Control/Low Authority Control (HAC/LAC) and Positivity control techniques, which were cultivated under the DARPA Active Control of Space Structures (ACOSS) Program, were applied to a structural model of the NASA/MSFC Ground Test Facility ACES configuration. The control system designs were completed, and linear post-analyses of the closed-loop systems are provided. The designs take into account the effects of sampling and delay in the control computer. Nonlinear simulation runs were used to verify the control system designs and their implementations in the facility control computers. Finally, test results are given to verify operation of the control systems in the test facility.
VARTM Model Development and Verification
NASA Technical Reports Server (NTRS)
Cano, Roberto J. (Technical Monitor); Dowling, Norman E.
2004-01-01
In this investigation, a comprehensive Vacuum Assisted Resin Transfer Molding (VARTM) process simulation model was developed and verified. The model incorporates resin flow through the preform, compaction and relaxation of the preform, and viscosity and cure kinetics of the resin. The computer model can be used to analyze the resin flow details, track the thickness change of the preform, predict the total infiltration time and final fiber volume fraction of the parts, and determine whether the resin could completely infiltrate and uniformly wet out the preform.
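The resin-flow part of such a model is governed by Darcy's law; a one-dimensional fill-time estimate under constant vacuum pressure illustrates the scaling (all material values are assumed for illustration, not taken from the VARTM model):

```python
# One-dimensional Darcy-law fill-time estimate for resin infusion:
# under a constant pressure difference dP the flow front obeys
#   L_f(t) = sqrt(2*K*dP*t / (mu*phi)),  so  t_fill = mu*phi*L^2 / (2*K*dP).
K = 1e-10   # preform permeability (m^2), assumed
phi = 0.5   # preform porosity, assumed
mu = 0.2    # resin viscosity (Pa.s), assumed constant (no cure advancement)
dP = 1.0e5  # vacuum-driven pressure difference (Pa)
L = 0.5     # infusion length (m)

t_fill = mu * phi * L ** 2 / (2 * K * dP)
print(t_fill / 60.0)  # fill time in minutes
```

The quadratic dependence on infusion length is the key practical takeaway: doubling the flow path quadruples the fill time, which is why full process models also track viscosity rise from cure kinetics during long infusions.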
Solving constant-coefficient differential equations with dielectric metamaterials
NASA Astrophysics Data System (ADS)
Zhang, Weixuan; Qu, Che; Zhang, Xiangdong
2016-07-01
Recently, the concept of metamaterial analog computing has been proposed (Silva et al 2014 Science 343 160-3), in which mathematical operations such as spatial differentiation, integration, and convolution are performed using designed metamaterial blocks. Motivated by this work, we propose a practical approach based on dielectric metamaterials to solve differential equations. An ordinary differential equation can be solved accurately by a correctly designed metamaterial system. Numerical simulations using well-established routines have been performed and successfully verify all theoretical analyses.
NASA Astrophysics Data System (ADS)
Gan, Chenquan; Yang, Xiaofan
2015-05-01
In this paper, a new computer virus propagation model, which incorporates the effects of removable storage media and antivirus software, is proposed and analyzed. The model's unique equilibrium is shown to be globally stable, independently of the system parameters. Numerical simulations not only verify this result but also illustrate the influence of removable storage media and antivirus software on viral spread. On this basis, some practical measures for suppressing virus prevalence are suggested.
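The qualitative structure of such a model can be sketched with a simple SIS-style compartment system; the equations and parameter values below are illustrative stand-ins, not the model analyzed in the paper:

```python
# Illustrative compartment model of viral spread: S = susceptible and
# I = infected computer fractions. Removable media contribute a
# contact-independent infection term eps*S; antivirus software adds an
# extra cure rate gamma_av on top of the baseline gamma.
beta, gamma, eps, gamma_av = 0.3, 0.05, 0.01, 0.10

def step(S, I, dt=0.01):
    new_inf = (beta * I + eps) * S
    cured = (gamma + gamma_av) * I
    flow = dt * (new_inf - cured)
    return S - flow, I + flow   # total population fraction is conserved

S, I = 0.99, 0.01
for _ in range(200_000):        # integrate to (near) equilibrium
    S, I = step(S, I)
print(I)  # endemic infected fraction at equilibrium
```

With the eps term present, the infection never dies out completely even under aggressive cleaning, which mirrors the paper's point that removable media sustain prevalence; raising gamma_av (better antivirus) lowers the endemic level.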
Numerical prediction of meteoric infrasound signatures
NASA Astrophysics Data System (ADS)
Nemec, Marian; Aftosmis, Michael J.; Brown, Peter G.
2017-06-01
We present a thorough validation of a computational approach to predict infrasonic signatures of centimeter-sized meteoroids. This is the first direct comparison of computational results with well-calibrated observations that include trajectories, optical masses and ground pressure signatures. We assume that the energy deposition along the meteor trail is dominated by atmospheric drag and simulate a steady, inviscid flow of air in thermochemical equilibrium to compute a near-body pressure signature of the meteoroid. This signature is then propagated through a stratified and windy atmosphere to the ground using a methodology from aircraft sonic-boom analysis. The results show that when the source of the signature is the cylindrical Mach-cone, the simulations closely match the observations. The prediction of the shock rise-time, the zero-peak amplitude of the waveform and the duration of the positive pressure phase are consistently within 10% of the measurements. Uncertainty in primarily the shape of the meteoroid results in a poorer prediction of the trailing part of the waveform. Overall, our results independently verify energy deposition estimates deduced from optical observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amadio, G.; et al.
An intensive R&D and programming effort is required to meet the challenges posed by future experimental high-energy physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of existing HEP detector simulation software and the ideal achievable performance, exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading), and high-level parallelism (MPI), leveraging both multi-core and many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. To exploit the potential of vectorization and accelerators, and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests that verify the vectorized models by checking their consistency with the corresponding Geant4 models, and validate them against experimental data.
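An automated consistency check of this kind can be as simple as a Pearson chi-square comparison of binned samples from the two implementations; a sketch with synthetic "reference" and "vectorized" samplers standing in for Geant4 and GeantV outputs:

```python
import numpy as np

def chi2_two_histograms(h1, h2):
    """Pearson chi-square comparison of two event-count histograms,
    e.g. a vectorized model's sampled energies against a reference."""
    mask = (h1 + h2) > 0
    chi2 = np.sum((h1[mask] - h2[mask]) ** 2 / (h1[mask] + h2[mask]))
    return chi2, int(mask.sum())   # statistic and (approximate) dof

# Two samplers that should agree: here both draw from the same
# exponential distribution, mimicking a correctly ported model.
rng = np.random.default_rng(4)
edges = np.linspace(0, 10, 41)
ref = np.histogram(rng.exponential(2.0, 100_000), bins=edges)[0]
new = np.histogram(rng.exponential(2.0, 100_000), bins=edges)[0]

chi2, ndf = chi2_two_histograms(ref, new)
print(chi2 / ndf)  # ~1 when the two samplers are statistically consistent
```

A test harness would flag a port as suspect when chi2/ndf drifts well above 1 across repeated runs; a battery of such tests over energies, angles, and materials is what "automated statistical verification" amounts to in practice.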
Verification of Small Hole Theory for Application to Wire Chafing Resulting in Shield Faults
NASA Technical Reports Server (NTRS)
Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.
2011-01-01
Our work is focused upon developing methods for wire chafe fault detection through the use of reflectometry to assess shield integrity. When shielded electrical aircraft wiring first begins to chafe, the typical evidence is one or more small holes in the shielding. We are developing the algorithms and signal processing necessary to detect these small holes before the inner conductors are damaged. Our approach has been to develop a first-principles physics model combined with probabilistic inference, and to verify this model with laboratory experiments as well as through simulation. Previously we presented the electromagnetic small-hole theory and how it might be applied to coaxial cable. In this presentation, we present our efforts to verify this theoretical approach with high-fidelity electromagnetic simulations (COMSOL). Laboratory observations are used to parameterize the computationally efficient theoretical model with probabilistic inference, resulting in quantification of hole size and location. Our efforts in characterizing faults in coaxial cable are leading to fault detection in shielded twisted pair, as well as analysis of intermittently faulty connectors, using similar techniques.
Homogeneity and EPR metrics for assessment of regular grids used in CW EPR powder simulations.
Crăciun, Cora
2014-08-01
CW EPR powder spectra may be approximated numerically using a spherical grid and a Voronoi tessellation-based cubature. For a given spin system, the quality of simulated EPR spectra depends on the grid type, size, and orientation in the molecular frame. In previous work, the grids used in CW EPR powder simulations have been compared mainly from a geometric perspective; however, some grids with a similar degree of homogeneity generate simulated spectra of different quality. This paper evaluates the grids from an EPR perspective, by defining two metrics that depend on the spin system characteristics and the grid's Voronoi tessellation. The first metric determines whether the grid points are EPR-centred in their Voronoi cells, based on the resonance magnetic field variations inside these cells. The second metric verifies whether adjacent Voronoi cells of the tessellation are EPR-overlapping, by computing the common range of their resonance magnetic field intervals. Besides a series of well-known regular grids, the paper investigates a modified ZCW grid and a Fibonacci spherical code, which are new in the context of EPR simulations. For the investigated grids, the EPR metrics carry more information than the homogeneity quantities and are better related to the grids' EPR behaviour for different spin system symmetries. The metrics' efficiency and limits are finally verified for grids generated from the initial ones, using the original or magnetic-field-constrained variants of the Spherical Centroidal Voronoi Tessellation method.
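The Fibonacci spherical code mentioned above generates a near-uniform orientation grid in closed form; a minimal sketch of the standard construction (not necessarily the paper's exact variant):

```python
import numpy as np

def fibonacci_sphere(n):
    """Near-uniform orientation grid from the Fibonacci spherical code."""
    golden = (1 + np.sqrt(5)) / 2
    k = np.arange(n)
    theta = np.arccos(1 - 2 * (k + 0.5) / n)   # polar angle: uniform in cos(theta)
    phi = 2 * np.pi * k / golden               # azimuth stepped by the golden ratio
    return np.column_stack([np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(theta)])

pts = fibonacci_sphere(1000)
print(np.linalg.norm(pts, axis=1).min())  # every point lies on the unit sphere
print(np.abs(pts.mean(axis=0)).max())     # centroid near the origin (near-uniform)
```

Because the points come from a single closed-form sequence, grid size can be chosen freely, unlike regular grids whose sizes are tied to subdivision schemes; the paper's EPR metrics are then what decide whether such a grid is actually well matched to a given spin system.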
Gohean, Jeffrey R; George, Mitchell J; Pate, Thomas D; Kurusz, Mark; Longoria, Raul G; Smalling, Richard W
2013-01-01
The purpose of this investigation is to use a computational model to compare a synchronized valveless pulsatile left ventricular assist device with continuous flow left ventricular assist devices at the same level of device flow, and to verify the model with in vivo porcine data. A dynamic system model of the human cardiovascular system was developed to simulate the support of a healthy or failing native heart from a continuous flow left ventricular assist device or a synchronous pulsatile valveless dual-piston positive displacement pump. These results were compared with measurements made during in vivo porcine experiments. Results from the simulation model and from the in vivo counterpart show that the pulsatile pump provides higher cardiac output, left ventricular unloading, cardiac pulsatility, and aortic valve flow as compared with the continuous flow model at the same level of support. The dynamic system model developed for this investigation can effectively simulate human cardiovascular support by a synchronous pulsatile or continuous flow ventricular assist device.
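A lumped-parameter sketch of the kind of dynamic system model described above, reduced to a two-element Windkessel driven by continuous versus pulsatile device flow (all parameter values are illustrative and this is far simpler than the authors' full cardiovascular model):

```python
import numpy as np

# Two-element Windkessel: C dP/dt = Q_in(t) - P/R, with arterial pressure P
# driven by the assist-device flow Q_in. Parameter values are illustrative.
R = 1.0          # systemic vascular resistance (mmHg.s/mL)
C = 1.5          # arterial compliance (mL/mmHg)
dt, T = 1e-3, 20.0
t = np.arange(0, T, dt)

def arterial_pressure(q_in):
    P = np.empty_like(q_in)
    P[0] = 80.0                      # initial pressure (mmHg)
    for k in range(len(t) - 1):
        P[k + 1] = P[k] + dt * (q_in[k] - P[k] / R) / C
    return P

q_mean = 90.0                                   # mL/s, same mean flow for both devices
q_cont = np.full_like(t, q_mean)                # continuous-flow device
q_puls = q_mean * (1 + np.sin(2 * np.pi * t))   # synchronized pulsatile device, 1 Hz

p_cont = arterial_pressure(q_cont)[-5000:]      # keep the post-transient window
p_puls = arterial_pressure(q_puls)[-5000:]
print(p_cont.max() - p_cont.min())   # ~0 mmHg: no pulse pressure
print(p_puls.max() - p_puls.min())   # restored pulse pressure
```

Even this toy model reproduces the headline contrast: at identical mean flow, only the pulsatile drive produces arterial pulsatility, while the continuous device flattens the pressure waveform.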
Nishio, Yousuke; Usuda, Yoshihiro; Matsui, Kazuhiko; Kurata, Hiroyuki
2008-01-01
The phosphotransferase system (PTS) is the sugar transportation machinery that is widely distributed in prokaryotes and is critical for enhanced production of useful metabolites. To increase the glucose uptake rate, we propose a rational strategy for designing the molecular architecture of the Escherichia coli glucose PTS by using a computer-aided design (CAD) system, and we verified the simulated results with biological experiments. CAD supports construction of a biochemical map, mathematical modeling, simulation, and system analysis. Assuming that the PTS aims at controlling the glucose uptake rate, the PTS was decomposed into hierarchical modules, functional and flux modules, and the effect of changes in gene expression on the glucose uptake rate was simulated to form a rational strategy for engineering the gene regulatory network. This design and analysis predicted that an mlc knockout mutant overexpressing the ptsI gene would greatly increase the specific glucose uptake rate. Biological experiments validated the prediction and the presented strategy, thereby enhancing the specific glucose uptake rate. PMID:18197177
AMPS data management concepts. [Atmospheric, Magnetospheric and Plasma in Space experiment
NASA Technical Reports Server (NTRS)
Metzelaar, P. N.
1975-01-01
Five typical AMPS experiments were formulated to allow simulation studies to verify data management concepts. Design studies were conducted to analyze these experiments in terms of the applicable procedures and the data processing and display functions. Design concepts for the AMPS data management system are presented which permit both automatic repetitive measurement sequences and experimenter-controlled step-by-step procedures. Extensive use is made of a cathode-ray-tube display, the experimenter's alphanumeric keyboard, and the computer. The types of computer software required by the system and the possible choices of control and display procedures available to the experimenter are described through several examples. An electromagnetic wave transmission experiment illustrates the methods used to analyze data processing requirements.
Predictive Control of Networked Multiagent Systems via Cloud Computing.
Liu, Guo-Ping
2017-01-18
This paper studies the design and analysis of networked multiagent predictive control systems via cloud computing. A cloud predictive control scheme for networked multiagent systems (NMASs) is proposed to achieve consensus and stability simultaneously and to compensate for network delays actively. The design of the cloud predictive controller for NMASs is detailed. The analysis of the cloud predictive control scheme gives the necessary and sufficient conditions of stability and consensus of closed-loop networked multiagent control systems. The proposed scheme is verified to characterize the dynamical behavior and control performance of NMASs through simulations. The outcome provides a foundation for the development of cooperative and coordinative control of NMASs and its applications.
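The active delay compensation described can be illustrated for the simplest case: single-integrator agents over a fixed graph, where a controller reconstructs the current state from delayed measurements plus the buffered control history. A minimal sketch, assuming single-integrator dynamics and a three-agent line graph (the graph, gain, and delay values are illustrative, not the paper's setup):

```python
def simulate(n_steps=400, d=5, eps=0.2, predictive=True):
    """Consensus over a network with a d-step measurement delay. With
    predictive=True, the controller rolls each agent's known model forward
    over the delay using the buffered control inputs."""
    x = [0.0, 5.0, 10.0]                      # three agents on a line graph
    neighbours = {0: [1], 1: [0, 2], 2: [1]}
    hist_x = [list(x)]                        # received (delayed) measurements
    hist_u = []                               # transmitted control inputs
    for _ in range(n_steps):
        xd = hist_x[max(0, len(hist_x) - 1 - d)]   # d-step-old state
        if predictive:
            est = list(xd)                    # predict current state from x_{k-d}
            for uk in hist_u[-d:]:
                est = [est[i] + uk[i] for i in range(3)]
        else:
            est = xd                          # naive: act on stale measurements
        u = [-eps * sum(est[i] - est[j] for j in neighbours[i])
             for i in range(3)]
        x = [x[i] + u[i] for i in range(3)]   # x_{k+1} = x_k + u_k
        hist_x.append(list(x))
        hist_u.append(u)
    return x

print(simulate(predictive=True))   # all agents close to the average, 5.0
```

Because the agent model is exact here, the prediction step removes the delay completely; with `predictive=False`, larger delays or gains degrade or destroy convergence.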
NASA Astrophysics Data System (ADS)
Nakatsuji, Noriaki; Matsushima, Kyoji
2017-03-01
Full-parallax high-definition CGHs composed of more than a billion pixels have so far been created only by the polygon-based method because of its high performance. Recently, however, GPUs have made it possible to generate CGHs much faster from point clouds. In this paper, we measure the computation time of object fields for full-parallax high-definition CGHs, composed of 4 billion pixels and reconstructing the same scene, using both the point-cloud method on a GPU and the polygon-based method on a CPU. In addition, we compare the optical and simulated reconstructions of the CGHs created by these techniques to verify their image quality.
Computer programs for generation and evaluation of near-optimum vertical flight profiles
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Waters, M. H.; Patmore, L. C.
1983-01-01
Two extensive computer programs were developed. The first, called OPTIM, generates a reference near-optimum vertical profile, and it contains control options so that the effects of various flight constraints on cost performance can be examined. The second, called TRAGEN, is used to simulate an aircraft flying along an optimum or any other vertical reference profile. TRAGEN is used to verify OPTIM's output, examine the effects of uncertainty in the values of parameters (such as prevailing wind) which govern the optimum profile, or compare the cost performance of profiles generated by different techniques. A general description of these programs, the efforts to add special features to them, and sample results of their usage are presented.
NASA Astrophysics Data System (ADS)
Ouyang, Chaojun; He, Siming; Xu, Qiang; Luo, Yu; Zhang, Wencheng
2013-03-01
A two-dimensional mountainous mass flow dynamic procedure solver (Massflow-2D) using the MacCormack-TVD finite difference scheme is proposed. The solver is implemented in Matlab on structured meshes with a variable computational domain. To verify the model, a variety of numerical test scenarios, namely, the classical one-dimensional and two-dimensional dam break, the landslide in Hong Kong in 1993, and the Nora debris flow in the Italian Alps in 2000, are executed, and the model outputs are compared with published results. It is established that the model predictions agree well with both the analytical solutions and the field observations.
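The classical 1D dam-break verification test mentioned above can be reproduced with a bare two-step MacCormack scheme for the shallow-water equations. This sketch omits the TVD limiter the solver uses, so mild oscillations near the shock are expected; grid size, depths, and CFL number are illustrative assumptions:

```python
import numpy as np

def dam_break_maccormack(n=200, t_end=0.05, g=9.81, hl=2.0, hr=1.0):
    """1D shallow-water dam break via the two-step MacCormack scheme
    (predictor: forward differences; corrector: backward differences)."""
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    h = np.where(x < 0.5, hl, hr)          # initial depth: dam at x = 0.5
    q = np.zeros(n)                        # discharge h*u, fluid at rest

    def flux(h, q):
        return np.array([q, q**2 / h + 0.5 * g * h**2])

    t = 0.0
    while t < t_end:
        dt = 0.4 * dx / np.max(np.abs(q / h) + np.sqrt(g * h))  # CFL condition
        dt = min(dt, t_end - t)
        U = np.array([h, q])
        F = flux(h, q)
        Up = U.copy()                      # predictor step
        Up[:, :-1] = U[:, :-1] - dt / dx * (F[:, 1:] - F[:, :-1])
        Fp = flux(Up[0], Up[1])
        Uc = U.copy()                      # corrector step, averaged
        Uc[:, 1:] = 0.5 * (U[:, 1:] + Up[:, 1:]
                           - dt / dx * (Fp[:, 1:] - Fp[:, :-1]))
        h, q = Uc[0], Uc[1]
        t += dt
    return x, h
```

Stopping before the waves reach the boundaries, the computed depth stays between the upstream and downstream levels and conserves mass, which is the kind of analytical check the paper uses for verification.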
Arithmetic Circuit Verification Based on Symbolic Computer Algebra
NASA Astrophysics Data System (ADS)
Watanabe, Yuki; Homma, Naofumi; Aoki, Takafumi; Higuchi, Tatsuo
This paper presents a formal approach to verifying arithmetic circuits using symbolic computer algebra. Our method describes arithmetic circuits directly with high-level mathematical objects based on weighted number systems and arithmetic formulae. Such circuit descriptions can be effectively verified by polynomial reduction techniques using Gröbner bases. In this paper, we describe how symbolic computer algebra can be used to describe and verify arithmetic circuits. The advantages of the proposed approach are demonstrated through experimental verification of arithmetic circuits such as a multiply-accumulator and an FIR filter. The results show that the proposed approach is a promising means of verifying practical arithmetic circuits.
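The reduction idea can be illustrated on a one-bit full adder: encode each gate as a polynomial, compute a Gröbner basis of the gate ideal plus the Boolean constraints, and check that the word-level specification reduces to zero. A toy sketch in Python/SymPy; the paper's actual formulation over weighted number systems is richer:

```python
from sympy import symbols, groebner

a, b, cin, t1, t2, t3, s, cout = symbols('a b cin t1 t2 t3 s cout')

# Gate-level polynomial model: XOR(x,y) = x+y-2xy, AND = xy, OR = x+y-xy
gates = [
    t1 - (a + b - 2*a*b),            # t1 = a XOR b
    s - (t1 + cin - 2*t1*cin),       # s  = t1 XOR cin (sum bit)
    t2 - a*b,                        # t2 = a AND b
    t3 - t1*cin,                     # t3 = t1 AND cin
    cout - (t2 + t3 - t2*t3),        # cout = t2 OR t3 (carry bit)
]
booleans = [v**2 - v for v in (a, b, cin)]   # inputs restricted to {0, 1}

# lex order with internal/output wires ranked highest, so they are eliminated
G = groebner(gates + booleans, cout, s, t3, t2, t1, a, b, cin, order='lex')
spec = 2*cout + s - (a + b + cin)            # word-level adder specification
remainder = G.reduce(spec)[1]
print(remainder)   # 0 -> circuit implements the specification
```

The remainder is zero exactly when the circuit satisfies the specification for all 0/1 inputs; a buggy gate encoding leaves a nonzero residue pinpointing the mismatch.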
Adaptive-Grid Methods for Phase Field Models of Microstructure Development
NASA Technical Reports Server (NTRS)
Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.
1999-01-01
In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.
Investigation of BPF algorithm in cone-beam CT with 2D general trajectories.
Zou, Jing; Gui, Jianbao; Rong, Junyan; Hu, Zhanli; Zhang, Qiyang; Xia, Dan
2012-01-01
A mathematical derivation was conducted to illustrate that exact 3D image reconstruction could be achieved for z-homogeneous phantoms from data acquired with 2D general trajectories using the back projection filtration (BPF) algorithm. The conclusion was verified by computer simulation and experimental result with a circular scanning trajectory. Furthermore, the effect of the non-uniform degree along z-axis of the phantoms on the accuracy of the 3D reconstruction by BPF algorithm was investigated by numerical simulation with a gradual-phantom and a disk-phantom. The preliminary result showed that the performance of BPF algorithm improved with the z-axis homogeneity of the scanned object.
Finite Larmor radius effects on the (m = 2, n = 1) cylindrical tearing mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y.; Chowdhury, J.; Parker, S. E.
2015-04-15
New field solvers are developed in the gyrokinetic code GEM [Chen and Parker, J. Comput. Phys. 220, 839 (2007)] to simulate low-n modes. A novel discretization is developed for the ion polarization term in the gyrokinetic vorticity equation. An eigenmode analysis with finite Larmor radius effects is developed to study the linear resistive tearing mode. The mode growth rate is shown to scale with resistivity as γ ∼ η^(1/3), the same as the semi-collisional regime in previous kinetic treatments [Drake and Lee, Phys. Fluids 20, 1341 (1977)]. Tearing mode simulations with gyrokinetic ions are verified with the eigenmode calculation.
NASA Astrophysics Data System (ADS)
Irwandi, Irwandi; Fashbir; Daryono
2018-04-01
The Neo-Deterministic Seismic Hazard Assessment (NDSHA) method is a seismic hazard assessment method whose advantage lies in realistic physical simulation of the source, the propagation path, and the geological-geophysical structure. The simulation is capable of generating synthetic seismograms at the sites being observed. At the regional NDSHA scale, strong ground motion is calculated with the 1D modal summation technique because it is computationally more efficient. In this article, we verify synthetic seismogram calculations against field observations of the Pidie Jaya earthquake of 7 December 2016, which had a moment magnitude of M6.5. The data were recorded by broadband seismometers installed by BMKG (Indonesian Agency for Meteorology, Climatology and Geophysics). The comparison shows that some stations agree well with the observations while others show discrepancies. Based on these results, the 1D modal summation technique is well verified for thin-sediment regions (near the pre-Tertiary basement) but less suitable for thick-sediment regions, because it excludes the amplification of seismic waves occurring within thick sediments. Another approach is therefore needed, e.g., the 2D finite-difference hybrid method, which is part of the local-scale NDSHA method.
Detached Eddy Simulation of the UH-60 Rotor Wake Using Adaptive Mesh Refinement
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Ahmad, Jasim U.
2012-01-01
Time-dependent Navier-Stokes flow simulations have been carried out for a UH-60 rotor with simplified hub in forward flight and hover flight conditions. Flexible rotor blades and flight trim conditions are modeled and established by loosely coupling the OVERFLOW Computational Fluid Dynamics (CFD) code with the CAMRAD II helicopter comprehensive code. High order spatial differences, Adaptive Mesh Refinement (AMR), and Detached Eddy Simulation (DES) are used to obtain highly resolved vortex wakes, where the largest turbulent structures are captured. Special attention is directed towards ensuring the dual time accuracy is within the asymptotic range, and verifying the loose coupling convergence process using AMR. The AMR/DES simulation produced vortical worms for forward flight and hover conditions, similar to previous results obtained for the TRAM rotor in hover. AMR proved to be an efficient means to capture a rotor wake without a priori knowledge of the wake shape.
Simulation and statistics: Like rhythm and song
NASA Astrophysics Data System (ADS)
Othman, Abdul Rahman
2013-04-01
Simulation was introduced to solve problems posed as systems. The technique overcomes two kinds of problems. First, a problem may have an analytical solution, but the cost of running an experiment to obtain it is high in terms of money or lives. Second, a problem may exist that has no analytical solution. In the field of statistical inference the second kind is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques such as the bootstrap and permutation tests to form a pseudo sampling distribution that leads to the solution of a problem that cannot be solved analytically. This paper discusses how Monte Carlo simulation was, and still is, used to verify analytical solutions in inference. It also discusses resampling techniques as simulation techniques, examines common misunderstandings about the two, and describes successful uses of both.
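The bootstrap idea referred to above can be sketched in a few lines: resample the data with replacement many times, recompute the statistic on each resample, and read a confidence interval off the resulting pseudo sampling distribution. The sample data and replication count here are illustrative:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(data)
    # n_boot resampled statistics form the pseudo sampling distribution
    reps = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = [4.1, 5.2, 3.8, 4.9, 5.5, 4.4, 5.0, 4.7, 4.3, 5.1]
lo, hi = bootstrap_ci(sample)
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```

The same machinery works for statistics with no convenient analytical sampling distribution, such as the median or a trimmed mean, by passing a different `stat`.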
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.
Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe
2015-08-07
Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional MC implementations require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We demonstrate that PhiMC delivers dose distributions in excellent agreement with DPM. The multi-core implementation of PhiMC scales well across different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can address several hundred gigabytes of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.
Hardware based redundant multi-threading inside a GPU for improved reliability
Sridharan, Vilas; Gurumurthi, Sudhanva
2015-05-05
A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
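A software analogue of this redundant-multithreading scheme runs independent instances of the same computation and cross-checks their outputs before accepting a result. A minimal sketch (the patent describes a hardware mechanism inside a GPU; the function names and thread-based execution here are illustrative):

```python
import concurrent.futures

def checksum(data):
    # stand-in for an arbitrary deterministic computation to be verified
    acc = 0
    for x in data:
        acc = (acc * 31 + x) % (2**32)
    return acc

def verified_run(fn, arg, n_copies=2):
    """Run n redundant instances of the same computation on separate
    threads and cross-check their outputs against each other."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_copies) as ex:
        futures = [ex.submit(fn, arg) for _ in range(n_copies)]
        results = [f.result() for f in futures]
    if len(set(results)) != 1:
        # a transient fault in one instance shows up as a mismatch
        raise RuntimeError(f"redundant outputs disagree: {results}")
    return results[0]

print(verified_run(checksum, list(range(1000))))
```

In hardware, the comparison happens in a dedicated checker rather than in software, but the accept-only-on-agreement logic is the same.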
Design and Experimental Study of an Over-Under TBCC Exhaust System.
Mo, Jianwei; Xu, Jinglei; Zhang, Liuhuan
2014-01-01
Turbine-based combined-cycle (TBCC) propulsion systems have been a topic of research as a means for more efficient flight at supersonic and hypersonic speeds. The present study focuses on the fundamental physics of the complex flow in the TBCC exhaust system during the transition mode, as the turbine exhaust is shut off and the ramjet exhaust is increased. A TBCC exhaust system was designed using the method of characteristics (MOC) and subjected to experimental and computational study. The main objectives of the study were: (1) to identify the interactions between the two exhaust jet streams during the transition mode and their effects on the whole flow-field structure; (2) to determine and verify the aerodynamic performance of the over-under TBCC exhaust nozzle; and (3) to validate the simulation ability of the computational fluid dynamics (CFD) software against the experimental conditions. Static pressure taps and a Schlieren apparatus were employed to obtain the wall pressure distributions and flow-field structures. Steady-state tests were performed with the ramjet nozzle cowl at six different positions, with the turbine flow path half closed and fully opened, respectively. CFD methods were used to simulate the exhaust flow; they complemented the experimental study by providing greater insight into the details of the flow field and a means of verifying the experimental results. Results indicated that the flow structure was complicated because the two exhaust jet streams interacted with each other during the exhaust system mode transition. The exhaust system thrust coefficient varied from 0.9288 to 0.9657 during the process. The CFD simulation results agree well with the experimental data, demonstrating that the CFD methods were effective in evaluating the aerodynamic performance of the TBCC exhaust system during the mode transition.
Virtual Instrument Simulator for CERES
NASA Technical Reports Server (NTRS)
Chapman, John J.
1997-01-01
A benchtop virtual instrument simulator for CERES (Clouds and the Earth's Radiant Energy System) has been built at NASA, Langley Research Center in Hampton, VA. The CERES instruments will fly on several earth orbiting platforms notably NASDA's Tropical Rainfall Measurement Mission (TRMM) and NASA's Earth Observing System (EOS) satellites. CERES measures top of the atmosphere radiative fluxes using microprocessor controlled scanning radiometers. The CERES Virtual Instrument Simulator consists of electronic circuitry identical to the flight unit's twin microprocessors and telemetry interface to the supporting spacecraft electronics and two personal computers (PC) connected to the I/O ports that control azimuth and elevation gimbals. Software consists of the unmodified TRW developed Flight Code and Ground Support Software which serves as the instrument monitor and NASA/TRW developed engineering models of the scanners. The CERES Instrument Simulator will serve as a testbed for testing of custom instrument commands intended to solve in-flight anomalies of the instruments which could arise during the CERES mission. One of the supporting computers supports the telemetry display which monitors the simulator microprocessors during the development and testing of custom instrument commands. The CERES engineering development software models have been modified to provide a virtual instrument running on a second supporting computer linked in real time to the instrument flight microprocessor control ports. The CERES Instrument Simulator will be used to verify memory uploads by the CERES Flight Operations TEAM at NASA. Plots of the virtual scanner models match the actual instrument scan plots. A high speed logic analyzer has been used to track the performance of the flight microprocessor. 
The concept of using an identical but non-flight qualified microprocessor and electronics ensemble linked to a virtual instrument with identical system software affords a relatively inexpensive simulation system capable of high fidelity.
NASA Astrophysics Data System (ADS)
Schafhirt, S.; Kaufer, D.; Cheng, P. W.
2014-12-01
In recent years many advanced load simulation tools allowing aero-servo-hydro-elastic analysis of an entire offshore wind turbine have been developed and verified. Nowadays even an offshore wind turbine with a complex support structure such as a jacket can be analysed. However, the computational effort rises significantly with an increasing level of detail. This is especially true for offshore wind turbines with lattice support structures, since those models naturally have more nodes and elements than simpler monopile structures. During the design process multiple load simulations are required to obtain an optimal solution. For pre-design tasks it is crucial to apply load simulations that keep simulation quality and computational effort in balance. The paper introduces a reference wind turbine model consisting of the REpower5M wind turbine and a highly detailed jacket support structure. In total, twelve variations of this reference model are derived and presented; the main focus is on simplifying the models of the support structure and the foundation. The reference model and the simplified models are simulated with the coupled simulation tool Flex5-Poseidon and analysed with regard to frequencies, fatigue loads, and ultimate loads. A model has been found that achieves an adequate increase in simulation speed while keeping the results within an acceptable range of the reference results.
47 CFR 73.151 - Field strength measurements to establish performance of directional antennas.
Code of Federal Regulations, 2010 CFR
2010-10-01
... verified either by field strength measurement or by computer modeling and sampling system verification. (a... specifically identified by the Commission. (c) Computer modeling and sample system verification of modeled... performance verified by computer modeling and sample system verification. (1) A matrix of impedance...
NASA Technical Reports Server (NTRS)
Pavish, D. L.; Spaulding, M. L.
1977-01-01
A computer coded Lagrangian marker particle in Eulerian finite difference cell solution to the three dimensional incompressible mass transport equation, Water Advective Particle in Cell Technique, WAPIC, was developed, verified against analytic solutions, and subsequently applied in the prediction of long term transport of a suspended sediment cloud resulting from an instantaneous dredge spoil release. Numerical results from WAPIC were verified against analytic solutions to the three dimensional incompressible mass transport equation for turbulent diffusion and advection of Gaussian dye releases in unbounded uniform and uniformly sheared uni-directional flow, and for steady-uniform plug channel flow. WAPIC was utilized to simulate an analytic solution for non-equilibrium sediment dropout from an initially vertically uniform particle distribution in one dimensional turbulent channel flow.
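The verification strategy described, comparing marker-particle statistics against the analytic Gaussian solution of the advection-diffusion equation, can be sketched with a 1D random-walk particle model: each particle advects by u·dt and takes a Gaussian diffusive kick of variance 2·D·dt, so after time t the cloud should have mean u·t and variance 2·D·t. Parameter values are illustrative:

```python
import random

def particle_cloud_stats(n=10000, d=0.5, u=1.0, t_end=1.0, dt=0.01, seed=7):
    """Random-walk marker particles for 1D advection-diffusion,
    released as a point source at x = 0 and tracked to t_end."""
    rng = random.Random(seed)
    x = [0.0] * n
    sigma = (2 * d * dt) ** 0.5            # per-step diffusive kick
    for _ in range(int(t_end / dt)):
        x = [xi + u * dt + rng.gauss(0.0, sigma) for xi in x]
    mean = sum(x) / n
    var = sum((xi - mean) ** 2 for xi in x) / n
    return mean, var

mean, var = particle_cloud_stats()
# analytic Gaussian solution: mean = u*t = 1.0, variance = 2*D*t = 1.0
print(mean, var)
```

The full WAPIC code does this in 3D on an Eulerian grid with sheared flows, but the statistical comparison against the closed-form Gaussian is the same in spirit.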
A Study of Neutron Leakage in Finite Objects
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2015-01-01
A computationally efficient 3DHZETRN code capable of simulating High charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for simple shielded objects. Monte Carlo (MC) benchmarks were used to verify the 3DHZETRN methodology in slab and spherical geometry, and it was shown that 3DHZETRN agrees with MC codes to the degree that various MC codes agree among themselves. One limitation in the verification process is that all of the codes (3DHZETRN and three MC codes) utilize different nuclear models/databases. In the present report, the new algorithm, with well-defined convergence criteria, is used to quantify the neutron leakage from simple geometries to provide means of verifying 3D effects and to provide guidance for further code development.
A Blueprint for Demonstrating Quantum Supremacy with Superconducting Qubits
NASA Technical Reports Server (NTRS)
Kechedzhi, Kostyantyn
2018-01-01
Long coherence times and the high-fidelity control recently achieved in scalable superconducting circuits have paved the way for a growing number of experimental studies of many-qubit quantum coherent phenomena in these devices. Although full implementation of quantum error correction and fault-tolerant quantum computation remains a challenge, near-term pre-error-correction devices could allow new fundamental experiments despite the inevitable accumulation of errors. One such open question, foundational for quantum computing, is achieving so-called quantum supremacy: an experimental demonstration of a computational task that takes polynomial time on a quantum computer whereas the best classical algorithm would require exponential time and/or resources. It is possible to formulate such a task for a quantum computer consisting of fewer than 100 qubits. The computational task we consider is to provide approximate samples from a non-trivial quantum distribution. This generalizes to superconducting circuits the ideas behind the boson sampling protocol for quantum optics introduced by Aaronson and Arkhipov. In this presentation we discuss a proof-of-principle demonstration of such a sampling task on a 9-qubit chain of superconducting gmon qubits developed by Google. We discuss a theoretical analysis of the driven evolution of the device, whose output approximates samples from a uniform distribution in the Hilbert space, a quantum chaotic state. We analyze the quantum chaotic characteristics of the output of the circuit and the time required to generate a sufficiently complex quantum distribution. We demonstrate that classical simulation of the sampling output requires exponential resources by connecting the task of calculating the output amplitudes to the sign problem of the quantum Monte Carlo method.
We also discuss the detailed theoretical modeling required to achieve high fidelity control and calibration of the multi-qubit unitary evolution in the device. We use a novel cross-entropy statistical metric as a figure of merit to verify the output and calibrate the device controls. Finally, we demonstrate the statistics of the wave function amplitudes generated on the 9-gmon chain and verify the quantum chaotic nature of the generated quantum distribution. This verifies the implementation of the quantum supremacy protocol.
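The quantum-chaotic signature referred to, Porter-Thomas statistics of the output probabilities, is easy to check numerically for an idealised Haar-random-like state: with i.i.d. complex Gaussian amplitudes, the rescaled bitstring probabilities N·p obey Pr(N·p > x) ≈ exp(−x). A sketch under that idealisation (the experiment verifies this for the actual driven 9-qubit circuit via a cross-entropy metric):

```python
import math
import random

def porter_thomas_check(n_qubits=10, seed=3):
    """Draw a random state with i.i.d. complex Gaussian amplitudes
    (normalised), then compare the empirical survival function of the
    rescaled probabilities N*p at x = 1 against the Porter-Thomas
    prediction exp(-1)."""
    rng = random.Random(seed)
    dim = 2 ** n_qubits
    amps = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(dim)]
    norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
    probs = [abs(a / norm) ** 2 for a in amps]
    x = 1.0
    empirical = sum(1 for p in probs if dim * p > x) / dim
    return empirical, math.exp(-x)

emp, theory = porter_thomas_check()
print(emp, theory)   # both close to exp(-1) ≈ 0.37
```

A device whose output matches this heavy-tailed distribution, rather than the flat distribution produced by depolarised noise, is evidence that the sampled state is genuinely quantum chaotic.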
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, Peter; Varghese, Philip; Goldstein, David
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations, and the computational workload per time step is investigated for the variance reduced method.
Particle-based solid for nonsmooth multidomain dynamics
NASA Astrophysics Data System (ADS)
Nordberg, John; Servin, Martin
2018-04-01
A method for simulation of elastoplastic solids in multibody systems with nonsmooth and multidomain dynamics is developed. The solid is discretised into pseudo-particles using the meshfree moving least squares method for computing the strain tensor. The particles' strain and stress tensor variables are mapped to a compliant deformation constraint. The discretised solid model thus fits into a unified framework for nonsmooth multidomain dynamics simulations, including rigid multibodies with complex kinematic constraints such as articulation joints, unilateral contacts with dry friction, drivelines, and hydraulics. The nonsmooth formulation allows impact impulses to propagate instantly between the rigid multibody and the solid. Plasticity is introduced through an associative perfectly plastic modified Drucker-Prager model. The elastic and plastic dynamics are verified for simple test systems, and the capability of simulating tracked terrain vehicles driving on deformable terrain is demonstrated.
A review of the analytical simulation of aircraft crash dynamics
NASA Technical Reports Server (NTRS)
Fasanella, Edwin L.; Carden, Huey D.; Boitnott, Richard L.; Hayduk, Robert J.
1990-01-01
A large number of full scale tests of general aviation aircraft, helicopters, and one unique air-to-ground controlled impact of a transport aircraft were performed. Additionally, research was also conducted on seat dynamic performance, load-limiting seats, load limiting subfloor designs, and emergency-locator-transmitters (ELTs). Computer programs were developed to provide designers with methods for predicting accelerations, velocities, and displacements of collapsing structure and for estimating the human response to crash loads. The results of full scale aircraft and component tests were used to verify and guide the development of analytical simulation tools and to demonstrate impact load attenuating concepts. Analytical simulation of metal and composite aircraft crash dynamics are addressed. Finite element models are examined to determine their degree of corroboration by experimental data and to reveal deficiencies requiring further development.
Quantum proofs can be verified using only single-qubit measurements
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Nagaj, Daniel; Schuch, Norbert
2016-02-01
Quantum Merlin Arthur (QMA) is the class of problems which, though potentially hard to solve, have a quantum solution that can be verified efficiently using a quantum computer. It thus forms a natural quantum version of the classical complexity class NP (and its probabilistic variant MA, Merlin-Arthur games), where the verifier has only classical computational resources. In this paper, we study what happens when we restrict the quantum resources of the verifier to the bare minimum: individual measurements on single qubits received as they come, one by one. We find that despite this grave restriction, it is still possible to soundly verify any problem in QMA for the verifier with the minimum quantum resources possible, without using any quantum memory or multiqubit operations. We provide two independent proofs of this fact, based on measurement-based quantum computation and the local Hamiltonian problem. The former construction also applies to QMA1, i.e., QMA with one-sided error.
NASA Astrophysics Data System (ADS)
Wissing, Dennis Robert
The purpose of this research was to explore undergraduates' conceptual development of oxygen transport and utilization as a component of cardiopulmonary physiology and advanced respiratory care courses in an allied health program. This exploration focused on students' development of knowledge and the presence of alternative conceptions prior to, during, and after completing cardiopulmonary physiology and advanced respiratory care courses. Using the simulation program SimBioSys(TM) (Samsel, 1994), student-participants completed a series of laboratory exercises focusing on cardiopulmonary disease states. This study examined data gathered from: (1) a novice group receiving the simulation program prior to instruction, (2) a novice group that experienced the simulation program following course completion in cardiopulmonary physiology, and (3) an intermediate group who experienced the simulation program following completion of formal education in respiratory care. This research was based on the theory of Human Constructivism as described by Mintzes, Wandersee, and Novak (1997). Data-gathering techniques were based on theories supported by Novak (1984), Wandersee (1997), and Chi (1997). Data were generated by exams, interviews, verbal analysis (Chi, 1997), and concept mapping. Results suggest that simulation may be an effective instructional method for assessing conceptual development and diagnosing alternative conceptions in undergraduates enrolled in a cardiopulmonary science program. Use of simulation in conjunction with clinical interviews and concept mapping may help verify gaps in learning and conceptual knowledge. This study found only limited evidence to support the use of computer simulation prior to lecture to augment learning. However, it demonstrated that students' prelecture experience with the computer simulation helped the instructor assess what the learners knew so they could be taught accordingly.
In addition, use of computer simulation after formal instruction was shown to be useful in aiding students identified by the instructor as needing remediation.
Investigations on 3-dimensional temperature distribution in a FLATCON-type CPV module
NASA Astrophysics Data System (ADS)
Wiesenfarth, Maike; Gamisch, Sebastian; Kraus, Harald; Bett, Andreas W.
2013-09-01
The thermal flow in a FLATCON®-type CPV module is investigated theoretically and experimentally. For the simulation, a model was established in the computational fluid dynamics (CFD) software SolidWorks Flow Simulation. In order to verify the simulation results, the calculated and measured temperatures were compared under the same operating conditions (wind speed and direction, direct normal irradiance (DNI), and ambient temperature). To this end, an experimental module was manufactured and equipped with temperature sensors at defined positions. In addition, the temperature distribution on the back plate of the module was visualized with infrared imaging. The simulated absolute temperatures and their distribution compare well with the sensor measurements, with an average deviation of only 3.3 K. Finally, the validated model was used to investigate the influence of the back plate material on the temperature distribution by replacing the glass with aluminum. The simulation showed that it is important to consider heat dissipation by radiation when designing a CPV module.
Scalable File Systems for High Performance Computing Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, S A
2007-10-03
Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent the interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being employed in the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent the bond failures seen in such structures during penetration events was not established.
NASA Technical Reports Server (NTRS)
Rising, J. J.; Kairys, A. A.; Maass, C. A.; Siegart, C. D.; Rakness, W. L.; Mijares, R. D.; King, R. W.; Peterson, R. S.; Hurley, S. R.; Wickson, D.
1982-01-01
A limited authority pitch active control system (PACS) was developed for a wide body jet transport (L-1011) with a flying horizontal stabilizer. Two dual channel digital computers and the associated software provide command signals to a dual channel series servo which controls the stabilizer power actuators. Input sensor signals to the computer are pitch rate, column-trim position, and dynamic pressure. Control laws are given for the PACS and the system architecture is defined. The piloted flight simulation and vehicle system simulation tests performed to verify control laws and system operation prior to installation on the aircraft are discussed. Modifications to the basic aircraft are described. Flying qualities of the aircraft with the PACS on and off were evaluated. Handling qualities for cruise and high speed flight conditions with the c.g. at 39% MAC (+1% stability margin) and PACS operating were judged to be as good as the handling qualities with the c.g. at 25% MAC (+15% stability margin) and PACS off.
Particle-Size-Grouping Model of Precipitation Kinetics in Microalloyed Steels
NASA Astrophysics Data System (ADS)
Xu, Kun; Thomas, Brian G.
2012-03-01
The formation, growth, and size distribution of precipitates greatly affects the microstructure and properties of microalloyed steels. Computational particle-size-grouping (PSG) kinetic models based on population balances are developed to simulate precipitate particle growth resulting from collision and diffusion mechanisms. First, the generalized PSG method for collision is explained clearly and verified. Then, a new PSG method is proposed to model diffusion-controlled precipitate nucleation, growth, and coarsening with complete mass conservation and no fitting parameters. Compared with the original population-balance models, this PSG method saves significant computation and preserves enough accuracy to model a realistic range of particle sizes. Finally, the new PSG method is combined with an equilibrium phase fraction model for plain carbon steels and is applied to simulate the precipitated fraction of aluminum nitride and the size distribution of niobium carbide during isothermal aging processes. Good matches are found with experimental measurements, suggesting that the new PSG method offers a promising framework for the future development of realistic models of precipitation.
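The population-balance collision equations that the PSG method accelerates can be sketched directly. The naive size-by-size (Smoluchowski) version below, with an illustrative constant collision kernel, demonstrates the exact mass conservation the grouping scheme must preserve, and its O(m²) cost per time step is precisely what motivates grouping particle sizes:

```python
import numpy as np

def smoluchowski_step(n, K, dt):
    """One explicit Euler step of the discrete Smoluchowski collision
    (population balance) equations. n[i] is the number density of
    particles containing i+1 monomers; K is a constant collision kernel."""
    m = len(n)
    dn = np.zeros_like(n)
    for i in range(m):
        for j in range(m):
            loss = K * n[i] * n[j]
            dn[i] -= loss                  # i-mers consumed by collisions with j-mers
            if i + j + 1 < m:              # (i+1)+(j+1) monomers -> bin index i+j+1
                dn[i + j + 1] += 0.5 * loss  # factor 1/2: ordered pairs counted twice
    return n + dt * dn

# Start from monomers only; total monomer mass must be conserved as long as
# no particles grow past the largest tracked size.
n = np.zeros(100)
n[0] = 1.0
sizes = np.arange(1, len(n) + 1)
mass0 = (sizes * n).sum()
for _ in range(50):
    n = smoluchowski_step(n, K=0.1, dt=0.1)
mass = (sizes * n).sum()
print(mass0, mass)
```

The total number density decreases as particles agglomerate while the total mass stays fixed; the PSG variant described in the abstract reproduces this behavior with far fewer size groups.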
USC orthogonal multiprocessor for image processing with neural networks
NASA Astrophysics Data System (ADS)
Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid
1990-07-01
This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.
Interaction between IGFBP7 and insulin: a theoretical and experimental study
NASA Astrophysics Data System (ADS)
Ruan, Wenjing; Kang, Zhengzhong; Li, Youzhao; Sun, Tianyang; Wang, Lipei; Liang, Lijun; Lai, Maode; Wu, Tao
2016-04-01
Insulin-like growth factor binding protein 7 (IGFBP7) can bind to insulin with high affinity, which inhibits the early steps of insulin action. The lack of a known recognition mechanism impairs our understanding of insulin regulation before insulin binds to the insulin receptor. Here we combine computational simulations with experimental methods to investigate the interaction between IGFBP7 and insulin. Molecular dynamics simulations indicated that His200 and Arg198 in IGFBP7 were key residues. Verified by experimental data, the interaction remained strong in the single mutation systems R198E and H200F but became weak in the double mutation system R198E-H200F relative to that in wild-type IGFBP7. The results and methods of the present study could be adopted in future research on drug discovery by disrupting protein-protein interactions in insulin signaling. Nevertheless, the accuracy, reproducibility, and costs of free-energy calculation are still problems that need to be addressed before computational methods can become standard binding prediction tools in discovery pipelines.
Estimation of Local Bone Loads for the Volume of Interest.
Kim, Jung Jin; Kim, Youkyung; Jang, In Gwun
2016-07-01
Computational bone remodeling simulations have recently received significant attention with the aid of state-of-the-art high-resolution imaging modalities. They have been performed using localized finite element (FE) models rather than full FE models due to the excessive computational costs of full FE models. However, these localized bone remodeling simulations remain to be investigated in more depth. In particular, applying simplified loading conditions (e.g., uniform and unidirectional loads) to localized FE models has a severe limitation for reliable subject-specific assessment. In order to effectively determine the physiological local bone loads for the volume of interest (VOI), this paper proposes a novel method of estimating the local loads when the global musculoskeletal loads are given. The proposed method is verified for three VOIs in a proximal femur in terms of force equilibrium, displacement field, and strain energy density (SED) distribution. The effect of the global load deviation on the local load estimation is also investigated by perturbing a hip joint contact force (HCF) in the femoral head. Deviation in force magnitude exhibits the greatest absolute changes in the SED distribution due to its own greatest deviation, whereas angular deviation perpendicular to the HCF produces the greatest relative change. With further in vivo force measurements and high-resolution clinical imaging modalities, the proposed method will contribute to the development of reliable patient-specific localized FE models, which can provide enhanced computational efficiency for iterative computing processes such as bone remodeling simulations.
Tena, Ana F; Fernández, Joaquín; Álvarez, Eduardo; Casan, Pere; Walters, D Keith
2017-06-01
The need for a better understanding of pulmonary diseases has led to increased interest in the development of realistic computational models of the human lung. To minimize computational cost, a reduced-geometry model is used for the lung airway geometry up to generation 16. Truncated airway branches require physiologically realistic boundary conditions to accurately represent the effect of the removed airway sections. A user-defined function has been developed, which applies velocities mapped from similar locations in fully resolved airway sections. The methodology can be applied in any general purpose computational fluid dynamics code, with the only limitation that the lung model must be symmetrical in each truncated branch. Unsteady simulations have been performed to verify the operation of the model. The test case simulates a spirometry maneuver, in which the lung must rapidly perform both inspiration and expiration. Once the simulation was completed, the pressure obtained in the lower level of the lung was used as a boundary condition. The output velocity, which constitutes a numerical spirometry, was compared with the experimental spirometry for validation purposes. This model can be applied for a wide range of patient-specific resolution levels. If the upper airway generations have been constructed from a computed tomography scan, it would be possible to quickly obtain a complete reconstruction of the lung specific to an individual patient, which would allow individualized therapies. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Kim, S. C.; Hayter, E. J.; Pruhs, R.; Luong, P.; Lackey, T. C.
2016-12-01
The geophysical scale circulation of the Mid Atlantic Bight and hydrologic inputs from adjacent Chesapeake Bay watersheds and tributaries influence the hydrodynamics and transport of the James River estuary. Both barotropic and baroclinic transport govern the hydrodynamics of this partially stratified estuary. Modeling the placement of dredged sediment requires accommodating this wide spectrum of atmospheric and hydrodynamic scales. The Geophysical Scale Multi-Block (GSMB) Transport Modeling System is a collection of multiple well established and USACE approved process models. Taking advantage of the parallel computing capability of multi-block modeling, we performed a one-year, three-dimensional hydrodynamic simulation to support modeling of the transport and morphology changes of dredged sediment placements. Model forcing includes spatially and temporally varying meteorological conditions and hydrological inputs from the watershed. Surface heat flux estimates were derived from the National Solar Radiation Database (NSRDB). The open water boundary condition for water level was obtained from an ADCIRC model application of the U.S. East Coast. Temperature-salinity boundary conditions were obtained from the Environmental Protection Agency (EPA) Chesapeake Bay Program (CBP) long-term monitoring stations database. Simulated water levels were calibrated and verified by comparison with National Oceanic and Atmospheric Administration (NOAA) tide gage locations. A harmonic analysis of the modeled tides was performed and compared with NOAA tide prediction data. In addition, project specific circulation was verified using US Army Corps of Engineers (USACE) drogue data. Salinity and temperature transport was verified at seven CBP long term monitoring stations along the navigation channel.
Simulation and analysis of model results suggest that GSMB is capable of resolving the long duration, multi-scale processes inherent to practical engineering problems such as dredged material placement stability.
A full-wave Helmholtz model for continuous-wave ultrasound transmission.
Huttunen, Tomi; Malinen, Matti; Kaipio, Jari P; White, Phillip Jason; Hynynen, Kullervo
2005-03-01
A full-wave Helmholtz model of continuous-wave (CW) ultrasound fields may offer several attractive features over widely used partial-wave approximations. For example, many full-wave techniques can be easily adjusted for complex geometries, and multiple reflections of sound are automatically taken into account in the model. To date, however, the full-wave modeling of CW fields in general 3D geometries has been avoided due to the large computational cost associated with the numerical approximation of the Helmholtz equation. Recent developments in computing capacity together with improvements in finite element type modeling techniques are making possible wave simulations in 3D geometries which reach over tens of wavelengths. The aim of this study is to investigate the feasibility of a full-wave solution of the 3D Helmholtz equation for modeling of continuous-wave ultrasound fields in an inhomogeneous medium. The numerical approximation of the Helmholtz equation is computed using the ultraweak variational formulation (UWVF) method. In addition, an inverse problem technique is utilized to reconstruct the velocity distribution on the transducer which is used to model the sound source in the UWVF scheme. The modeling method is verified by comparing simulated and measured fields in the case of transmission of 531 kHz CW fields through layered plastic plates. The comparison shows a reasonable agreement between simulations and measurements at low angles of incidence but, due to mode conversion, the Helmholtz model becomes insufficient for simulating ultrasound fields in plates at large angles of incidence.
Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo
2015-09-18
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values match given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree), for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods.
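The CM query semantics (i) and (ii) above can be sketched as a plain filter. This shows only the predicate being continuously monitored, with hypothetical objects and attributes; the GQR-tree index and the distributed evaluation that constitute the paper's contribution are not represented:

```python
from dataclasses import dataclass

@dataclass
class MovingObject:
    oid: int
    x: float
    y: float
    attrs: dict

def cm_range_query(objects, query_attrs, rect):
    """Return ids of objects whose non-spatial attributes match query_attrs
    and whose position lies inside the rectangular range rect = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = rect
    result = []
    for o in objects:
        if all(o.attrs.get(k) == v for k, v in query_attrs.items()) \
           and x1 <= o.x <= x2 and y1 <= o.y <= y2:
            result.append(o.oid)
    return result

objs = [
    MovingObject(1, 2.0, 3.0, {"type": "taxi", "free": True}),   # matches both conditions
    MovingObject(2, 2.5, 3.5, {"type": "taxi", "free": False}),  # attribute mismatch
    MovingObject(3, 9.0, 9.0, {"type": "taxi", "free": True}),   # outside the range
]
print(cm_range_query(objs, {"type": "taxi", "free": True}, (0, 0, 5, 5)))  # -> [1]
```

A server re-evaluating this predicate on every location update is exactly the workload the GQR-tree is designed to reduce by offloading computation to the moving objects themselves.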
The Five Key Questions of Human Performance Modeling.
Wu, Changxu
2018-01-01
Via building computational (typically mathematical and computer simulation) models, human performance modeling (HPM) quantifies, predicts, and maximizes human performance, human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) Why we build models of human performance; 2) What the expectations of a good human performance model are; 3) What the procedures and requirements in building and verifying a human performance model are; 4) How we integrate a human performance model with system design; and 5) What the possible future directions of human performance modeling research are. Recent and classic HPM findings are addressed in the five questions to provide new thinking in HPM's motivations, expectations, procedures, system integration and future directions.
Test and evaluation of the HIDEC engine uptrim algorithm
NASA Technical Reports Server (NTRS)
Ray, R. J.; Myers, L. P.
1986-01-01
The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. Performance improvements will result from an adaptive engine stall margin mode, a highly integrated mode that uses the airplane flight conditions and the resulting inlet distortion to continuously compute engine stall margin. When there is excessive stall margin, the engine is uptrimmed for more thrust by increasing engine pressure ratio (EPR). The EPR uptrim logic has been evaluated and implemented into computer simulations. Thrust improvements over 10 percent are predicted for subsonic flight conditions. The EPR uptrim was successfully demonstrated during engine ground tests. Test results verify model predictions at the conditions tested.
Removing the Impact of Baluns from Measurements of a Novel Antenna for Cosmological HI Measurements
NASA Astrophysics Data System (ADS)
Trung, Vincent; Ewall-Wice, Aaron Michael; Li, Jianshu; Hewitt, Jacqueline; Riley, Daniel; Bradley, Richard F.; Makhija, Krishna; Garza, Sierra; HERA Collaboration
2018-01-01
The Hydrogen Epoch of Reionization Array (HERA) is a low-frequency radio interferometer aiming to detect redshifted 21 cm emission from neutral hydrogen during the Epoch of Reionization at frequencies between 100 and 200 MHz. Extending HERA’s performance to lower frequencies will enable detection of radio waves at higher redshifts, when models predict that gas between galaxies was heated by X-rays from the first stellar-mass black holes. The isolation of foregrounds that are four orders of magnitude brighter than the faint cosmological signal presents an unprecedented set of design specifications for our antennas, including sensitivity and spectral smoothness over a large bandwidth. We are developing a broadband sinuous antenna feed for HERA, extending the bandwidth from 50 to 220 MHz, and we are verifying antenna performance with field measurements and simulations. Electromagnetic simulations compute the differential S-parameters of the antenna. We measure these S-parameters through a lossy balun attached to an unbalanced vector network analyzer. Removing the impact of this balun is critical in obtaining an accurate comparison between our simulations and measurements. We describe measurements to characterize the baluns and how they are used to remove the balun’s impact on the antenna S-parameter measurements. Field measurements of the broadband sinuous antenna dish at MIT and Green Bank Observatory are used to verify our electromagnetic simulations of the broadband sinuous antenna design. After applying our balun corrections, we find that our field measurements are in good agreement with the simulation, giving us confidence that our feeds will perform as designed.
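The balun-removal step can be sketched for the simple single-ended two-port case: convert S-parameters to cascadable ABCD (chain) matrices, left-multiply by the inverse of the balun's matrix, and convert back. The balun and antenna models below are synthetic illustrative networks, and the differential S-parameter measurements in the abstract involve more than this sketch:

```python
import numpy as np

Z0 = 50.0  # reference impedance, ohms

def s_to_abcd(s):
    """Convert a 2x2 S-matrix to an ABCD (chain) matrix, both ports at Z0."""
    s11, s12, s21, s22 = s[0, 0], s[0, 1], s[1, 0], s[1, 1]
    a = ((1 + s11) * (1 - s22) + s12 * s21) / (2 * s21)
    b = Z0 * ((1 + s11) * (1 + s22) - s12 * s21) / (2 * s21)
    c = ((1 - s11) * (1 - s22) - s12 * s21) / (2 * s21) / Z0
    d = ((1 - s11) * (1 + s22) + s12 * s21) / (2 * s21)
    return np.array([[a, b], [c, d]], dtype=complex)

def abcd_to_s(m):
    """Inverse conversion: ABCD matrix back to S-parameters."""
    a, b, c, d = m[0, 0], m[0, 1], m[1, 0], m[1, 1]
    den = a + b / Z0 + c * Z0 + d
    return np.array([[(a + b / Z0 - c * Z0 - d) / den, 2 * (a * d - b * c) / den],
                     [2 / den, (-a + b / Z0 - c * Z0 + d) / den]])

def deembed(s_measured, s_balun):
    """Remove a balun cascaded in front of the DUT: Measured = Balun * DUT."""
    return abcd_to_s(np.linalg.inv(s_to_abcd(s_balun)) @ s_to_abcd(s_measured))

# Synthetic check: balun modeled as a lossy series impedance, antenna (DUT)
# as a shunt admittance; cascade them, then recover the DUT from the cascade.
balun = abcd_to_s(np.array([[1, 10 + 5j], [0, 1]], dtype=complex))
dut = abcd_to_s(np.array([[1, 0], [0.004 - 0.002j, 1]], dtype=complex))
measured = abcd_to_s(s_to_abcd(balun) @ s_to_abcd(dut))
recovered = deembed(measured, balun)
print(np.allclose(recovered, dut))  # -> True
```

In practice the balun's own S-parameters must first be characterized (as described in the abstract), and measurement noise limits how cleanly the inversion recovers the antenna response.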
Nonequilibrium radiative hypersonic flow simulation
NASA Astrophysics Data System (ADS)
Shang, J. S.; Surzhikov, S. T.
2012-08-01
Nearly all the required scientific disciplines for computational hypersonic flow simulation have been developed on the framework of gas kinetic theory. However when high-temperature physical phenomena occur beneath the molecular and atomic scales, the knowledge of quantum physics and quantum chemical-physics becomes essential. Therefore the most challenging topics in computational simulation probably can be identified as the chemical-physical models for a high-temperature gaseous medium. The thermal radiation is also associated with quantum transitions of molecular and electronic states. The radiative energy exchange is characterized by the mechanisms of emission, absorption, and scattering. In developing a simulation capability for nonequilibrium radiation, an efficient numerical procedure is equally important both for solving the radiative transfer equation and for generating the required optical data via the ab-initio approach. In computational simulation, the initial values and boundary conditions are paramount for physical fidelity. Precise information at the material interface of ablating environment requires more than just a balance of the fluxes across the interface but must also consider the boundary deformation. The foundation of this theoretic development shall be built on the eigenvalue structure of the governing equations which can be described by Reynolds' transport theorem. Recent innovations for possible aerospace vehicle performance enhancement via an electromagnetic effect appear to be very attractive. The effectiveness of this mechanism is dependent strongly on the degree of ionization of the flow medium, the consecutive interactions of fluid dynamics and electrodynamics, as well as an externally applied magnetic field. Some verified research results in this area will be highlighted. 
An assessment of all these most recent advancements in nonequilibrium modeling of chemical kinetics, chemical-physics kinetics, ablation, radiative exchange, computational algorithms, and the aerodynamic-electromagnetic interaction is summarized and delineated. The critical basic research areas for physics-based hypersonic flow simulation should become self-evident through the present discussion. Nevertheless, intensive basic research efforts must be sustained in these areas for fundamental knowledge and future technology advancement.
An interactive control algorithm used for equilateral triangle formation with robotic sensors.
Li, Xiang; Chen, Hongcai
2014-04-22
This paper describes an interactive control algorithm, called the Triangle Formation Algorithm (TFA), used for three neighboring robotic sensors which are distributed randomly to self-organize into an equilateral triangle (E) formation. The algorithm is proposed based on triangular geometry and considers the actual sensors used in robotics. In particular, the stability of the TFA, which can be executed by robotic sensors independently and asynchronously for E formation, is analyzed in detail based on Lyapunov stability theory. Computer simulations are carried out for verifying the effectiveness of the TFA. The analytical results and simulation studies indicate that three neighboring robots employing conventional sensors can self-organize into E formations successfully regardless of their initial distribution using the same TFA.
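A generic distance-based gradient law, not the authors' TFA, illustrates the same behavior: each robot moves toward or away from each neighbor until every pairwise distance equals the desired side length, so three randomly placed robots settle into an equilateral triangle. Gains, step size, and side length are illustrative:

```python
import numpy as np

def formation_step(p, side, gain=0.5, dt=0.1):
    """Move each of three robots along the gradient that drives every
    pairwise distance toward the desired side length."""
    v = np.zeros_like(p)
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            d = p[j] - p[i]
            dist = np.linalg.norm(d)
            v[i] += gain * (dist - side) * d / dist  # approach if too far, retreat if too close
    return p + dt * v

rng = np.random.default_rng(0)
p = rng.uniform(0, 10, size=(3, 2))      # random initial positions in a 10x10 area
for _ in range(400):
    p = formation_step(p, side=2.0)
dists = [np.linalg.norm(p[i] - p[j]) for i, j in [(0, 1), (0, 2), (1, 2)]]
print(dists)  # all pairwise distances ≈ 2.0
```

Like the TFA, this law is fully decentralized (each robot needs only relative positions of its two neighbors); exactly collinear initial placements are the degenerate case excluded by random initialization.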
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K.; Petersson, N. A.; Rodgers, A.
Acoustic waveform modeling is a computationally intensive task and full three-dimensional simulations are often impractical for some geophysical applications such as long-range wave propagation and high-frequency sound simulation. In this study, we develop a two-dimensional high-order accurate finite-difference code for acoustic wave modeling. We solve the linearized Euler equations by discretizing them with the sixth order accurate finite difference stencils away from the boundary and the third order summation-by-parts (SBP) closure near the boundary. Non-planar topographic boundary is resolved by formulating the governing equation in curvilinear coordinates following the interface. We verify the implementation of the algorithm by numerical examples and demonstrate the capability of the proposed method for practical acoustic wave propagation problems in the atmosphere.
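The interior sixth-order stencil mentioned in the abstract can be sketched and its convergence order checked on a smooth test function; the SBP boundary closure and the curvilinear mapping are omitted from this sketch:

```python
import numpy as np

def d1_sixth_order(f, h):
    """Sixth-order central finite difference for f'(x) at interior points:
    (-f[i-3] + 9 f[i-2] - 45 f[i-1] + 45 f[i+1] - 9 f[i+2] + f[i+3]) / (60 h)."""
    return (-f[:-6] + 9 * f[1:-5] - 45 * f[2:-4]
            + 45 * f[4:-2] - 9 * f[5:-1] + f[6:]) / (60 * h)

# Halving the grid spacing should reduce the error by about 2**6 = 64.
errs = []
for n in (32, 64):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    h = x[1] - x[0]
    df = d1_sixth_order(np.sin(x), h)
    errs.append(np.max(np.abs(df - np.cos(x[3:-3]))))
order = np.log2(errs[0] / errs[1])
print(order)  # ≈ 6
```

In the full scheme, the interior accuracy drops to third order at the boundary, where the SBP closure trades accuracy for a provable energy estimate and hence stability.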
Model Predictions and Observed Performance of JWST's Cryogenic Position Metrology System
NASA Technical Reports Server (NTRS)
Lunt, Sharon R.; Rhodes, David; DiAntonio, Andrew; Boland, John; Wells, Conrad; Gigliotti, Trevis; Johanning, Gary
2016-01-01
The James Webb Space Telescope cryogenic testing requires measurement systems that both achieve a very high degree of accuracy and function in that environment. Close-range photogrammetry was identified as meeting those criteria. Testing the capability of a close-range photogrammetric system prior to its existence is a challenging problem. Computer simulation was chosen over building a scaled mock-up to allow for increased flexibility in testing various configurations. Extensive validation work was done to ensure that the actual as-built system met accuracy and repeatability requirements. The simulated image data predicted the uncertainty in measurement to be within specification and this prediction was borne out experimentally. Uncertainty at all levels was verified experimentally to be less than 0.1 millimeters.
Atomization simulations using an Eulerian-VOF-Lagrangian method
NASA Technical Reports Server (NTRS)
Chen, Yen-Sen; Shang, Huan-Min; Liaw, Paul; Chen, C. P.
1994-01-01
This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model which can be employed to analyze general multiphase flow problems with free surface mechanism. The gas-liquid interface mass, momentum and energy conservations are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach into a predictor-corrector algorithm and to demonstrate the effectiveness of the present innovative approach by simulating benchmark problems including the coaxial jet atomization.
NASA Technical Reports Server (NTRS)
Fines, P.; Aghvami, A. H.
1990-01-01
The performance of a low bit rate (64 Kb/s) all digital 16-ary Differentially Encoded Quadrature Amplitude Modulation (16-DEQAM) demodulator operating over a mobile satellite channel is considered. The synchronization and detection techniques employed to overcome the Rician channel impairments are described. The acquisition and steady-state performance of this modem are evaluated by computer simulation over AWGN and Rician channels. The results verify the suitability of 16-DEQAM transmission over slowly faded and/or mildly faded channels.
Manufacturing stresses and strains in filament wound cylinders
NASA Technical Reports Server (NTRS)
Calius, E. P.; Kidron, M.; Lee, S. Y.; Springer, G. S.
1988-01-01
Tests were performed to verify a previously developed model for simulating the manufacturing process of filament wound cylinders. The axial and hoop strains were measured during cure inside a filament wound Fiberite T300/976 graphite-epoxy cylinder. The measured strains were compared to those computed by the model. Good agreements were found between the data and the model, indicating that the model is a useful representation of the process. For the conditions of the test, the manufacturing stresses inside the cylinder were also calculated using the model.
State trajectories used to observe and control dc-to-dc converters
NASA Technical Reports Server (NTRS)
Burns, W. W., III; Wilson, T. G.
1976-01-01
State-plane analysis techniques are employed to study the voltage stepup energy-storage dc-to-dc converter. Within this framework, an example converter operating under the influence of a constant on-time and a constant frequency controller is examined. Qualitative insight gained through this approach is used to develop a conceptual free-running control law for the voltage stepup converter, which can achieve steady-state operation in one on/off cycle of control. Digital computer simulation data are presented to illustrate and verify the theoretical discussion.
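A minimal time-domain sketch of the voltage step-up (boost) converter shows the state trajectory (inductor current, capacitor voltage) settling to the ideal conversion ratio V_in/(1−D). This uses a fixed-frequency constant-duty switch rather than the free-running control law developed in the paper, and all component values are illustrative:

```python
def simulate_boost(v_in=5.0, L=100e-6, C=100e-6, R=10.0,
                   duty=0.5, f_sw=50e3, dt=1e-7, t_end=0.02):
    """Euler simulation of an ideal voltage step-up (boost) converter
    under fixed-frequency constant-duty control."""
    period = 1.0 / f_sw
    i_l, v_c, t = 0.0, 0.0, 0.0
    while t < t_end:
        on = (t % period) < duty * period      # switch closed during the on-time
        if on:   # inductor charges from the source; load runs off the capacitor
            di = v_in / L
            dv = -v_c / (R * C)
        else:    # inductor releases energy into the capacitor and load
            di = (v_in - v_c) / L
            dv = (i_l - v_c / R) / C
        i_l = max(i_l + di * dt, 0.0)          # diode blocks reverse inductor current
        v_c += dv * dt
        t += dt
    return v_c

v_out = simulate_boost()
print(v_out)  # ideal steady state: v_in / (1 - duty) = 10 V
```

Plotting i_l against v_c over such a run produces exactly the kind of state-plane trajectory the paper analyzes; the on/off switching appears as alternating arc segments spiraling into the steady-state orbit.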
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally intensive examples. This framework helps hydrogeologists achieve the optimum physical and statistical resolutions that minimize the error within a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions are investigated.
We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
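The trade-off described above can be illustrated with a toy overall-error model: discretization error decaying as h^p plus Monte Carlo statistical error decaying as 1/√N, minimized subject to a fixed computational budget. The error model and all constants below are illustrative stand-ins for the paper's derived expression:

```python
import numpy as np

def optimal_allocation(budget, c1=1.0, p=2.0, c2=1.0,
                       cost_per_cell=1e-6, dim=2):
    """Pick the grid spacing h (and implied number of Monte Carlo
    realizations N) that minimizes the combined error model
        e(h, N) = c1*h**p + c2/sqrt(N)
    subject to a total cost N * cost_per_cell / h**dim <= budget."""
    hs = np.logspace(-3, 0, 400)                    # candidate grid spacings
    n = budget * hs**dim / cost_per_cell            # realizations affordable at each h
    err = c1 * hs**p + c2 / np.sqrt(np.maximum(n, 1.0))
    k = np.argmin(err)
    return hs[k], int(n[k]), err[k]

h_opt, n_opt, e_opt = optimal_allocation(budget=10.0)
# A conventional "use a fine grid" choice spends the same budget differently:
h_fix = 0.01
n_fix = 10.0 * h_fix**2 / 1e-6
e_fix = 1.0 * h_fix**2 + 1.0 / np.sqrt(n_fix)
print(e_opt <= e_fix)  # -> True
```

Here the optimum deliberately accepts a coarser grid than the conventional choice in exchange for many more realizations, which is the paper's point: physical and statistical resolution must be balanced jointly, not chosen independently.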
Suzuki, Yuma; Shimizu, Tetsuhide; Yang, Ming
2017-01-01
Quantitative evaluation of multi-physics biomolecule transport at the nano/micro scale is needed in order to optimize the design of microfluidic devices for biomolecule detection with high detection sensitivity and rapid diagnosis. This paper aimed to investigate the effectiveness of computational simulation, using a numerical model of multi-physics biomolecule transport near a microchannel surface, for the development of biomolecule-detection devices. The biomolecule transport with fluid drag force, electric double layer (EDL) force, and van der Waals force was modeled by the Newtonian equation of motion. The model was validated by comparing the influence of ion strength and flow velocity on the biomolecule distribution near the surface with experimental results from previous studies. The influence of the acting forces on the distribution near the surface was investigated by the simulation. With all acting forces combined, the trend of the distribution with ion strength and flow velocity was in agreement with the experimental results. Furthermore, the EDL force dominated the distribution near the surface compared with the fluid drag force, except in the case of high velocity and low ion strength. The knowledge gained from the simulation may be useful for the design of biomolecule-detection devices, and the simulation can be expected to serve as a design tool for high detection sensitivity and rapid diagnosis in the future.
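The ion-strength dependence of the EDL term can be sketched with a DLVO-style force balance: screened exponential EDL repulsion against sphere-plate van der Waals attraction. All parameters below (EDL amplitude, Hamaker constant, particle radius, Debye lengths) are illustrative, not taken from the study:

```python
import numpy as np

def net_wall_force(z, kappa, a_edl=1e-10, hamaker=1e-20, radius=100e-9):
    """Net wall-normal force (N) on a sphere at gap z above a surface:
    screened EDL repulsion minus sphere-plate van der Waals attraction."""
    f_edl = a_edl * np.exp(-kappa * z)          # repulsive, decays with Debye screening
    f_vdw = hamaker * radius / (6.0 * z**2)     # attractive (toward the wall)
    return f_edl - f_vdw

z = np.linspace(1e-9, 200e-9, 2000)             # gaps from 1 to 200 nm
low_ionic = net_wall_force(z, kappa=1e8)        # Debye length ~10 nm
high_ionic = net_wall_force(z, kappa=1e9)       # Debye length ~1 nm
print(low_ionic.max() > 0, high_ionic.max() > 0)  # -> True False
```

Raising the ionic strength shortens the Debye screening length, so the repulsive barrier that keeps biomolecules away from the surface shrinks and, for these parameters, disappears entirely, consistent with the simulated sensitivity of the near-surface distribution to ion strength.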
Development of a high resolution voxelised head phantom for medical physics applications.
Giacometti, V; Guatelli, S; Bazalova-Carter, M; Rosenfeld, A B; Schulte, R W
2017-01-01
Computational anthropomorphic phantoms have become an important investigation tool for medical imaging and dosimetry for radiotherapy and radiation protection. The development of computational phantoms with realistic anatomical features contributes significantly to the development of novel methods in medical physics. For many applications, it is desirable that such computational phantoms have a real-world physical counterpart in order to verify the obtained results. In this work, we report the development of a voxelised phantom, the HIGH_RES_HEAD, modelling a paediatric head based on the commercial phantom 715-HN (CIRS). HIGH_RES_HEAD is unique for its anatomical details and high spatial resolution (0.18 × 0.18 mm² pixel size). The development of such a phantom was required to investigate the performance of a new proton computed tomography (pCT) system, in terms of detector technology and image reconstruction algorithms. The HIGH_RES_HEAD was used in an ad-hoc Geant4 simulation modelling the pCT system. The simulation application was previously validated with respect to experimental results. When compared to a standard spatial resolution voxelised phantom of the same paediatric head, it was shown that in pCT reconstruction studies, the use of the HIGH_RES_HEAD translates into a reduction from 2% to 0.7% of the average relative stopping power difference between experimental and simulated results, thus improving the overall quality of the head phantom simulation. The HIGH_RES_HEAD can also be used for other medical physics applications such as treatment planning studies. A second version of the voxelised phantom was created that contains a prototypic base of skull tumour and surrounding organs at risk. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Passos, Ricardo Gomes; von Sperling, Marcos; Ribeiro, Thiago Bressani
2014-01-01
Knowledge of the hydraulic behaviour is very important in the characterization of a stabilization pond, since pond hydrodynamics plays a fundamental role in treatment efficiency. An advanced hydrodynamics characterization may be achieved by carrying out measurements with tracers, dyes and drogues or using mathematical simulation employing computational fluid dynamics (CFD). The current study involved experimental determinations and mathematical simulations of a full-scale facultative pond in Brazil. A 3D CFD model showed major flow lines, degree of dispersion, dead zones and short circuit regions in the pond. Drogue tracking, wind measurements and dye dispersion were also used in order to obtain information about the actual flow in the pond and as a means of assessing the performance of the CFD model. The drogue, designed and built as part of this research, and which included a geographical positioning system (GPS), presented very satisfactory results. The CFD modelling has proven to be very useful in the evaluation of the hydrodynamic conditions of the facultative pond. A virtual tracer test allowed an estimation of the real mean hydraulic retention time and mixing conditions in the pond. The computational model in CFD corresponded well to what was verified in the field.
NASA Astrophysics Data System (ADS)
McClure, J. E.; Prins, J. F.; Miller, C. T.
2014-07-01
Multiphase flow implementations of the lattice Boltzmann method (LBM) are widely applied to the study of porous medium systems. In this work, we construct a new variant of the popular "color" LBM for two-phase flow in which a three-dimensional, 19-velocity (D3Q19) lattice is used to compute the momentum transport solution while a three-dimensional, seven-velocity (D3Q7) lattice is used to compute the mass transport solution. Based on this formulation, we implement a novel heterogeneous GPU-accelerated algorithm in which the mass transport solution is computed by multiple shared memory CPU cores programmed using OpenMP while a concurrent solution of the momentum transport is performed using a GPU. The heterogeneous solution is demonstrated to provide a speedup of 2.6× compared to the multi-core CPU solution and 1.8× compared to the GPU solution, due to concurrent utilization of both CPU and GPU bandwidths. Furthermore, we verify that the proposed formulation provides an accurate physical representation of multiphase flow processes and demonstrate that the approach can be applied to perform heterogeneous simulations of two-phase flow in porous media using a typical GPU-accelerated workstation.
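The split between a large momentum lattice and a smaller mass-transport lattice can be illustrated with the D3Q7 half of the scheme alone. Below is a minimal single-relaxation-time (BGK) D3Q7 update for passive mass transport on a periodic grid; the grid size, relaxation time and initial condition are illustrative assumptions, not the authors' color-model implementation.

```python
import numpy as np

# D3Q7 lattice: the rest velocity plus the six axis directions
c = np.array([[0, 0, 0], [1, 0, 0], [-1, 0, 0],
              [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])
w = np.array([1 / 4] + [1 / 8] * 6)     # standard D3Q7 weights (sum to 1)
tau = 1.0                               # BGK relaxation time (assumed)

N = 16
C0 = np.zeros((N, N, N))
C0[N // 2, N // 2, N // 2] = 1.0        # point source of solute
f = w[:, None, None, None] * C0         # start at local equilibrium

def step(f):
    C = f.sum(axis=0)                    # macroscopic concentration
    feq = w[:, None, None, None] * C     # pure-diffusion equilibrium
    f = f + (feq - f) / tau              # BGK collision
    for i in range(7):                   # streaming along the 7 velocities
        f[i] = np.roll(f[i], shift=tuple(c[i]), axis=(0, 1, 2))
    return f

for _ in range(10):
    f = step(f)
C = f.sum(axis=0)                        # mass conserved; the peak spreads
```

The D3Q19 momentum lattice carries an analogous but 19-direction update; keeping the mass lattice at 7 directions is what makes it cheap enough to offload to shared-memory CPU cores while the GPU handles momentum.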
A Performance Prediction Model for a Fault-Tolerant Computer During Recovery and Restoration
NASA Technical Reports Server (NTRS)
Obando, Rodrigo A.; Stoughton, John W.
1995-01-01
The modeling and design of a fault-tolerant multiprocessor system are addressed. Of interest is the behavior of the system during recovery and restoration after a fault has occurred. The multiprocessor systems are based on the Algorithm to Architecture Mapping Model (ATAMM), and the fault considered is the death of a processor. The developed model is useful in the determination of performance bounds of the system during recovery and restoration. The performance bounds include time to recover from the fault, time to restore the system, and determination of any permanent delay in the input-to-output latency after the system has regained steady state. An implementation of an ATAMM-based computer was developed for a four-processor generic VHSIC spaceborne computer (GVSC) as the target system. A simulation of the GVSC was also written, based on the code used in the ATAMM Multicomputer Operating System (AMOS). The simulation is used to verify the new model for tracking the propagation of the delay through the system and predicting the behavior of the transient state of recovery and restoration. The model is shown to accurately predict the transient behavior of an ATAMM-based multicomputer during recovery and restoration.
NASA Astrophysics Data System (ADS)
Schrooyen, Pierre; Chatelain, Philippe; Hillewaert, Koen; Magin, Thierry E.
2014-11-01
The atmospheric entry of spacecraft presents several challenges in simulating the aerothermal flow around the heat shield. Predicting an accurate heat flux is a complex task, especially regarding the interaction between the flow in the free stream and the erosion of the thermal protection material. To capture this interaction, a continuum approach is developed to go progressively from the region fully occupied by fluid to a receding porous medium. The volume-averaged Navier-Stokes equations are used to model both phases in the same computational domain, considering a single set of conservation laws. The porosity is itself a variable of the computation, allowing volumetric ablation to be taken into account through adequate source terms. This approach is implemented within a computational tool based on a high-order discontinuous Galerkin discretization. The multi-dimensional tool has already been validated, and its parallel implementation has proven efficient. Within this platform, a fully implicit method was developed to simulate multi-phase reacting flows. Numerical results to verify and validate the methodology are considered within this work. Interactions between the flow and the ablated geometry are also presented. Supported by Fund for Research Training in Industry and Agriculture.
Integrated control and health management. Orbit transfer rocket engine technology program
NASA Technical Reports Server (NTRS)
Holzmann, Wilfried A.; Hayden, Warren R.
1988-01-01
To ensure controllability of the baseline design for a 7500-pound-thrust, 10:1-throttleable, dual-expander-cycle, hydrogen-oxygen orbit transfer rocket engine, an Integrated Controls and Health Monitoring concept was developed. This included: (1) dynamic engine simulations using a TUTSIM-derived computer code; (2) analysis of various control methods; (3) a failure modes analysis to identify critical sensors; (4) a survey of applicable sensor technology; and (5) a study of health monitoring philosophies. The engine design was found to be controllable over the full throttling range by using 13 valves, including an oxygen turbine bypass valve to control mixture ratio and a hydrogen turbine bypass valve, used in conjunction with the oxygen bypass, to control thrust. Classic feedback control methods are proposed along with specific requirements for valves, sensors, and the controller. Expanding on the control system, a health monitoring system is proposed, including suggested computing methods and the following recommended sensors: (1) fiber optic and silicon bearing deflectometers; (2) capacitive shaft displacement sensors; and (3) hot-spot thermocouple arrays. Further work is needed to refine and verify the dynamic simulations and control algorithms, to advance sensor capabilities, and to develop the health monitoring computational methods.
Mathematical models for space shuttle ground systems
NASA Technical Reports Server (NTRS)
Tory, E. G.
1985-01-01
Math models are a series of algorithms comprising algebraic equations and Boolean logic. At Kennedy Space Center, math models for the Space Shuttle systems are run on Honeywell 66/80 digital computers, Modcomp II/45 minicomputers and special-purpose hardware simulators (microcomputers). The Shuttle Ground Operations Simulator operating system provides the language formats, subroutines, queueing schemes, execution modes and support software to write, maintain and execute the models. The ground systems presented consist primarily of the liquid oxygen and liquid hydrogen cryogenic propellant systems, as well as the liquid oxygen External Tank gaseous oxygen vent hood/arm and the Vehicle Assembly Building (VAB) high bay cells. The purpose of math modeling is to simulate the ground hardware systems and to provide an environment for testing in a benign mode. This capability allows the engineers to check out application software for loading and launching the vehicle, and to verify the Checkout, Control, & Monitor Subsystem within the Launch Processing System. It is also used to train operators and to predict system response and status in various configurations (normal, emergency and contingency operations), including untried configurations or those too dangerous to try under real conditions, i.e., failure modes.
Global simulation of the Czochralski silicon crystal growth in ANSYS FLUENT
NASA Astrophysics Data System (ADS)
Kirpo, Maksims
2013-05-01
Silicon crystals for high efficiency solar cells are produced mainly by the Czochralski (CZ) crystal growth method. Computer simulations of the CZ process have established themselves as a basic tool for optimization of the growth process, making it possible to reduce production costs while keeping the high quality of the crystalline material. The author shows the application of the general Computational Fluid Dynamics (CFD) code ANSYS FLUENT to the solution of a static two-dimensional (2D) axisymmetric global model of a small industrial furnace for growing silicon crystals with a diameter of 100 mm. The presented numerical model is self-sufficient and incorporates the most important physical phenomena of the CZ growth process, including latent heat generation during crystallization, crystal-melt interface deflection, turbulent heat and mass transport, oxygen transport, etc. The demonstrated approach makes it possible to find the heater power for the specified pulling rate of the crystal, but the obtained power values are smaller than those found in the literature for the studied furnace. However, the described approach is successfully verified with respect to the heater power by its application to numerical simulations of real CZ pullers by "Bosch Solar Energy AG".
A new computational growth model for sea urchin skeletons.
Zachos, Louis G
2009-08-07
A new computational model has been developed to simulate growth of regular sea urchin skeletons. The model incorporates the processes of plate addition and individual plate growth into a composite model of whole-body (somatic) growth. A simple developmental model based on hypothetical morphogens underlies the assumptions used to define the simulated growth processes. The data model is based on a Delaunay triangulation of plate growth center points, using the dual Voronoi polygons to define plate topologies. A spherical frame of reference is used for growth calculations, with affine deformation of the sphere (based on a Young-Laplace membrane model) to result in an urchin-like three-dimensional form. The model verifies that the patterns of coronal plates in general meet the criteria of Voronoi polygonalization, that a morphogen/threshold inhibition model for plate addition results in the alternating plate addition pattern characteristic of sea urchins, and that application of the Bertalanffy growth model to individual plates results in simulated somatic growth that approximates that seen in living urchins. The model suggests avenues of research that could explain some of the distinctions between modern sea urchins and the much more disparate groups of forms that characterized the Paleozoic Era.
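The plate-topology construction described above (a Delaunay triangulation of plate growth centres whose dual Voronoi polygons define the plates) can be sketched in a few lines. The 2D points below are hypothetical stand-ins, not the model's spherical growth centres.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(0)
pts = rng.random((20, 2))                # hypothetical plate growth centres

tri = Delaunay(pts)                      # triangulation of growth centres
vor = Voronoi(pts)                       # dual diagram: one region per plate

# Duality check: points joined by a Delaunay edge are exactly the pairs
# whose Voronoi regions share a ridge (plate boundary)
delaunay_edges = set()
for simplex in tri.simplices:
    for a in range(3):
        for b in range(a + 1, 3):
            delaunay_edges.add(tuple(sorted((int(simplex[a]),
                                             int(simplex[b])))))
ridge_pairs = {tuple(sorted(map(int, p))) for p in vor.ridge_points}
```

In the paper's setting the same duality is applied on a spherical frame of reference before the affine deformation step; the planar case above just makes the triangulation/polygon correspondence concrete.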
Finite element simulation of the mechanical impact of computer work on the carpal tunnel syndrome.
Mouzakis, Dionysios E; Rachiotis, George; Zaoutsos, Stefanos; Eleftheriou, Andreas; Malizos, Konstantinos N
2014-09-22
Carpal tunnel syndrome (CTS) is a clinical disorder resulting from the compression of the median nerve. The available evidence regarding the association between computer use and CTS is controversial. There is some evidence that computer mouse or keyboard work, or both are associated with the development of CTS. Despite the availability of pressure measurements in the carpal tunnel during computer work (exposure to keyboard or mouse) there are no available data to support a direct effect of the increased intracarpal canal pressure on the median nerve. This study presents an attempt to simulate the direct effects of computer work on the whole carpal area section using finite element analysis. A finite element mesh was produced from computerized tomography scans of the carpal area, involving all tissues present in the carpal tunnel. Two loading scenarios were applied on these models based on biomechanical data measured during computer work. It was found that mouse work can produce large deformation fields on the median nerve region. Also, the high stressing effect of the carpal ligament was verified. Keyboard work produced considerable and heterogeneous elongations along the longitudinal axis of the median nerve. Our study provides evidence that increased intracarpal canal pressures caused by awkward wrist postures imposed during computer work were associated directly with deformation of the median nerve. Despite the limitations of the present study the findings could be considered as a contribution to the understanding of the development of CTS due to exposure to computer work. Copyright © 2014 Elsevier Ltd. All rights reserved.
An IBM 370 assembly language program verifier
NASA Technical Reports Server (NTRS)
Maurer, W. D.
1977-01-01
The paper describes a program written in SNOBOL which verifies the correctness of programs written in assembly language for the IBM 360 and 370 series of computers. The motivation for using assembly language as a source language for a program verifier was the realization that many errors in programs are caused by misunderstanding or ignorance of the characteristics of specific computers. The proof of correctness of a program written in assembly language must take these characteristics into account. The program has been compiled and is currently running at the Center for Academic and Administrative Computing of The George Washington University.
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.
2016-01-01
Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the computational parameters inherent to the MC simulation codes GATE, PHITS and FLUKA, previously evaluated for uniform scanning proton beams, need to be re-examined for spot scanning; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes, using data from a FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppression of dose deviation, and minimization of computational time. Our results indicate that the optimized parameters differ from those for uniform scanning, suggesting that a single gold standard for setting computational parameters cannot be applied across proton therapy applications, since the impact of the parameters depends on the proton irradiation technique.
We therefore conclude that customization parameters must be set with reference to the optimized parameters of the corresponding irradiation technique in order to render them useful for achieving artifact-free MC simulation for use in computational experiments and clinical treatments.
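Comparing proton ranges across codes, as described above, requires a concrete range definition; one common choice is R80, the distal depth at which the dose falls to 80% of the peak. The sketch below extracts R80 from a hypothetical Gaussian-shaped Bragg curve sampled at the 0.5 mm spacing quoted in the abstract; the curve itself is made up, not output from FLUKA, GATE or PHITS.

```python
import numpy as np

# Hypothetical Bragg-peak-like depth-dose curve, 0.5 mm sampling
depth = np.arange(0.0, 200.0, 0.5)                  # mm
dose = np.exp(-((depth - 150.0) / 8.0) ** 2)        # illustrative shape

def r80(depth, dose):
    """Distal depth (mm) at which dose falls to 80% of its maximum."""
    d = dose / dose.max()
    i_peak = int(np.argmax(d))
    distal = d[i_peak:]
    j = int(np.argmax(distal < 0.8))     # first sample below 80% beyond peak
    x0, x1 = depth[i_peak + j - 1], depth[i_peak + j]
    y0, y1 = distal[j - 1], distal[j]
    # linear interpolation between the bracketing samples
    return x0 + (y0 - 0.8) * (x1 - x0) / (y0 - y1)
```

For this synthetic curve the analytic R80 is 150 + 8·sqrt(ln 1.25) ≈ 153.78 mm, and the interpolated value lands within a small fraction of the 0.5 mm voxel size.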
Multispectral computational ghost imaging with multiplexed illumination
NASA Astrophysics Data System (ADS)
Huang, Jian; Shi, Dongfeng
2017-07-01
Computational ghost imaging has attracted wide attention from researchers in many fields over the last two decades. Multispectral imaging as one application of computational ghost imaging possesses spatial and spectral resolving abilities, and is very useful for surveying scenes and extracting detailed information. Existing multispectral imagers mostly utilize narrow band filters or dispersive optical devices to separate light of different wavelengths, and then use multiple bucket detectors or an array detector to record them separately. Here, we propose a novel multispectral ghost imaging method that uses one single bucket detector with multiplexed illumination to produce a colored image. The multiplexed illumination patterns are produced by three binary encoded matrices (corresponding to the red, green and blue colored information, respectively) and random patterns. The results of the simulation and experiment have verified that our method can be effective in recovering the colored object. Multispectral images are produced simultaneously by one single-pixel detector, which significantly reduces the amount of data acquisition.
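The single-bucket-detector reconstruction underlying such schemes can be sketched for one spectral channel: correlate the bucket signal with the known illumination patterns. Everything below (the object, the pattern statistics, the pattern count) is an illustrative assumption, not the authors' multiplexed colour encoding.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
obj = np.zeros((n, n))
obj[4:12, 6:10] = 1.0                    # hypothetical binary object

K = 20000                                # number of illumination patterns
patterns = rng.random((K, n, n))         # known random speckle patterns
bucket = (patterns * obj).sum(axis=(1, 2))   # single-pixel (bucket) signal

# Correlation reconstruction: G = <B P> - <B><P>, proportional to the object
G = (bucket[:, None, None] * patterns).mean(axis=0) \
    - bucket.mean() * patterns.mean(axis=0)
G = (G - G.min()) / (G.max() - G.min())  # normalise for display
```

The multiplexed scheme in the abstract extends this idea by encoding the red, green and blue channels into the patterns themselves, so one bucket sequence yields all three correlations simultaneously.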
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacks, H.K.; Novak, T.
2008-03-15
During the past decade, several methane/air explosions in abandoned or sealed areas of underground coal mines have been attributed to lightning. Previously published work by the authors showed, through computer simulations, that currents from lightning could propagate down steel-cased boreholes and ignite explosive methane/air mixtures. The presented work expands on the model and describes a methodology based on IEEE Standard 1410-2004 to estimate the probability of an ignition. The methodology provides a means to better estimate the likelihood that an ignition could occur underground and, more importantly, allows the calculation of what-if scenarios to investigate the effectiveness of engineering controls to reduce the hazard. The computer software used for calculating fields and potentials is also verified by comparing computed results with an independently developed theoretical model of electromagnetic field propagation through a conductive medium.
Computational and Experimental Unsteady Pressures for Alternate SLS Booster Nose Shapes
NASA Technical Reports Server (NTRS)
Braukmann, Gregory J.; Streett, Craig L.; Kleb, William L.; Alter, Stephen J.; Murphy, Kelly J.; Glass, Christopher E.
2015-01-01
Delayed Detached Eddy Simulation (DDES) predictions of the unsteady transonic flow about a Space Launch System (SLS) configuration were made with the Fully UNstructured Three-Dimensional (FUN3D) flow solver. The computational predictions were validated against results from a 2.5% model tested in the NASA Ames 11-Foot Transonic Unitary Plan Facility. The peak C_p,rms value was under-predicted for the baseline Mach 0.9 case, but the general trends of high C_p,rms levels behind the forward attach hardware, reducing as one moves away both streamwise and circumferentially, were captured. The frequency of the peak power in power spectral density estimates was consistently under-predicted. Five alternate booster nose shapes were assessed, and several were shown to reduce the surface pressure fluctuations, both as predicted by the computations and verified by the wind tunnel results.
Verification of the GIS-based Newmark method through 2D dynamic modelling of slope stability
NASA Astrophysics Data System (ADS)
Torgoev, A.; Havenith, H.-B.
2012-04-01
The goal of this work is to verify the simplified GIS-based Newmark displacement approach through 2D dynamic modelling of slope stability. The research is applied to a landslide-prone area in Central Asia, the Mailuu-Suu Valley, situated in the south of Kyrgyzstan. The comparison is carried out on the basis of 30 different profiles located in the target area, presenting different geological, tectonic and morphological settings. One part of the profiles was selected within landslide zones, the other part in stable areas. Many of the landslides are complex slope failures involving falls, rotational sliding and/or planar sliding and flows. These input data were extracted from a 3D structural geological model built with the GOCAD software. Geophysical and geomechanical parameters were defined on the basis of results obtained by multiple surveys performed in the area over the past 15 years. These include geophysical investigation, seismological experiments and ambient noise measurements. Dynamic modelling of slope stability is performed with the UDEC version 4.01 software, which is able to compute deformation of discrete elements. Inside these elements both elasto-plastic and purely elastic materials (similar to rigid blocks) were tested. Various parameter variations were tested to assess their influence on the final outputs. Even though no groundwater flow was included, the numerous simulations are very time-consuming (20 min per model for 10 s of simulated shaking): about 500 computation hours have been completed so far (more than 100 models). Preliminary results allow us to compare Newmark displacements computed using different GIS approaches (Jibson et al., 1998; Miles and Ho, 1999, among others) with the displacements computed using the original Newmark method (Newmark, 1965; here simulated seismograms were used) and displacements produced along joints by the corresponding 2D dynamical models. 
The generation of seismic amplification and its impact on peak-ground-acceleration, Arias Intensity and permanent slope movements (total and slip on joints) is assessed for numerous morphological-lithological settings (curvature, slope angle, surficial geology, various layer dips and orientations) throughout the target area. The final results of our studies should allow us to define the limitations of the simplified GIS-based Newmark displacement modelling; thus, the verified method would make landslide susceptibility and hazard mapping in seismically active regions more reliable.
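For reference, the original Newmark (1965) rigid-block calculation that the GIS approaches simplify amounts to integrating the relative velocity whenever ground acceleration exceeds a critical (yield) acceleration. A minimal sketch, with a synthetic acceleration pulse standing in for a simulated seismogram:

```python
import numpy as np

def newmark_displacement(acc, dt, a_c):
    """Newmark (1965) rigid-block displacement: the block slides while
    ground acceleration exceeds the critical acceleration a_c, then
    decelerates at a_c until the relative velocity returns to zero."""
    v = 0.0                              # relative sliding velocity
    d = 0.0                              # accumulated displacement
    for a in acc:
        if a > a_c or v > 0.0:
            v = max(v + (a - a_c) * dt, 0.0)
            d += v * dt
    return d

# Synthetic half-sine acceleration pulse (m/s^2) -- illustrative values,
# not one of the study's simulated seismograms
dt = 0.01
t = np.arange(0.0, 2.0, dt)
acc = 3.0 * np.sin(2 * np.pi * t) * (t < 0.5)
disp = newmark_displacement(acc, dt, a_c=1.0)
```

Raising the critical acceleration (a stronger slope) shrinks the displacement, and a pulse that never exceeds it produces none; the GIS variants replace this explicit integration with regression formulas driven by Arias intensity or peak ground acceleration.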
NASA Astrophysics Data System (ADS)
Ho, Teck Seng; Charles, Christine; Boswell, Roderick W.
2016-12-01
This paper presents computational fluid dynamics simulations of the cold gas operation of the Pocket Rocket and Mini Pocket Rocket radiofrequency electrothermal microthrusters, replicating experiments performed in both sub-Torr and vacuum environments. This work takes advantage of flow velocity choking to circumvent the invalidity of modelling vacuum regions within a CFD simulation, while still preserving the accuracy of the desired results in the internal regions of the microthrusters. Simulated plenum stagnation pressures are in precise agreement with experimental measurements when slip boundary conditions with the correct tangential momentum accommodation coefficients for each gas are used. Thrust and specific impulse are calculated by integrating the flow profiles at the exit of the microthrusters, and are in good agreement with experimental pendulum thrust balance measurements and theoretical expectations. For low-thrust conditions where experimental instruments are not sufficiently sensitive, these cold gas simulations provide additional data points against which experimental results can be verified and extrapolated. The cold gas simulations presented in this paper will be used as a benchmark for comparison with future plasma simulations of the Pocket Rocket microthruster.
Development of the functional simulator for the Galileo attitude and articulation control system
NASA Technical Reports Server (NTRS)
Namiri, M. K.
1983-01-01
A simulation program for verifying and checking the performance of the Galileo Spacecraft's Attitude and Articulation Control Subsystem (AACS) flight software is discussed. The program, called the Functional Simulator (FUNSIM), provides a simple method of interfacing user-supplied mathematical models, coded in FORTRAN, which describe spacecraft dynamics, sensors, and actuators, with the AACS flight software, coded in HAL/S (High-level Advanced Language/Shuttle). It is thus able to simulate the AACS flight software accurately to the HAL/S statement level in the environment of a mainframe computer system. FUNSIM also has a command and data subsystem (CDS) simulator. The input/output data and timing are simulated with the same precision as on the flight microprocessor. FUNSIM uses a variable-stepsize numerical integration algorithm, complete with individual error-bound control on the state variables, to solve the equations of motion. The program has been designed to provide both line-printer and matrix dot plotting of the variables requested in the run section and to provide error diagnostics.
High effective inverse dynamics modelling for dual-arm robot
NASA Astrophysics Data System (ADS)
Shen, Haoyu; Liu, Yanli; Wu, Hongtao
2018-05-01
To deal with the problem of inverse dynamics modelling for a dual-arm robot, a recursive inverse dynamics modelling method based on the decoupled natural orthogonal complement (DeNOC) is presented. In this model, DeNOC matrices are used to eliminate the constraint forces in the Newton-Euler equations of motion, and screws are used to express the kinematic and dynamic variables. On this basis, a dedicated simulation program was developed with the symbolic software Mathematica and applied to a dual-arm robot. Simulation results show that the proposed method saves an enormous amount of CPU time compared with the recursive Newton-Euler equations, and that the results are correct and reasonable, verifying the reliability and efficiency of the method.
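The outward/inward recursion that DeNOC-type formulations accelerate can be illustrated on a toy planar chain with a point mass at each link tip. This is a generic recursive Newton-Euler sketch under simplifying assumptions (planar, point masses, no link inertia), not the paper's dual-arm model: an outward pass propagates accelerations, an inward pass accumulates joint torques.

```python
import numpy as np

def rne_planar(q, dq, ddq, l, m, g=9.81):
    """Recursive Newton-Euler for a planar chain of point masses at the
    link tips. Gravity is folded in via the base-acceleration trick."""
    n = len(q)
    theta = np.cumsum(q)                  # absolute link angles
    omega = np.cumsum(dq)
    alpha = np.cumsum(ddq)
    joints = np.zeros((n + 1, 2))         # joint positions
    acc = np.zeros((n, 2))                # tip-mass accelerations
    a = np.array([0.0, g])                # base "acceleration" = +g upward
    for i in range(n):                    # outward recursion
        u = np.array([np.cos(theta[i]), np.sin(theta[i])])
        v = np.array([-np.sin(theta[i]), np.cos(theta[i])])
        joints[i + 1] = joints[i] + l[i] * u
        a = a + l[i] * (alpha[i] * v - omega[i] ** 2 * u)
        acc[i] = a
    tau = np.zeros(n)
    for i in range(n):                    # inward recursion
        for k in range(i, n):             # every distal tip mass contributes
            r = joints[k + 1] - joints[i]
            F = m[k] * acc[k]             # inertial (d'Alembert) force
            tau[i] += r[0] * F[1] - r[1] * F[0]   # planar cross product
    return tau

# Static configuration: the torques must reduce to the gravity loading
tau = rne_planar(np.array([0.3, 0.5]), np.zeros(2), np.zeros(2),
                 l=[1.0, 0.8], m=[1.2, 0.7])
```

The static case gives a convenient correctness check: with zero joint rates and accelerations the recursion must return the analytic gravity torques.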
WEC3: Wave Energy Converter Code Comparison Project: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien
This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases: Phase I consists of code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency-domain modelling tools were not included in the WEC3 project.
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing analysis of complicated geometrical structures. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The computational cost is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations are based on polynomial and rational functions, and their errors are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of Monte Carlo simulations obtained with exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of photon times of flight (total number of photons, mean time of flight and variance) are less than 2% over a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
NASA Astrophysics Data System (ADS)
Chen, Tzikang J.; Shiao, Michael
2016-04-01
This paper verifies a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods [1, 2], and a probabilistic algorithm, RPI (recursive probability integration) [3-9], considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subject to various uncertainties, such as the variability in material properties including crack growth rate, initial flaw size, repair quality, random-process modeling of flight loads for failure analysis, and inspection reliability represented by probability of detection (POD). In addition, unlike traditional Monte Carlo simulation (MCS), which requires a rerun whenever the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully appreciate the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion runs were conducted for various flight conditions, material properties, inspection scheduling, POD, and repair/replacement strategies. Since MC simulation is time-consuming, the simulations were run in parallel on DoD High Performance Computing (HPC) systems using a specialized random number generator for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulation.
a Computer Simulation Study of Coherent Optical Fibre Communication Systems
NASA Astrophysics Data System (ADS)
Urey, Zafer
Available from UMI in association with The British Library. A computer simulation study of coherent optical fibre communication systems is presented in this thesis. The Wiener process is proposed as the simulation model of laser phase noise and verified to be a good one. This model is included in the simulation experiments along with the other noise sources (i.e shot noise, thermal noise and laser intensity noise) and the models that represent the various waveform processing blocks in a system such as filtering, demodulation, etc. A novel mixed-semianalytical simulation procedure is designed and successfully applied for the estimation of bit error rates as low as 10^{-10 }. In this technique the noise processes and the ISI effects at the decision time are characterized from simulation experiments but the calculation of the probability of error is obtained by numerically integrating the noise statistics over the error region using analytical expressions. Simulation of only 4096 bits is found to give estimates of BER's corresponding to received optical power within 1 dB of the theoretical calculations using this approach. This number is very small when compared with the pure simulation techniques. Hence, the technique is proved to be very efficient in terms of the computation time and the memory requirements. A command driven simulation software which runs on a DEC VAX computer under the UNIX operating system is written by the author and a series of simulation experiments are carried out using this software. In particular, the effects of IF filtering on the performance of PSK heterodyne receivers with synchronous demodulation are examined when both the phase noise and the shot noise are included in the simulations. The BER curves of this receiver are estimated for the first time for various cases of IF filtering using the mixed-semianalytical approach. 
At a power penalty of 1 dB, the IF linewidth requirement of this receiver with the matched filter is estimated to be less than 650 kHz at a modulation rate of 1 Gbps and a BER of 10^{-9}. The IF linewidth requirements for the other IF filtering cases are also estimated and are not found to differ much from the matched filter case. Therefore, it is concluded that IF filtering does not contribute to the reduction of phase noise in PSK heterodyne systems with synchronous demodulation.
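The mixed semianalytical procedure described above can be illustrated with a minimal sketch: Gaussian decision-time statistics are estimated from a modest number of simulated bits, and the error probability is then obtained analytically by integrating the fitted tails. The signal levels, threshold, and sample count below are illustrative placeholders, not values from the thesis.

```python
import math
import random

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def semianalytical_ber(samples0, samples1, threshold):
    """Estimate BER by fitting Gaussian statistics to simulated
    decision-time samples and integrating the tails analytically."""
    def stats(s):
        m = sum(s) / len(s)
        v = sum((x - m) ** 2 for x in s) / (len(s) - 1)
        return m, math.sqrt(v)
    m0, s0 = stats(samples0)
    m1, s1 = stats(samples1)
    # Average of the two conditional error probabilities
    return 0.5 * (q_func((threshold - m0) / s0) + q_func((m1 - threshold) / s1))

# Toy "simulation output": 4096 noisy decision samples per bit level,
# chosen so that the true BER is about Q(6) ~ 1e-9 -- far below what
# direct error counting over 4096 bits could ever resolve.
random.seed(1)
n = 4096
samples0 = [random.gauss(0.0, 1.0) for _ in range(n)]
samples1 = [random.gauss(12.0, 1.0) for _ in range(n)]
ber = semianalytical_ber(samples0, samples1, threshold=6.0)
```

This is the efficiency argument in miniature: counting errors directly at BER 10^{-9} would need on the order of 10^{10} simulated bits, while fitting the noise statistics needs only thousands.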
NASA Astrophysics Data System (ADS)
Ma, J.; Liu, Q.
2018-02-01
This paper presents an improved short circuit calculation method, based on a pre-computed surface, to determine the short circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under grid fault. However, existing methods have difficulty calculating the short circuit current of a DFIG in engineering practice because of its complexity. The proposed method develops the surface of short circuit current as a function of the calculating impedance and the open circuit voltage, and the short circuit currents are derived by taking into account the rotor excitation and the crowbar activation time. Finally, the pre-computed surfaces of short circuit current at different times were established, and the procedure for DFIG short circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.
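The pre-computed-surface idea amounts to a table lookup with interpolation: the expensive DFIG short circuit response is tabulated offline over (calculating impedance, open circuit voltage), and online queries interpolate the stored surface. The grid values and the u/z surface shape below are hypothetical placeholders, not the paper's model.

```python
import bisect

def bilinear_lookup(z_grid, u_grid, surface, z, u):
    """Interpolate a pre-computed short-circuit-current surface
    I(z, u) at calculating impedance z and open-circuit voltage u."""
    i = min(max(bisect.bisect_right(z_grid, z) - 1, 0), len(z_grid) - 2)
    j = min(max(bisect.bisect_right(u_grid, u) - 1, 0), len(u_grid) - 2)
    tz = (z - z_grid[i]) / (z_grid[i + 1] - z_grid[i])
    tu = (u - u_grid[j]) / (u_grid[j + 1] - u_grid[j])
    return ((1 - tz) * (1 - tu) * surface[i][j]
            + tz * (1 - tu) * surface[i + 1][j]
            + (1 - tz) * tu * surface[i][j + 1]
            + tz * tu * surface[i + 1][j + 1])

# Hypothetical surface: current proportional to u / z on a coarse grid
# (a real surface would come from detailed LVRT simulations offline).
z_grid = [0.1, 0.2, 0.4, 0.8]
u_grid = [0.9, 1.0, 1.1]
surface = [[u / z for u in u_grid] for z in z_grid]
current = bilinear_lookup(z_grid, u_grid, surface, 0.3, 1.05)
```

The offline cost is paid once per crowbar/excitation time point; each online evaluation is then a constant-time interpolation, which is what makes the method practical in engineering use.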
Verifiable Measurement-Only Blind Quantum Computing with Stabilizer Testing.
Hayashi, Masahito; Morimae, Tomoyuki
2015-11-27
We introduce a simple protocol for verifiable measurement-only blind quantum computing. Alice, a client, can perform only single-qubit measurements, whereas Bob, a server, can generate and store entangled many-qubit states. Bob generates copies of a graph state, which is a universal resource state for measurement-based quantum computing, and sends Alice each qubit of them one by one. Alice adaptively measures each qubit according to her program. If Bob is honest, he generates the correct graph state, and, therefore, Alice can obtain the correct computation result. Regarding the security, whatever Bob does, Bob cannot get any information about Alice's computation because of the no-signaling principle. Furthermore, malicious Bob does not necessarily send the copies of the correct graph state, but Alice can check the correctness of Bob's state by directly verifying the stabilizers of some copies.
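The stabilizer test Alice performs can be sketched for a toy case: a graph state |G> is stabilized by K_i = X_i * prod_{j in N(i)} Z_j, so measuring any K_i on an honest copy always yields +1. A brute-force state-vector check for a 3-qubit triangle graph follows; it only illustrates the stabilizer algebra, far below the scale of a useful resource state, and is not the paper's protocol itself.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def graph_state(n, edges):
    """|G> = prod_{(a,b) in E} CZ_ab |+>^n on the full state vector
    (qubit 0 is the most significant bit of the basis index)."""
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n))
    for a, b in edges:
        for idx in range(2 ** n):
            bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
            if bits[a] and bits[b]:
                psi[idx] *= -1
    return psi

def stabilizer(n, edges, i):
    """K_i = X_i * prod_{j in N(i)} Z_j for vertex i."""
    ops = [I2] * n
    ops[i] = X
    for a, b in edges:
        if a == i:
            ops[b] = Z
        elif b == i:
            ops[a] = Z
    return kron_all(ops)

# Triangle graph: every stabilizer leaves |G> unchanged, so a verifier
# measuring K_i on an honest copy always obtains outcome +1.
edges = [(0, 1), (1, 2), (0, 2)]
psi = graph_state(3, edges)
ok = all(np.allclose(stabilizer(3, edges, i) @ psi, psi) for i in range(3))
```

A dishonest server's deviation from |G> necessarily violates some K_i relation, which is what Alice's spot checks on sacrificed copies detect.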
Verifiable Measurement-Only Blind Quantum Computing with Stabilizer Testing
NASA Astrophysics Data System (ADS)
Hayashi, Masahito; Morimae, Tomoyuki
2015-11-01
We introduce a simple protocol for verifiable measurement-only blind quantum computing. Alice, a client, can perform only single-qubit measurements, whereas Bob, a server, can generate and store entangled many-qubit states. Bob generates copies of a graph state, which is a universal resource state for measurement-based quantum computing, and sends Alice each qubit of them one by one. Alice adaptively measures each qubit according to her program. If Bob is honest, he generates the correct graph state, and, therefore, Alice can obtain the correct computation result. Regarding the security, whatever Bob does, Bob cannot get any information about Alice's computation because of the no-signaling principle. Furthermore, malicious Bob does not necessarily send the copies of the correct graph state, but Alice can check the correctness of Bob's state by directly verifying the stabilizers of some copies.
Development of PARMA: PHITS-based analytical radiation model in the atmosphere.
Sato, Tatsuhiko; Yasuda, Hiroshi; Niita, Koji; Endo, Akira; Sihver, Lembit
2008-08-01
Estimation of cosmic-ray spectra in the atmosphere has been essential for the evaluation of aviation doses. We therefore calculated these spectra by performing Monte Carlo simulation of cosmic-ray propagation in the atmosphere using the PHITS code. The accuracy of the simulation was well verified by experimental data taken under various conditions, even near sea level. Based on a comprehensive analysis of the simulation results, we proposed an analytical model for estimating the cosmic-ray spectra of neutrons, protons, helium ions, muons, electrons, positrons and photons applicable to any location in the atmosphere at altitudes below 20 km. Our model, named PARMA, enables us to calculate the cosmic radiation doses rapidly with a precision equivalent to that of the Monte Carlo simulation, which requires much more computational time. With these properties, PARMA is capable of improving the accuracy and efficiency of the cosmic-ray exposure dose estimations not only for aircrews but also for the public on the ground.
NASA Astrophysics Data System (ADS)
Spörlein, Sebastian; Carstens, Heiko; Satzger, Helmut; Renner, Christian; Behrendt, Raymond; Moroder, Luis; Tavan, Paul; Zinth, Wolfgang; Wachtveitl, Josef
2002-06-01
Femtosecond time-resolved spectroscopy on model peptides with built-in light switches combined with computer simulation of light-triggered motions offers an attractive integrated approach toward the understanding of peptide conformational dynamics. It was applied to monitor the light-induced relaxation dynamics occurring on subnanosecond time scales in a peptide that was backbone-cyclized with an azobenzene derivative as optical switch and spectroscopic probe. The femtosecond spectra permit clear identification and characterization of the subpicosecond photoisomerization of the chromophore, the subsequent dissipation of vibrational energy, and the subnanosecond conformational relaxation of the peptide. The photochemical cis/trans-isomerization of the chromophore and the resulting peptide relaxations have been simulated with molecular dynamics calculations. The calculated reaction kinetics, as monitored by the energy content of the peptide, were found to match the spectroscopic data. Thus we verify that all-atom molecular dynamics simulations can quantitatively describe the subnanosecond conformational dynamics of peptides, strengthening confidence in corresponding predictions for longer time scales.
High-fidelity simulations of blast loadings in urban environments using an overset meshing strategy
NASA Astrophysics Data System (ADS)
Wang, X.; Remotigue, M.; Arnoldus, Q.; Janus, M.; Luke, E.; Thompson, D.; Weed, R.; Bessette, G.
2017-05-01
Detailed blast propagation and evolution through multiple structures representing an urban environment were simulated using the code Loci/BLAST, which employs an overset meshing strategy. The use of overset meshes simplifies mesh generation by allowing meshes for individual component geometries to be generated independently. Detailed blast propagation and evolution through multiple structures, wave reflection and interaction between structures, and blast loadings on structures were simulated and analyzed. Predicted results showed good agreement with experimental data generated by the US Army Engineer Research and Development Center. Loci/BLAST results were also found to compare favorably to simulations obtained using the Second-Order Hydrodynamic Automatic Mesh Refinement Code (SHAMRC). The results obtained demonstrated that blast reflections in an urban setting significantly increased the blast loads on adjacent buildings. Correlations of computational results with experimental data yielded valuable insights into the physics of blast propagation, reflection, and interaction in an urban setting and verified the use of Loci/BLAST as a viable tool for urban blast analysis.
NASA Astrophysics Data System (ADS)
Hakim, Ammar; Shi, Eric; Juno, James; Bernard, Tess; Hammett, Greg
2017-10-01
For weakly collisional (or collisionless) plasmas, kinetic effects are required to capture the physics of micro-turbulence. We have implemented solvers for kinetic and gyrokinetic equations in the computational plasma physics framework, Gkeyll. We use a version of the discontinuous Galerkin scheme that conserves energy exactly. Plasma sheaths are modeled with novel boundary conditions. Positivity of distribution functions is maintained via a reconstruction method, allowing robust simulations that continue to conserve energy even with positivity limiters. We have performed a large number of benchmarks, verifying the accuracy and robustness of our code. We demonstrate the application of our algorithm to two classes of problems: (a) Vlasov-Maxwell simulations of turbulence in a magnetized plasma, applicable to space plasmas; (b) gyrokinetic simulations of turbulence in open-field-line geometries, applicable to laboratory plasmas. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
Assembly flow simulation of a radar
NASA Technical Reports Server (NTRS)
Rutherford, W. C.; Biggs, P. M.
1994-01-01
A discrete event simulation model has been developed to predict the assembly flow time of a new radar product. The simulation was the key tool employed to identify flow constraints. The radar, production facility, and equipment complement were designed, arranged, and selected to provide the most manufacturable assembly possible. A goal was to reduce the assembly and testing cycle time from twenty-six weeks to six weeks. A computer software simulation package (SLAM 2) was utilized as the foundation for simulating the assembly flow time. FORTRAN subroutines were incorporated into the software to deal with unique flow circumstances that were not accommodated by the software. Detailed information relating to the assembly operations was provided by a team selected from the engineering, manufacturing management, inspection, and production assembly staff. The simulation verified that it would be possible to achieve the cycle time goal of six weeks. Equipment and manpower constraints were identified during the simulation process and adjusted as required to achieve the flow with a given monthly production requirement. The simulation is being maintained as a planning tool to identify constraints in the event that monthly output is increased. 'What-if' studies have been conducted to identify the cost of reducing constraints caused by increases in the output requirement.
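The flow-time logic of such a discrete event simulation can be sketched in a few lines. The two-station line, server counts, and processing times below are invented for illustration and are unrelated to the actual radar assembly data (the original work used SLAM 2 with FORTRAN subroutines).

```python
import heapq

def simulate_flow(num_jobs, stations):
    """Minimal discrete-event sketch of a serial assembly line.
    `stations` is a list of (servers, process_time) pairs; all jobs are
    released at time zero and pass through the stations in order, FIFO.
    Returns the completion time of the last job (the flow makespan)."""
    # free[s] holds the times at which each server of station s is next free
    free = [[0.0] * servers for servers, _ in stations]
    finish = []
    for _ in range(num_jobs):
        t = 0.0
        for s, (servers, ptime) in enumerate(stations):
            start = max(t, heapq.heappop(free[s]))  # wait for a free server
            t = start + ptime
            heapq.heappush(free[s], t)
        finish.append(t)
    return max(finish)

# Hypothetical line: 2 parallel build fixtures (3 weeks each), 1 test
# station (1 week); the single test station is the flow constraint.
makespan = simulate_flow(num_jobs=6, stations=[(2, 3.0), (1, 1.0)])
```

"What-if" studies then amount to rerunning `simulate_flow` with changed server counts or job loads and comparing makespans.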
Verification of hypergraph states
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Takeuchi, Yuki; Hayashi, Masahito
2017-12-01
Hypergraph states are generalizations of graph states where controlled-Z gates on edges are replaced with generalized controlled-Z gates on hyperedges. Hypergraph states have several advantages over graph states. For example, certain hypergraph states, such as the Union Jack states, are universal resource states for measurement-based quantum computing with only Pauli measurements, while graph state measurement-based quantum computing needs non-Clifford basis measurements. Furthermore, it is impossible to classically efficiently sample measurement results on hypergraph states unless the polynomial hierarchy collapses to the third level. Although several protocols have been proposed to verify graph states with only sequential single-qubit Pauli measurements, there was no verification method for hypergraph states. In this paper, we propose a method for verifying a certain class of hypergraph states with only sequential single-qubit Pauli measurements. Importantly, no i.i.d. property of samples is assumed in our protocol: any artificial entanglement among samples cannot fool the verifier. As applications of our protocol, we consider verified blind quantum computing with hypergraph states, and quantum computational supremacy demonstrations with hypergraph states.
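The generalized controlled-Z defining a hypergraph state, and the stabilizer relation K_i = X_i * prod_{e containing i} CZ_{e \ {i}}, can be checked by brute force for a toy 3-qubit example with a single CCZ hyperedge. This only illustrates the state's algebra; it is not the paper's verification protocol, which uses sequential single-qubit Pauli measurements.

```python
import numpy as np

def hypergraph_state(n, hyperedges):
    """|H> = prod_e CZ_e |+>^n, where the generalized CZ on hyperedge e
    flips the sign of basis states whose qubits in e are all 1."""
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n))
    for e in hyperedges:
        for idx in range(2 ** n):
            if all((idx >> (n - 1 - q)) & 1 for q in e):
                psi[idx] *= -1
    return psi

def apply_stabilizer(psi, n, i, hyperedges):
    """Apply K_i = X_i * prod_{e containing i} CZ_{e \ {i}} to psi.
    The CZ part is diagonal and acts only on qubits other than i,
    so it commutes with X_i."""
    out = np.zeros_like(psi)
    for idx in range(2 ** n):
        j = idx ^ (1 << (n - 1 - i))  # X on qubit i
        sign = 1.0
        for e in hyperedges:
            if i in e and all((j >> (n - 1 - q)) & 1 for q in e if q != i):
                sign *= -1
        out[j] = sign * psi[idx]
    return out

# Single hyperedge {0,1,2}: a CCZ applied to |+++>
edges = [(0, 1, 2)]
psi = hypergraph_state(3, edges)
stabilized = all(
    np.allclose(apply_stabilizer(psi, 3, i, edges), psi) for i in range(3)
)
```

For this state each K_i contains a two-qubit CZ rather than a Pauli, which is exactly why verifying hypergraph states with only single-qubit Pauli measurements is harder than verifying graph states.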
A NARX damper model for virtual tuning of automotive suspension systems with high-frequency loading
NASA Astrophysics Data System (ADS)
Alghafir, M. N.; Dunne, J. F.
2012-02-01
A computationally efficient NARX-type neural network model is developed to characterise highly nonlinear frequency-dependent thermally sensitive hydraulic dampers for use in the virtual tuning of passive suspension systems with high-frequency loading. Three input variables are chosen to account for high-frequency kinematics and temperature variations arising from continuous vehicle operation over non-smooth surfaces such as stone-covered streets, rough or off-road conditions. Two additional input variables are chosen to represent tuneable valve parameters. To assist in the development of the NARX model, a highly accurate but computationally excessive physical damper model [originally proposed by S. Duym and K. Reybrouck, Physical characterization of non-linear shock absorber dynamics, Eur. J. Mech. Eng. M 43(4) (1998), pp. 181-188] is extended to allow for high-frequency input kinematics. Experimental verification of this extended version uses measured damper data obtained from an industrial damper test machine under near-isothermal conditions for fixed valve settings, with input kinematics corresponding to harmonic and random road profiles. The extended model is then used only for simulating data for training and testing the NARX model with specified temperature profiles and different valve parameters, both in isolation and within quarter-car vehicle simulations. A heat generation and dissipation model is also developed and experimentally verified for use within the simulations. Virtual tuning using the quarter-car simulation model then exploits the NARX damper to achieve a compromise between ride and handling under transient thermal conditions with harmonic and random road profiles. For quarter-car simulations, the paper shows that a single tuneable NARX damper makes virtual tuning computationally very attractive.
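The NARX structure y(t) = f(y(t-1), ..., u(t-1), ...) can be sketched with a linear f fitted by least squares. The paper's damper model uses a neural network f with additional temperature and valve-parameter inputs, so the synthetic first-order system below is purely illustrative.

```python
import numpy as np

def narx_features(y, u, ny, nu):
    """Regressor matrix for a NARX model
    y[t] = f(y[t-1..t-ny], u[t-1..t-nu])."""
    start = max(ny, nu)
    rows, targets = [], []
    for t in range(start, len(y)):
        rows.append([y[t - k] for k in range(1, ny + 1)]
                    + [u[t - k] for k in range(1, nu + 1)])
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# Synthetic "damper": a linear lag y[t] = 0.8 y[t-1] + 0.5 u[t-1].
# A real NARX damper would replace the linear fit with a trained
# neural network and add temperature and valve-setting inputs.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1]

X, target = narx_features(y, u, ny=1, nu=1)
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
```

The computational appeal is visible even here: once fitted, one NARX evaluation per time step replaces the stiff physical damper equations inside every quarter-car simulation run.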
Torner, Benjamin; Konnigk, Lucas; Hallier, Sebastian; Kumar, Jitendra; Witte, Matthias; Wurm, Frank-Hendrik
2018-06-01
Numerical flow analysis (computational fluid dynamics) in combination with the prediction of blood damage is an important procedure to investigate the hemocompatibility of a blood pump, since blood trauma due to shear stresses remains a problem in these devices. Today, the numerical damage prediction is conducted using unsteady Reynolds-averaged Navier-Stokes simulations. Investigations with large eddy simulations are rarely performed for blood pumps. Hence, the aim of the study is to examine the viscous shear stresses of a large eddy simulation in a blood pump and compare the results with an unsteady Reynolds-averaged Navier-Stokes simulation. The simulations were carried out at two operating points of a blood pump. The flow was simulated on a 100M element mesh for the large eddy simulation and a 20M element mesh for the unsteady Reynolds-averaged Navier-Stokes simulation. As a first step, the large eddy simulation was verified by analyzing internal dissipative losses within the pump. Then, the pump characteristics and the mean and turbulent viscous shear stresses were compared between the two simulation methods. The verification showed that the large eddy simulation is able to reproduce the significant portion of dissipative losses, which is a global indication that the equivalent viscous shear stresses are adequately resolved. The comparison with the unsteady Reynolds-averaged Navier-Stokes simulation revealed that the hydraulic parameters were in agreement, but differences were found for the shear stresses. The results show the potential of the large eddy simulation as a high-quality comparative case to check the suitability of a chosen Reynolds-averaged Navier-Stokes setup and turbulence model. Furthermore, the results suggest that large eddy simulations are superior to unsteady Reynolds-averaged Navier-Stokes simulations when instantaneous stresses are applied for blood damage prediction.
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF), using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces the computational cost by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than the other functions under various light-level conditions. In addition, the effectiveness of the fast search algorithms is verified.
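A minimal sketch of one of the similarity functions (SDF) combined with the three-step search: instead of evaluating the similarity exhaustively at every shift, TSS probes a 3x3 pattern of candidate shifts with a halving step size. The spot model, grid size, and displacement below are illustrative, not the paper's simulation settings.

```python
import math

def gaussian_spot(size, cx, cy, sigma=2.0):
    """Synthetic Shack-Hartmann spot on a size x size grid."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def sdf(img, ref, dx, dy):
    """Square difference function between img and ref shifted by (dx, dy)."""
    n = len(img)
    total = 0.0
    for y in range(n):
        for x in range(n):
            rx, ry = x - dx, y - dy
            r = ref[ry][rx] if 0 <= rx < n and 0 <= ry < n else 0.0
            total += (img[y][x] - r) ** 2
    return total

def three_step_search(img, ref, max_shift=7):
    """TSS: refine the best shift over a 3x3 pattern with halving steps,
    evaluating far fewer candidates than an exhaustive search."""
    best = (0, 0)
    step = (max_shift + 1) // 2
    while step >= 1:
        cx, cy = best
        candidates = [(cx + i * step, cy + j * step)
                      for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(candidates, key=lambda s: sdf(img, ref, s[0], s[1]))
        step //= 2
    return best

ref = gaussian_spot(16, 8.0, 8.0)
img = gaussian_spot(16, 11.0, 6.0)   # spot displaced by (+3, -2) pixels
shift = three_step_search(img, ref)
```

An exhaustive search over a 15 x 15 shift window would evaluate 225 SDF values; TSS evaluates 9 per stage over 3 stages, which is the kind of reduction the paper quantifies.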
Three-dimensional information hierarchical encryption based on computer-generated holograms
NASA Astrophysics Data System (ADS)
Kong, Dezhao; Shen, Xueju; Cao, Liangcai; Zhang, Hao; Zong, Song; Jin, Guofan
2016-12-01
A novel approach for encrypting three-dimensional (3-D) scene information hierarchically based on computer-generated holograms (CGHs) is proposed. The CGHs of the layer-oriented 3-D scene information are produced by angular-spectrum propagation algorithm at different depths. All the CGHs are then modulated by different chaotic random phase masks generated by the logistic map. Hierarchical encryption encoding is applied when all the CGHs are accumulated one by one, and the reconstructed volume of the 3-D scene information depends on permissions of different users. The chaotic random phase masks could be encoded into several parameters of the chaotic sequences to simplify the transmission and preservation of the keys. Optical experiments verify the proposed method and numerical simulations show the high key sensitivity, high security, and application flexibility of the method.
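The chaotic phase masks can be sketched with the logistic map x_{k+1} = r * x_k * (1 - x_k): an entire mask is regenerated from a few scalar key parameters, and a tiny key perturbation decorrelates the mask, which is the key-sensitivity property reported. The map parameter, seed, and mask length below are illustrative.

```python
import math

def logistic_phase_mask(n, x0, r=3.99):
    """Chaotic random phase mask from the logistic map
    x_{k+1} = r * x_k * (1 - x_k); phases lie in [0, 2*pi)."""
    phases = []
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
        phases.append(2 * math.pi * x)
    return phases

# Key sensitivity: two masks whose keys differ by 1e-12 decorrelate
# after a few dozen iterations, so the full mask never needs to be
# transmitted -- only the compact chaotic key parameters.
m1 = logistic_phase_mask(1000, 0.3)
m2 = logistic_phase_mask(1000, 0.3 + 1e-12)
max_late_diff = max(abs(a - b) for a, b in zip(m1[100:], m2[100:]))
```

This is why the scheme can encode each user's random phase mask into a handful of chaotic-sequence parameters instead of storing the mask itself.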
Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo
2015-01-01
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree) for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods. PMID:26393613
NASA Technical Reports Server (NTRS)
Ray, R. J.; Myers, L. P.
1986-01-01
The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. Performance improvements will result from an adaptive engine stall margin mode, a highly integrated mode that uses the airplane flight conditions and the resulting inlet distortion to continuously compute the engine stall margin. When there is excessive stall margin, the engine is uptrimmed for more thrust by increasing the engine pressure ratio (EPR). The EPR uptrim logic has been evaluated and implemented in computer simulations. Thrust improvements of over 10 percent are predicted for subsonic flight conditions. The EPR uptrim was successfully demonstrated during engine ground tests. Test results verify model predictions at the conditions tested.
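The uptrim logic, as described, can be caricatured in a few lines: compare the computed stall margin against the requirement and raise EPR in proportion to the excess, up to a limit. All gains, limits, and margin values below are invented for illustration; they are not HIDEC values.

```python
def epr_uptrim(stall_margin, required_margin, epr_nominal,
               gain=0.02, max_uptrim_ratio=1.08):
    """Hedged sketch of an adaptive stall-margin uptrim: when the
    computed stall margin (percent) exceeds the requirement, raise EPR
    proportionally to the excess, capped at a maximum uptrim ratio."""
    excess = stall_margin - required_margin
    if excess <= 0:
        return epr_nominal  # no excess margin: leave the engine alone
    ratio = min(1.0 + gain * excess, max_uptrim_ratio)
    return epr_nominal * ratio

# 25% computed margin against a hypothetical 15% requirement:
# 10 points of excess, so the uptrim saturates at the cap.
epr = epr_uptrim(stall_margin=25.0, required_margin=15.0, epr_nominal=3.0)
```

The integration point is the stall-margin input itself, which in HIDEC is computed continuously from airplane flight conditions and inlet distortion rather than assumed worst-case.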
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
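The NSGA-II building block referenced above, fast non-dominated sorting, can be sketched directly: candidate allocations are ranked into Pareto fronts under minimization of the two objectives. The objective values below are invented, not outputs of the ED simulation.

```python
def non_dominated_sort(points):
    """Pareto ranking step used by NSGA-II (minimization): front 0 is
    the non-dominated set, front 1 is non-dominated once front 0 is
    removed, and so on. Returns lists of point indices per front."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b

    n = len(points)
    dominated_by = [set() for _ in range(n)]
    count = [0] * n  # number of points dominating point i
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].add(j)
            elif dominates(points[j], points[i]):
                count[i] += 1
    fronts, current = [], [i for i in range(n) if count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

# Objectives: (mean length of stay, resource waste cost), both minimized
allocations = [(4.0, 9.0), (5.0, 7.0), (7.0, 5.0), (6.0, 8.0), (8.0, 8.5)]
fronts = non_dominated_sort(allocations)
```

In the integrated method, each objective vector is itself a noisy simulation estimate, and MOCBA decides how much simulation budget each candidate receives before this ranking is trusted.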
Verifying different-modality properties for concepts produces switching costs.
Pecher, Diane; Zeelenberg, René; Barsalou, Lawrence W
2003-03-01
According to perceptual symbol systems, sensorimotor simulations underlie the representation of concepts. It follows that sensorimotor phenomena should arise in conceptual processing. Previous studies have shown that switching from one modality to another during perceptual processing incurs a processing cost. If perceptual simulation underlies conceptual processing, then verifying the properties of concepts should exhibit a switching cost as well. For example, verifying a property in the auditory modality (e.g., BLENDER-loud) should be slower after verifying a property in a different modality (e.g., CRANBERRIES-tart) than after verifying a property in the same modality (e.g., LEAVES-rustling). Only words were presented to subjects, and there were no instructions to use imagery. Nevertheless, switching modalities incurred a cost, analogous to the cost of switching modalities in perception. A second experiment showed that this effect was not due to associative priming between properties in the same modality. These results support the hypothesis that perceptual simulation underlies conceptual processing.
Planck 2015 results. XII. Full focal plane simulations
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Castex, G.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Karakci, A.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. 
G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Melin, J.-B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Roman, M.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Welikala, N.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
We present the 8th full focal plane simulation set (FFP8), deployed in support of the Planck 2015 results. FFP8 consists of 10 fiducial mission realizations reduced to 18,144 maps, together with the most massive suite of Monte Carlo realizations of instrument noise and CMB ever generated, comprising 10^4 mission realizations reduced to about 10^6 maps. The resulting maps incorporate the dominant instrumental, scanning, and data analysis effects, and the remaining subdominant effects will be included in future updates. Generated at a cost of some 25 million CPU-hours spread across multiple high-performance-computing (HPC) platforms, FFP8 is used to validate and verify analysis algorithms and their implementations, and to remove biases from and quantify uncertainties in the results of analyses of the real data.
Investigation of a New Flux-Modulated Permanent Magnet Brushless Motor for EVs
Gu, Lingling; Luo, Yong; Han, Xuedong
2014-01-01
This paper presents a flux-modulated direct drive (FMDD) motor. The key is to integrate the magnetic gear with the PM motor while removing the gear inner-rotor. Hence, the proposed FMDD motor can achieve the low-speed high-torque output and high-speed compact design requirements as well as high-torque density with a simple structure. The output power equation is analytically derived. By using finite element analysis (FEA), the static characteristics of the proposed motor are obtained. Based on these characteristics, the system mathematical model can be established. Hence, the evaluation of system performances is conducted by computer simulation using Matlab/Simulink. A prototype is designed and built for experimentation. Experimental results are given to verify the theoretical analysis and simulation. PMID:24883405
NASA Astrophysics Data System (ADS)
Shimamura, Kohei
2016-09-01
To reduce the computational cost of the particle method for the numerical simulation of laser plasma, we examined a simplification of the laser absorption process. Because the laser frequency is sufficiently larger than the collision frequency between electrons and heavy particles, we assumed that the electrons obtain a constant energy from the laser irradiation. First, the simplification of the laser absorption process was verified by comparing the EEDF and the laser absorptivity with those of the PIC-FDTD method. Second, the laser plasma induced by a TEA CO2 laser in an argon atmosphere was modeled using the 1D3V DSMC method with the simplified laser absorption. As a result, the LSDW was observed, with the typical electron and neutral density distributions.
Impact of finite receiver-aperture size in a non-line-of-sight single-scatter propagation model.
Elshimy, Mohamed A; Hranilovic, Steve
2011-12-01
In this paper, a single-scatter propagation model is developed that expands the classical model by considering a finite receiver-aperture size for non-line-of-sight communication. The expanded model overcomes some of the difficulties with the classical model, most notably, inaccuracies in scenarios with short range and low elevation angle where significant scattering takes place near the receiver. The developed model does not approximate the receiver aperture as a point, but uses its dimensions for both field-of-view and solid-angle computations. To verify the model, a Monte Carlo simulation of photon transport in a turbid medium is applied. Simulation results for temporal responses and path losses are presented at a wavelength of 260 nm that lies in the solar-blind ultraviolet region.
Moving Force Identification: a Time Domain Method
NASA Astrophysics Data System (ADS)
Law, S. S.; Chan, T. H. T.; Zeng, Q. H.
1997-03-01
The solution for the vertical dynamic interaction forces between a moving vehicle and the bridge deck is analytically derived and experimentally verified. The deck is modelled as a simply supported beam with viscous damping, and the vehicle/bridge interaction force is modelled as one-point or two-point loads with fixed axle spacing, moving at constant speed. The method is based on modal superposition and is developed to identify the forces in the time domain. Both cases of one-point and two-point forces moving on a simply supported beam are simulated. Results of laboratory tests on the identification of the vehicle/bridge interaction forces are presented. Computer simulations and laboratory tests show that the method is effective, and acceptable results can be obtained by combining the use of bending moment and acceleration measurements.
GIDL analysis of the process variation effect in gate-all-around nanowire FET
NASA Astrophysics Data System (ADS)
Kim, Shinkeun; Seo, Youngsoo; Lee, Jangkyu; Kang, Myounggon; Shin, Hyungcheol
2018-02-01
In this paper, the gate-induced drain leakage (GIDL) is analyzed in a gate-all-around (GAA) nanowire FET (NW FET) with an ellipse-shaped channel induced by the process variation effect (PVE). The fabrication process of the nanowire can change the shape of the channel cross section from a circle to an ellipse. The effect of the distorted channel shape is investigated and verified by technology computer-aided design (TCAD) simulation in terms of the GIDL current. The simulation results demonstrate that the GIDL current comprises two mechanisms: longitudinal band-to-band tunneling (L-BTBT) at the body/drain junction and transverse band-to-band tunneling (T-BTBT) at the gate/drain junction. These two mechanisms are investigated with respect to the channel radius (r_nw) and the aspect ratio of the elliptical cross section, both separately and together.
Development of a high-power neutron-producing lithium target for boron neutron capture therapy
NASA Astrophysics Data System (ADS)
Brown, Adam V.; Scott, Malcolm C.
2000-12-01
A neutron-producing lithium target for a novel, accelerator-based cancer treatment requires the removal of up to 6 kW of heat produced by a 1-2 mA beam of 2.3-3.0 MeV protons. This paper presents the results from computer simulations which show that, using submerged jet cooling, a solid lithium target can be maintained up to 1.6 mA, and a liquid target up to 2.6 mA, assuming a 3.0 MeV proton beam. The predictions from the simulations are verified through the use of an experimental heat transfer test rig, and the results from a number of metallurgical studies, made to select a compatible substrate material for the lithium, are reported.
Investigation of a new flux-modulated permanent magnet brushless motor for EVs.
Fan, Ying; Gu, Lingling; Luo, Yong; Han, Xuedong; Cheng, Ming
2014-01-01
This paper presents a flux-modulated direct drive (FMDD) motor. The key is to integrate the magnetic gear with the PM motor while removing the gear inner rotor. Hence, the proposed FMDD motor can achieve low-speed high-torque output, meet high-speed compact design requirements, and offer high torque density with a simple structure. The output power equation is analytically derived. By using finite element analysis (FEA), the static characteristics of the proposed motor are obtained. Based on these characteristics, the system mathematical model can be established. Hence, the evaluation of system performance is conducted by computer simulation using Matlab/Simulink. A prototype is designed and built for experimentation. Experimental results are given to verify the theoretical analysis and simulation.
Simulating Progressive Damage of Notched Composite Laminates with Various Lamination Schemes
NASA Astrophysics Data System (ADS)
Mandal, B.; Chakrabarti, A.
2017-05-01
A three dimensional finite element based progressive damage model has been developed for the failure analysis of notched composite laminates. The material constitutive relations and the progressive damage algorithms are implemented into finite element code ABAQUS using user-defined subroutine UMAT. The existing failure criteria for the composite laminates are modified by including the failure criteria for fiber/matrix shear damage and delamination effects. The proposed numerical model is quite efficient and simple compared to other progressive damage models available in the literature. The efficiency of the present constitutive model and the computational scheme is verified by comparing the simulated results with the results available in the literature. A parametric study has been carried out to investigate the effect of change in lamination scheme on the failure behaviour of notched composite laminates.
Analysis of speckle and material properties in Laider Tracer
NASA Astrophysics Data System (ADS)
Ross, Jacob W.; Rigling, Brian D.; Watson, Edward A.
2017-04-01
The SAL simulation tool Laider Tracer models speckle: the random variation in intensity of an incident light beam across a rough surface. Within Laider Tracer, the speckle field is modeled as a 2-D array of jointly Gaussian random variables projected via ray tracing onto the scene of interest. Originally, all materials in Laider Tracer were treated as ideal diffuse scatterers, for which the far-field return is computed using the Lambertian Bidirectional Reflectance Distribution Function (BRDF). Here, we incorporate material properties into Laider Tracer via the Non-conventional Exploitation Factors Data System, a database of properties for thousands of different materials sampled at various wavelengths and incident angles. We verify the intensity behavior as a function of incident angle after material properties are added to the simulation.
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Finding an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a recently proposed metaheuristic, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. We have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems. PMID:25143977
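TMSCRO itself is not reproduced here, but the underlying DAG-scheduling problem can be illustrated with a conventional HEFT-style list-scheduling baseline: rank tasks by upward rank (longest cost path to an exit task), then greedily assign each, in rank order, to the processor giving the earliest finish. The tiny DAG and costs below are made up:

```python
# HEFT-style list scheduling on a made-up DAG: a conventional baseline,
# not the TMSCRO metaheuristic. Communication costs are ignored and the
# processors are assumed homogeneous for brevity.

def upward_ranks(succ, cost):
    ranks = {}
    def rank(v):
        if v not in ranks:
            ranks[v] = cost[v] + max((rank(s) for s in succ[v]), default=0)
        return ranks[v]
    for v in succ:
        rank(v)
    return ranks

def list_schedule(succ, cost, n_procs):
    # build predecessor lists from the successor map
    pred = {v: [] for v in succ}
    for u in succ:
        for s in succ[u]:
            pred[s].append(u)
    ranks = upward_ranks(succ, cost)
    order = sorted(succ, key=lambda v: -ranks[v])   # a valid topological order
    free = [0.0] * n_procs    # time at which each processor becomes idle
    finish = {}               # finish time of each task
    for v in order:
        ready = max((finish[p] for p in pred[v]), default=0.0)
        i = min(range(n_procs), key=lambda j: max(free[j], ready))
        start = max(free[i], ready)
        finish[v] = start + cost[v]
        free[i] = finish[v]
    return max(finish.values())   # makespan
```

On the diamond DAG A -> {B, C} -> D with costs {A: 2, B: 3, C: 1, D: 2}, two processors let B and C overlap, giving a makespan of 7 versus 8 on a single node.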
Xu, Qun; Wang, Xianchao; Xu, Chao
2017-06-01
Multiplication on traditional electronic computers suffers from limited calculating accuracy and long computation delays. To overcome these problems, a modified signed-digit (MSD) multiplication routine is established based on the MSD number system and a carry-free adder, and its parallel algorithm and optimization techniques are studied in detail. Exploiting the characteristics of a ternary optical computer, a structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates the data bits of the ternary optical processor according to the digits of the multiplication inputs, so the accuracy of the calculation results can always meet users' requirements. Finally, the routine is verified by simulation experiments, and the results fully comply with expectations. Compared with an electronic computer, the MSD multiplication routine is not only better at handling large-value data and high-precision arithmetic, but also maintains lower power consumption and shorter computation delays.
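The optical carry-free adder cannot be reproduced in software here, but the MSD representation itself, digits in {-1, 0, 1} over base 2 so that negation is digit-wise and partial products need no borrow, can be sketched. The `to_msd` helper below uses the non-adjacent form, one canonical MSD encoding; the multiply simply sums shifted, signed partial products:

```python
# Illustrative sketch of the modified signed-digit (MSD) idea: digits in
# {-1, 0, 1}, base 2.  This shows the representation only; it is NOT the
# ternary-optical carry-free adder of the paper.

def msd_value(digits):
    # digits are stored least-significant first
    return sum(d * (1 << i) for i, d in enumerate(digits))

def to_msd(n):
    # Non-adjacent form (NAF): a canonical MSD encoding of an integer
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # +1 or -1, chosen so the next bit is 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def msd_multiply(a_digits, b_digits):
    # each nonzero digit of b contributes a shifted, possibly negated copy of a
    acc = 0
    for i, d in enumerate(b_digits):
        if d:
            acc += d * (msd_value(a_digits) << i)
    return to_msd(acc)
```

Because every digit carries its own sign, negative operands need no separate sign logic, which is the property the optical carry-free adder exploits.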
Reprogrammable logic in memristive crossbar for in-memory computing
NASA Astrophysics Data System (ADS)
Cheng, Long; Zhang, Mei-Yun; Li, Yi; Zhou, Ya-Xiong; Wang, Zhuo-Rui; Hu, Si-Yu; Long, Shi-Bing; Liu, Ming; Miao, Xiang-Shui
2017-12-01
Memristive stateful logic has emerged as a promising next-generation in-memory computing paradigm to address the escalating computing-performance pressures of the traditional von Neumann architecture. Here, we present a nonvolatile reprogrammable logic method that can process data between different rows and columns in a memristive crossbar array based on material implication (IMP) logic. Arbitrary Boolean logic can be executed with a reprogrammable cell containing four memristors in a crossbar array. In the fabricated Ti/HfO2/W memristive array, some fundamental functions, such as universal NAND logic and data transfer, were experimentally implemented. Moreover, using eight memristors in a 2 × 4 array, a one-bit full adder was theoretically designed and verified by simulation to demonstrate the feasibility of our method for complex computing tasks. In addition, some critical logic-related performance issues are further discussed, such as the flexibility of data processing, the cascading problem and the bit error rate. Such a method could be a step forward in developing IMP-based memristive nonvolatile logic for large-scale in-memory computing architectures.
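The logical content of IMP stateful logic can be sketched in software. Assuming the standard two-memristor IMP primitive q <- (NOT p) OR q, NAND needs one working memristor cleared to FALSE plus two IMP steps, and a one-bit full adder then follows from NAND alone. The 9-gate adder below is a generic construction, not necessarily the paper's 2 × 4 crossbar mapping:

```python
# Material implication on two memristors: q <- (NOT p) OR q.
def IMP(p, q):
    return (not p) or q

# NAND from IMP: clear a working memristor s to FALSE, then two IMP steps.
def nand(p, q):
    s = False
    s = IMP(p, s)   # s = NOT p
    s = IMP(q, s)   # s = (NOT q) OR (NOT p) = NAND(p, q)
    return s

# One-bit full adder built from NAND alone (generic 9-gate construction;
# the paper's own crossbar mapping may differ).
def full_adder(a, b, cin):
    n1 = nand(a, b)
    axb = nand(nand(a, n1), nand(b, n1))    # a XOR b
    n4 = nand(axb, cin)
    s = nand(nand(axb, n4), nand(cin, n4))  # sum = a XOR b XOR cin
    cout = nand(n4, n1)                     # carry = ab OR cin(a XOR b)
    return s, cout
```

Since NAND is functionally complete, any Boolean function is reachable once the IMP sequence for NAND is available, which is why a small crossbar cell suffices for arbitrary logic.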
Seismic wavefield propagation in 2D anisotropic media: Ray theory versus wave-equation simulation
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; Hu, Guang-yi; Zhang, Yan-teng; Li, Zhong-sheng
2014-05-01
Although ray theory is based on the high-frequency approximation of the elastic wave equation, ray-theory and wave-equation simulation methods should corroborate each other and hence be developed jointly; in practice, however, they have progressed largely in parallel and independently. For this reason, in this paper we take an alternative route and mutually verify the computational accuracy and solution correctness of both a ray-theory method (the multistage irregular shortest-path method) and wave-equation simulation methods (a staggered-grid finite-difference method and a pseudo-spectral method) in anisotropic VTI and TTI media. Through the analysis and comparison of wavefield snapshots, common-source gather profiles and synthetic seismograms, we are able not only to verify the accuracy and correctness of each method, at least for kinematic features, but also to understand thoroughly the kinematic and dynamic features of wave propagation in anisotropic media. The results show that both the staggered-grid finite-difference method and the pseudo-spectral method yield the same results even for complex anisotropic media (such as a fault model), and that the multistage irregular shortest-path method predicts kinematic features similar to those of the wave-equation simulation methods, so the approaches can be used to test each other for methodological accuracy and solution correctness. In addition, with the aid of the ray-tracing results, it is easy to identify the multiple phases in the wavefield snapshots, common-source gathers and synthetic seismograms predicted by the wave-equation simulation methods, which is a key issue for later seismic applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonen, F.A.; Johnson, K.I.; Liebetrau, A.M.
The VISA-II (Vessel Integrity Simulation Analysis) code was originally developed as part of the NRC staff evaluation of pressurized thermal shock. VISA-II uses Monte Carlo simulation to evaluate the failure probability of a pressurized water reactor (PWR) pressure vessel subjected to a pressure and thermal transient specified by the user. Linear elastic fracture mechanics methods are used to model crack initiation and propagation. Parameters for initial crack size and location, copper content, initial reference temperature of the nil-ductility transition, fluence, crack-initiation fracture toughness, and arrest fracture toughness are treated as random variables. This report documents an upgraded version of the original VISA code as described in NUREG/CR-3384. Improvements include a treatment of cladding effects, a more general simulation of flaw size, shape and location, a simulation of inservice inspection, an updated simulation of the reference temperature of the nil-ductility transition, and treatment of vessels with multiple welds and initial flaws. The code has been extensively tested and verified and is written in FORTRAN for ease of installation on different computers. 38 refs., 25 figs.
MD Simulations of tRNA and Aminoacyl-tRNA Synthetases: Dynamics, Folding, Binding, and Allostery
Li, Rongzhong; Macnamara, Lindsay M.; Leuchter, Jessica D.; Alexander, Rebecca W.; Cho, Samuel S.
2015-01-01
While tRNA and aminoacyl-tRNA synthetases are classes of biomolecules that have been extensively studied for decades, the finer details of how they carry out their fundamental biological functions in protein synthesis remain a challenge. Recent molecular dynamics (MD) simulations are verifying experimental observations and providing new insight that cannot be addressed from experiments alone. Throughout the review, we briefly discuss important historical events to provide a context for how far the field has progressed over the past few decades. We then review the background of tRNA molecules, aminoacyl-tRNA synthetases, and current state of the art MD simulation techniques for those who may be unfamiliar with any of those fields. Recent MD simulations of tRNA dynamics and folding and of aminoacyl-tRNA synthetase dynamics and mechanistic characterizations are discussed. We highlight the recent successes and discuss how important questions can be addressed using current MD simulations techniques. We also outline several natural next steps for computational studies of AARS:tRNA complexes. PMID:26184179
McKenzie, J.M.; Voss, C.I.; Siegel, D.I.
2007-01-01
In northern peatlands, subsurface ice formation is an important process that can control heat transport, groundwater flow, and biological activity. Temperature was measured over one and a half years in a vertical profile in the Red Lake Bog, Minnesota. To successfully simulate the transport of heat within the peat profile, the U.S. Geological Survey's SUTRA computer code was modified. The modified code simulates fully saturated, coupled porewater-energy transport, with freezing and melting porewater, and includes proportional heat capacity and thermal conductivity of water and ice, decreasing matrix permeability due to ice formation, and latent heat. The model is verified by correctly simulating the Lunardini analytical solution for ice formation in a porous medium with a mixed ice-water zone. The modified SUTRA model correctly simulates the temperature and ice distributions in the peat bog. Two possible benchmark problems for groundwater and energy transport with ice formation and melting are proposed that may be used by other researchers for code comparison.
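The coupled water-ice energy transport idea can be illustrated with a 1-D apparent-heat-capacity sketch: latent heat is folded into an enlarged heat capacity over a narrow phase-change interval, and thermal conductivity switches between frozen and unfrozen values. This is a generic textbook treatment, not the modified SUTRA code, and all parameter values are illustrative:

```python
import numpy as np

# 1-D freeze/thaw heat conduction via the apparent-heat-capacity method
# (illustrative only; parameter values are made up, not SUTRA's).
L_f = 3.34e8              # latent heat of fusion per m^3 of water, J/m^3
C_u, C_f = 4.2e6, 2.1e6   # volumetric heat capacity, unfrozen/frozen, J/m^3/K
k_u, k_f = 0.6, 2.2       # thermal conductivity, W/m/K
dT_pc = 0.5               # phase-change smearing interval, K

def apparent_C(T):
    # latent heat is absorbed over [-dT_pc, 0] as extra heat capacity
    C = np.where(T < -dT_pc, C_f, C_u)
    return C + np.where((T >= -dT_pc) & (T <= 0.0), L_f / dT_pc, 0.0)

def step(T, dx, dt):
    k = np.where(T < 0.0, k_f, k_u)
    q = -0.5 * (k[1:] + k[:-1]) * np.diff(T) / dx   # interface heat fluxes
    T = T.copy()
    T[1:-1] -= dt * np.diff(q) / (dx * apparent_C(T[1:-1]))
    return T

# demo: 0.5 m column at +2 C, surface suddenly held at -5 C
T = np.full(51, 2.0)
T[0] = -5.0
dx, dt = 0.01, 20.0       # explicit scheme; dt chosen below stability limit
for _ in range(1000):
    T = step(T, dx, dt)
    T[0], T[-1] = -5.0, 2.0
```

The latent-heat spike in the apparent capacity slows the frost front, so after the simulated period the node nearest the cold boundary has frozen while the far end of the column is still unfrozen.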
A simulation model of IT risk on program trading
NASA Astrophysics Data System (ADS)
Xia, Bingying; Jiang, Wenbao; Luo, Guangxuan
2015-12-01
The biggest difficulty in measuring the IT risk of program trading lies in the scarcity of loss data. In view of this, the current approach of scholars is to collect records of IT incidents at home and abroad from courts, networks and other public media, and to base quantitative analysis of IT risk losses on the resulting database. However, an IT-risk loss database established in this way can only fuzzily reflect the real situation and cannot explain it fundamentally. In this paper, building on the concept and steps of Monte Carlo (MC) simulation, we use a computer simulation method: the MC approach is applied within the "program trading simulation system" developed by our team to simulate real program trading, and IT-risk loss data are obtained through IT-failure experiments; at the end of the article, the effectiveness of the experimental data is verified. In this way we overcome the deficiency of the traditional research method, solve the problem of the lack of IT-risk data in quantitative research, and provide researchers with a set of simulation-based ideas and a process template for empirical study.
Formalization, equivalence and generalization of basic resonance electrical circuits
NASA Astrophysics Data System (ADS)
Penev, Dimitar; Arnaudov, Dimitar; Hinov, Nikolay
2017-12-01
This work presents the basic resonant circuits used in resonant energy converters. The following resonant circuits are considered: series, series with a parallel-loaded capacitor, parallel, and parallel with a series-loaded inductance. For the circuits under consideration, expressions are derived for the natural oscillation frequencies and for the equivalence of the active power delivered to the load. The mathematical expressions are plotted and verified using computer simulations. The results obtained are used in the model-based design of resonant energy converters with DC or AC output, which guarantees the output characteristics of the power electronic devices.
Bottlenecks of the wavefront sensor based on the Talbot effect.
Podanchuk, Dmytro; Kovalenko, Andrey; Kurashov, Vitalij; Kotov, Myhaylo; Goloborodko, Andrey; Danko, Volodymyr
2014-04-01
Physical constraints and peculiarities of the wavefront sensing technique, based on the Talbot effect, are discussed. The limitation on the curvature of the measurable wavefront is derived. The requirements to the Fourier spectrum of the periodic mask are formulated. Two kinds of masks are studied for their performance in the wavefront sensor. It is shown that the boundary part of the mask aperture does not contribute to the initial data for wavefront restoration. It is verified by experiment and computer simulation that the performance of the Talbot sensor, which meets established conditions, is similar to that of the Shack-Hartmann sensor.
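For reference, the self-imaging distance underlying the Talbot sensor is z_T = 2d^2/λ for a mask of period d and wavelength λ, which sets the working distance at which the periodic mask reimages itself:

```python
# Talbot self-imaging distance z_T = 2 d^2 / lambda for a periodic mask.
def talbot_length(period, wavelength):
    """Both arguments in metres; returns z_T in metres."""
    return 2.0 * period ** 2 / wavelength
```

For example, a 100 um mask period at 500 nm gives z_T = 40 mm, so curvature limits of the measurable wavefront scale with this distance.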
Matsushima, Kyoji
2008-07-01
Rotational transformation based on coordinate rotation in Fourier space is a useful technique for simulating wave field propagation between nonparallel planes. This technique is characterized by fast computation because the transformation only requires executing a fast Fourier transform twice and a single interpolation. It is proved that the formula of the rotational transformation mathematically satisfies the Helmholtz equation. Moreover, to verify the formulation and its usefulness in wave optics, it is also demonstrated that the transformation makes it possible to reconstruct an image on arbitrarily tilted planes from a wave field captured experimentally by using digital holography.
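The two-FFT-plus-interpolation skeleton can be sketched in one dimension (rotation about the y axis): forward FFT, remap the angular spectrum onto rotated frequency coordinates by interpolation, inverse FFT. The Jacobian weighting of the full formulation is omitted for brevity and evanescent components are simply discarded; at θ = 0 the transform reduces to the identity, which serves as a sanity check:

```python
import numpy as np

# Skeleton of the tilted-plane rotational transform (1-D field, rotation
# about the y axis). A sketch only: the Jacobian factor of the full
# formulation is omitted, and evanescent components are dropped.
def rotate_field(u, dx, wavelength, theta):
    n = len(u)
    k = 2.0 * np.pi / wavelength
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    G = np.fft.fft(u)
    kz = np.sqrt(np.maximum(k**2 - kx**2, 0.0))     # propagating component
    # source-plane frequency that maps onto each destination frequency kx
    kx_src = kx * np.cos(theta) - kz * np.sin(theta)
    order = np.argsort(kx)                          # np.interp needs sorted xp
    G_rot = (np.interp(kx_src, kx[order], G[order].real)
             + 1j * np.interp(kx_src, kx[order], G[order].imag))
    G_rot[kx_src**2 > k**2] = 0.0                   # discard evanescent remnants
    return np.fft.ifft(G_rot)
```

The cost is dominated by the two FFTs plus one interpolation pass, which is the speed advantage the rotational transformation offers over direct diffraction integrals.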
A Nonlinear Dynamic Model and Free Vibration Analysis of Deployable Mesh Reflectors
NASA Technical Reports Server (NTRS)
Shi, H.; Yang, B.; Thomson, M.; Fang, H.
2011-01-01
This paper presents a dynamic model of deployable mesh reflectors, in which geometric and material nonlinearities of such a space structure are fully described. Then, by linearization around an equilibrium configuration of the reflector structure, a linearized model is obtained. With this linearized model, the natural frequencies and mode shapes of a reflector can be computed. The nonlinear dynamic model of deployable mesh reflectors is verified by using commercial finite element software in numerical simulation. As shall be seen, the proposed nonlinear model is useful for shape (surface) control of deployable mesh reflectors under thermal loads.
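Once a linearized model is available, the natural frequencies and mode shapes follow from the generalized eigenproblem K x = ω² M x. A minimal sketch on a made-up 2-DOF system (not reflector data), using a Cholesky factorization of M to reduce to a symmetric standard eigenproblem:

```python
import numpy as np

# Natural frequencies from a linearized structural model: K x = w^2 M x,
# reduced to a symmetric standard eigenproblem via Cholesky of M.
# Any K, M supplied here are toy matrices, not reflector data.
def natural_frequencies(K, M):
    L = np.linalg.cholesky(M)      # M = L L^T
    Linv = np.linalg.inv(L)
    A = Linv @ K @ Linv.T          # symmetric, same eigenvalues w^2
    w2 = np.linalg.eigvalsh(A)     # ascending eigenvalues
    return np.sqrt(np.maximum(w2, 0.0))
```

For K = [[2, -1], [-1, 2]] and M = I the eigenvalues are 1 and 3, so the frequencies are 1 and sqrt(3), a standard check for such a routine.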
Experimental verification of the rainbow trapping effect in adiabatic plasmonic gratings
Gan, Qiaoqiang; Gao, Yongkang; Wagner, Kyle; Vezenov, Dmitri; Ding, Yujie J.; Bartoli, Filbert J.
2011-01-01
We report the experimental observation of a trapped rainbow in adiabatically graded metallic gratings, designed to validate theoretical predictions for this unique plasmonic structure. One-dimensional graded nanogratings were fabricated and their surface dispersion properties tailored by varying the grating groove depth, whose dimensions were confirmed by atomic force microscopy. Tunable plasmonic bandgaps were observed experimentally, and direct optical measurements on graded grating structures show that light of different wavelengths in the 500–700-nm region is “trapped” at different positions along the grating, consistent with computer simulations, thus verifying the “rainbow” trapping effect. PMID:21402936
NASA Technical Reports Server (NTRS)
Wilson, Timmy R.; Beech, Geoffrey; Johnston, Ian
2009-01-01
The NESC Assessment Team reviewed a computer simulation of the LC-39 External Tank (ET) GH2 Vent Umbilical system developed by United Space Alliance (USA) for the Space Shuttle Program (SSP) and designated KSC Analytical Tool ID 451 (KSC AT-451). The team verified that the vent arm kinematics were correctly modeled, but noted that there were relevant system sensitivities. Also, the structural stiffness used in the math model varied somewhat from the analytic calculations. Results of the NESC assessment were communicated to the model developers.
Bifurcation analysis of dengue transmission model in Baguio City, Philippines
NASA Astrophysics Data System (ADS)
Libatique, Criselda P.; Pajimola, Aprimelle Kris J.; Addawe, Joel M.
2017-11-01
In this study, we formulate a deterministic model for the transmission dynamics of dengue fever in Baguio City, Philippines. We analyzed the existence of the equilibria of the dengue model. We computed and obtained conditions for the existence of the equilibrium states. Stability analysis for the system is carried out for disease free equilibrium. We showed that the system becomes stable under certain conditions of the parameters. A particular parameter is taken and with the use of the Theory of Centre Manifold, the proposed model demonstrates a bifurcation phenomenon. We performed numerical simulation to verify the analytical results.
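The qualitative behavior of such host-vector models can be illustrated with a minimal Ross-Macdonald-style sketch (not the Baguio City model; all rates are made up). For this toy system R0² = β_h β_v / (γ μ_v), and the infection dies out when R0 < 1:

```python
# Minimal host-vector dengue sketch (Ross-Macdonald style); illustrative
# rates only, not the Baguio City model. Forward-Euler integration.
def simulate(beta_h, beta_v, gamma, mu_v, days, dt=0.01):
    Sh, Ih, Rh = 0.99, 0.01, 0.0   # host fractions (S, I, R)
    Sv, Iv = 0.99, 0.01            # vector fractions (births balance deaths)
    for _ in range(int(days / dt)):
        new_h = beta_h * Sh * Iv   # host infections via infectious bites
        new_v = beta_v * Sv * Ih   # vector infections from viremic hosts
        dSh, dIh, dRh = -new_h, new_h - gamma * Ih, gamma * Ih
        dSv, dIv = mu_v * Iv - new_v, new_v - mu_v * Iv
        Sh += dt * dSh; Ih += dt * dIh; Rh += dt * dRh
        Sv += dt * dSv; Iv += dt * dIv
    return Ih, Iv, Rh
```

With subcritical parameters (R0 < 1) the infected fractions decay toward the disease-free equilibrium, while supercritical parameters produce an epidemic that moves most hosts into the recovered class, mirroring the stability dichotomy analyzed in the paper.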
NASA Technical Reports Server (NTRS)
Burns, W. W., III; Wilson, T. G.
1976-01-01
State-plane analysis techniques are employed to study the voltage step up energy storage dc-to-dc converter. Within this framework, an example converter operating under the influence of a constant on time and a constant frequency controller is examined. Qualitative insight gained through this approach is used to develop a conceptual free running control law for the voltage step up converter which can achieve steady state operation in one on/off cycle of control. Digital computer simulation data is presented to illustrate and verify the theoretical discussions presented.
NASA Astrophysics Data System (ADS)
Jiang, Zhen-Yu; Li, Lin; Huang, Yi-Fan
2009-07-01
The segmented mirror telescope is widely used. The aberrations of segmented mirror systems are different from single mirror systems. This paper uses the Fourier optics theory to analyse the Zernike aberrations of segmented mirror systems. It concludes that the Zernike aberrations of segmented mirror systems obey the linearity theorem. The design of a segmented space telescope and segmented schemes are discussed, and its optical model is constructed. The computer simulation experiment is performed with this optical model to verify the suppositions. The experimental results confirm the correctness of the model.
Fuzzy Logic Based Controller for a Grid-Connected Solid Oxide Fuel Cell Power Plant.
Chatterjee, Kalyan; Shankar, Ravi; Kumar, Amit
2014-10-01
This paper describes a mathematical model of a solid oxide fuel cell (SOFC) power plant integrated in a multimachine power system. The utilization factor of the fuel stack is maintained at steady state by tuning the fuel valve in the fuel processor at a rate proportional to the current drawn from the fuel stack. A suitable fuzzy logic controller is used for the overall system, its objectives being to control the current drawn by the power conditioning unit and to meet a desired output power demand. The proposed control scheme is verified through computer simulations.
Study on the leakage flow through a clearance gap between two stationary walls
NASA Astrophysics Data System (ADS)
Zhao, W.; Billdal, J. T.; Nielsen, T. K.; Brekke, H.
2012-11-01
In the present paper, the leakage flow in the clearance gap between stationary walls was studied experimentally, theoretically and numerically by computational fluid dynamics (CFD) in order to find the relationship between leakage flow, pressure difference and clearance gap. The experimental set-up of the clearance gap between two stationary walls is a simplification of the gap between the guide vane faces and facing plates in Francis turbines. This model was built in the Waterpower laboratory at the Norwegian University of Science and Technology (NTNU). An empirical formula for calculating the leakage flow rate between the two stationary walls was derived from the experimental study. The experimental model was also simulated by computational fluid dynamics, employing the ANSYS CFX commercial software, in order to study the flow structure. Both the numerical simulation results and the empirical formula results are in good agreement with the experimental results. The correctness of the empirical formula is verified by the experimental data, and the formula has proven very useful for quickly predicting the leakage flow rate through the guide vanes of hydraulic turbines.
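For comparison, the classical laminar baseline for leakage through a thin plane gap is the plane-Poiseuille result Q = b h³ Δp / (12 μ L), with its characteristic cubic dependence on clearance; the paper's own empirical formula is not reproduced here:

```python
# Classical laminar (plane-Poiseuille) baseline for leakage through a
# thin plane gap; NOT the paper's empirical formula.
def gap_leakage(delta_p, h, b, length, mu):
    """Volumetric leakage rate, m^3/s.

    delta_p : pressure difference across the gap, Pa
    h       : gap height (clearance), m
    b       : gap width, m
    length  : flow-path length, m
    mu      : dynamic viscosity, Pa*s
    """
    return b * h**3 * delta_p / (12.0 * mu * length)
```

The h³ scaling explains why small changes in guide-vane clearance have an outsized effect on leakage, which is the practical motivation for a quick predictive formula.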
NASA Astrophysics Data System (ADS)
Hoffmann, T. L.; Lieb, S.; Pauldrach, A. W. A.; Lesch, H.; Hultzsch, P. J. N.; Birk, G. T.
2012-08-01
Aims: The aim of this work is to verify whether turbulent magnetic reconnection can provide the additional energy input required to explain the up to now only poorly understood ionization mechanism of the diffuse ionized gas (DIG) in galaxies and its observed emission line spectra. Methods: We use a detailed non-LTE radiative transfer code that does not make use of the usual restrictive gaseous nebula approximations to compute synthetic spectra for gas at low densities. Excitation of the gas is via an additional heating term in the energy balance as well as by photoionization. Numerical values for this heating term are derived from three-dimensional resistive magnetohydrodynamic two-fluid plasma-neutral-gas simulations to compute energy dissipation rates for the DIG under typical conditions. Results: Our simulations show that magnetic reconnection can liberate enough energy to by itself fully or partially ionize the gas. However, synthetic spectra from purely thermally excited gas are incompatible with the observed spectra; a photoionization source must additionally be present to establish the correct (observed) ionization balance in the gas.
van Iersel, Leo; Kelk, Steven; Lekić, Nela; Scornavacca, Celine
2014-05-05
Reticulate events play an important role in determining evolutionary relationships. Computing the minimum number of such events needed to explain discordance between two phylogenetic trees is computationally hard. Even for binary trees, exact solvers struggle to solve instances with reticulation number larger than 40-50. Here we present CycleKiller and NonbinaryCycleKiller, the first methods to produce solutions verifiably close to optimality for instances with hundreds or even thousands of reticulations. Using simulations, we demonstrate that these algorithms run quickly for large and difficult instances, producing solutions that are very close to optimality. As a spin-off from our simulations we also present TerminusEst, which is the fastest exact method currently available that can handle nonbinary trees: this is used to measure the accuracy of the NonbinaryCycleKiller algorithm. All three methods are based on extensions of previous theoretical work (SIDMA 26(4):1635-1656, TCBB 10(1):18-25, SIDMA 28(1):49-66) and are publicly available. We also apply our methods to real data.
Mass transfer effect of the stalk contraction-relaxation cycle of Vorticella convallaria
NASA Astrophysics Data System (ADS)
Zhou, Jiazhong; Admiraal, David; Ryu, Sangjin
2014-11-01
Vorticella convallaria is a genus of protozoa living in freshwater. Its stalk contracts and coils, pulling the cell body towards the substrate at a remarkable speed, and then relaxes to its extended state much more slowly than it contracted. However, the reason for Vorticella's stalk contraction is still unknown. It is presumed that the water flow induced by the stalk contraction-relaxation cycle may augment mass transfer near the substrate. We investigated this hypothesis using an experimental model with particle tracking velocimetry and a computational fluid dynamics model. In both approaches, Vorticella was modeled as a solid sphere translating perpendicular to a solid surface in water. After having been validated against the experimental model and verified by a grid convergence index test, the computational model simulated water flow during the cycle based on the measured time course of stalk-length changes of Vorticella. Based on the simulated flow field, we calculated trajectories of particles near the model Vorticella, and then evaluated the mass transfer effect of Vorticella's stalk contraction from the particles' motion. We acknowledge support from the Laymann Seed Grant of the University of Nebraska-Lincoln.
Atomistic modeling of thermomechanical properties of SWNT/Epoxy nanocomposites
NASA Astrophysics Data System (ADS)
Fasanella, Nicholas; Sundararaghavan, Veera
2015-09-01
Molecular dynamics simulations are performed to compute thermomechanical properties of cured epoxy resins reinforced with pristine and covalently functionalized carbon nanotubes. A DGEBA-DDS epoxy network was built using the ‘dendrimer’ growth approach where 75% of available epoxy sites were cross-linked. The epoxy model is verified through comparisons to experiments, and simulations are performed on nanotube reinforced cross-linked epoxy matrix using the CVFF force field in LAMMPS. Full stiffness matrices and linear coefficient of thermal expansion vectors are obtained for the nanocomposite. Large increases in stiffness and large decreases in thermal expansion were seen along the direction of the nanotube for both nanocomposite systems when compared to neat epoxy. The direction transverse to nanotube saw a 40% increase in stiffness due to covalent functionalization over neat epoxy at 1 K whereas the pristine nanotube system only saw a 7% increase due to van der Waals effects. The functionalized SWNT/epoxy nanocomposite showed an additional 42% decrease in thermal expansion along the nanotube direction when compared to the pristine SWNT/epoxy nanocomposite. The stiffness matrices are rotated over every possible orientation to simulate the effects of an isotropic system of randomly oriented nanotubes in the epoxy. The randomly oriented covalently functionalized SWNT/Epoxy nanocomposites showed substantial improvements over the plain epoxy in terms of higher stiffness (200% increase) and lower thermal expansion (32% reduction). Through MD simulations, we develop means to build simulation cells, perform annealing to reach correct densities, compute thermomechanical properties and compare with experiments.
NASA Astrophysics Data System (ADS)
Bilyeu, David
This dissertation presents an extension of the Conservation Element Solution Element (CESE) method from second- to higher-order accuracy. The new method retains the favorable characteristics of the original second-order CESE scheme, including (i) the use of the space-time integral equation for conservation laws, (ii) a compact mesh stencil, (iii) stability up to a CFL number of unity, (iv) a fully explicit, time-marching integration scheme, (v) true multidimensionality without directional splitting, and (vi) the ability to handle two- and three-dimensional geometries by using unstructured meshes. This algorithm has been thoroughly tested in one, two and three spatial dimensions and has been shown to obtain the desired order of accuracy for solving both linear and non-linear hyperbolic partial differential equations. The scheme has also shown its ability to accurately resolve discontinuities in the solutions. Higher-order unstructured methods such as the Discontinuous Galerkin (DG) method and the Spectral Volume (SV) method have been developed for one-, two- and three-dimensional applications. Although these schemes have seen extensive development and use, certain drawbacks have been well documented; for example, their explicit versions have very stringent stability criteria, which require that the time step be reduced as the order of the solver increases for a given simulation on a given mesh. The research presented in this dissertation builds upon the work of Chang, who developed a fourth-order CESE scheme to solve a scalar one-dimensional hyperbolic partial differential equation. The completed research has resulted in two key deliverables. The first is a detailed derivation of high-order CESE methods on unstructured meshes for solving the conservation laws in two- and three-dimensional spaces. The second is the implementation of these numerical methods in a computer code.
For code development, a one-dimensional solver for the Euler equations was developed. This work is an extension of Chang's work on the fourth-order CESE method for solving a one-dimensional scalar convection equation. A generic formulation for the nth-order CESE method, where n ≥ 4, was derived. Indeed, numerical implementation of the scheme confirmed that the order of convergence was consistent with the order of the scheme. For the two- and three-dimensional solvers, SOLVCON was used as the basic framework for code implementation. A new solver kernel for the fourth-order CESE method has been developed and integrated into the framework provided by SOLVCON. The main part of SOLVCON, which deals with unstructured meshes and parallel computing, remains intact. The SOLVCON code for data transmission between computer nodes for High Performance Computing (HPC). To validate and verify the newly developed high-order CESE algorithms, several one-, two- and three-dimensional simulations where conducted. For the arbitrary order, one-dimensional, CESE solver, three sets of governing equations were selected for simulation: (i) the linear convection equation, (ii) the linear acoustic equations, (iii) the nonlinear Euler equations. All three systems of equations were used to verify the order of convergence through mesh refinement. In addition the Euler equations were used to solve the Shu-Osher and Blastwave problems. These two simulations demonstrated that the new high-order CESE methods can accurately resolve discontinuities in the flow field.For the two-dimensional, fourth-order CESE solver, the Euler equation was employed in four different test cases. The first case was used to verify the order of convergence through mesh refinement. The next three cases demonstrated the ability of the new solver to accurately resolve discontinuities in the flows. 
This was demonstrated through: (i) the interaction between acoustic waves and an entropy pulse, (ii) supersonic flow over a circular blunt body, (iii) supersonic flow over a guttered wedge. To validate and verify the three-dimensional, fourth-order CESE solver, two different simulations where selected. The first used the linear convection equations to demonstrate fourth-order convergence. The second used the Euler equations to simulate supersonic flow over a spherical body to demonstrate the scheme's ability to accurately resolve shocks. All test cases used are well known benchmark problems and as such, there are multiple sources available to validate the numerical results. Furthermore, the simulations showed that the high-order CESE solver was stable at a CFL number near unity.
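The convergence studies described above rely on comparing error norms across successively refined meshes. As a generic sketch (not the dissertation's code), the observed order of accuracy can be estimated from two error levels and the refinement ratio:

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed order of accuracy from errors on two successively refined
    meshes: p = log(e_coarse / e_fine) / log(r)."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# Hypothetical L2 errors when halving the mesh spacing of a 4th-order scheme:
errors = [1.6e-3, 1.0e-4, 6.25e-6]
orders = [observed_order(errors[i], errors[i + 1]) for i in range(2)]
```

With each halving of the spacing, a fourth-order scheme should cut the error by roughly a factor of 16, which is what the illustrative numbers above encode.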
NASA Astrophysics Data System (ADS)
Guo, Minghuan; Sun, Feihu; Wang, Zhifeng
2017-06-01
The solar tower concentrator is mainly composed of the central receiver on the tower top and the heliostat field around the tower. The optical efficiencies of a solar tower concentrator are important to the overall thermal performance of the collector, and the aperture plane of a cavity receiver, or the (inner or external) absorbing surface of any central receiver, is a key interface of energy flux. It is therefore necessary to simulate and analyze the concentrated, time-varying solar flux density distributions on the flat or curved receiving surface of the collector, with the main optical errors considered. The transient concentrated solar flux on the receiving surface is the superposition of the flux density distributions of all the normally working heliostats in the field. In this paper, we introduce a new backward ray tracing (BRT) method combined with a lumped effective solar cone to simulate the flux density map on the receiving surface. For BRT, bundles of rays are launched at the receiving-surface points of interest, strike directly on the valid cell centers among the uniformly sampled mirror cell centers in the mirror surface of the heliostats, and are then directed into the effective solar cone around the incident sun-beam direction after reflection. All the optical errors are convolved into the effective solar cone, whose brightness distribution is assumed here to be of circular Gaussian type. The mirror curvature can be adequately represented by a certain number of local normal vectors at the mirror cell centers of a heliostat. The shading and blocking of a heliostat's mirror region by neighboring heliostats, and the tower's shading of the heliostat mirror, are all computed on the flat-ground-plane platform, i.e., by projecting the mirror contours and the envelope cylinder of the tower onto the horizontal ground plane along the sun-beam incident direction or along the reflection directions. 
If the shading projection of a sampled mirror point of the current heliostat falls inside the shadow cast by a neighboring heliostat or by the tower, this mirror point is shaded from the incident sun beam. A code based on this new ray tracing method has been developed in MATLAB for the 1 MW Badaling solar tower power plant in Beijing. There are 100 azimuth-elevation tracking heliostats in the solar field, and the tower is 118 m high. The mirror surface of each heliostat is 10 m wide and 10 m long; it is composed of 8 rows × 8 columns of square mirror facets, each 1.25 m × 1.25 m. The code was also verified by two sets of sun-beam concentrating experiments of the heliostat field on June 14, 2015. One set of optical experiments was conducted between some typical heliostats to verify the shading and blocking computation of the code, since that is the most complicated, time-consuming and important optical computing section of the code. The other set of solar concentrating tests was carried out on the field-center heliostat (No. 78) to verify the simulated solar flux images on the white target region of the northern wall of the tower. The target center is 74.5 m above the ground plane.
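The shading test described above reduces to projecting a mirror point onto the ground plane along the sun direction and testing it against a projected contour. A minimal sketch of that geometric core, with illustrative coordinates (not the paper's MATLAB code):

```python
import numpy as np

def project_to_ground(point, sun_dir):
    """Project a 3-D point onto the z = 0 ground plane along sun_dir."""
    p, d = np.asarray(point, float), np.asarray(sun_dir, float)
    t = -p[2] / d[2]          # ray parameter where the ray meets z = 0
    return (p + t * d)[:2]

def point_in_polygon(pt, poly):
    """Ray-casting test: is a 2-D point inside a polygon (vertex list)?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

# A mirror point 5 m up, sun rays descending at 45 deg in the x-z plane:
shadow_pt = project_to_ground([0.0, 0.0, 5.0], [1.0, 0.0, -1.0])  # lands at (5, 0)
shaded = point_in_polygon(shadow_pt, [(4, -2), (6, -2), (6, 2), (4, 2)])
```

If the projected point lands inside a neighbor's projected contour (here a hypothetical 2 m × 4 m shadow rectangle), the mirror point is counted as shaded.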
Reagan, Andrew J; Dubief, Yves; Dodds, Peter Sheridan; Danforth, Christopher M
2016-01-01
A thermal convection loop is an annular chamber filled with water, heated on the bottom half and cooled on the top half. With sufficiently large heat forcing, the direction of fluid flow in the loop oscillates chaotically, dynamics analogous to the Earth's weather. As is the case for state-of-the-art weather models, we only observe the statistics over a small region of state space, making prediction difficult. To overcome this challenge, data assimilation (DA) methods, and specifically ensemble methods, use the computational model itself to estimate its uncertainty and optimally combine observations into an initial condition for predicting the future state. Here, we build and verify four distinct DA methods, and then perform a twin-model experiment with the computational fluid dynamics simulation of the loop, using the Ensemble Transform Kalman Filter (ETKF) to assimilate observations and predict flow reversals. We show that adaptively shaped localized covariance outperforms static localized covariance with the ETKF, and allows the use of fewer observations in predicting flow reversals. We also show that a Dynamic Mode Decomposition (DMD) of the temperature and velocity fields recovers the low-dimensional system underlying reversals, finding specific modes which together are predictive of reversal direction.
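The ETKF analysis step used in such twin-model experiments can be sketched generically, following the standard ensemble-space formulation; this is not the authors' implementation and omits covariance localization entirely:

```python
import numpy as np

def sqrtm_psd(A):
    """Symmetric square root of a positive semidefinite matrix via eigh."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def etkf_analysis(ens, y, H, R):
    """One ETKF analysis step in ensemble space.
    ens: (n, N) forecast ensemble; y: (m,) observation;
    H: (m, n) linear observation operator; R: (m, m) obs error covariance."""
    n, N = ens.shape
    xb = ens.mean(axis=1, keepdims=True)
    Xb = ens - xb                          # state-space perturbations
    Yb = H @ Xb                            # observation-space perturbations
    C = Yb.T @ np.linalg.inv(R)
    # Analysis error covariance in the N-dimensional ensemble space.
    Pa_tilde = np.linalg.inv((N - 1) * np.eye(N) + C @ Yb)
    wbar = Pa_tilde @ C @ (y.reshape(-1, 1) - H @ xb)   # mean update weights
    Wa = sqrtm_psd((N - 1) * Pa_tilde)                  # perturbation weights
    return xb + Xb @ (wbar + Wa)           # analysis ensemble, (n, N)
```

A one-variable smoke test: a forecast ensemble centered at 1 pulled toward an accurate observation at 3 should land in between, closer to the observation.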
PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems
Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota
2016-01-01
PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user functions to handle time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs the computation server-side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems. PMID:27174940
NASA Astrophysics Data System (ADS)
Tsao, Thomas R.; Tsao, Doris
1997-04-01
In the 1980s, neurobiologists suggested a simple mechanism in primate visual cortex for maintaining a stable and invariant representation of a moving object: the receptive fields of visual neurons undergo real-time transforms in response to motion. When the visual stimulus changes due to motion, the geometric transform of the stimulus triggers a dual transform of the receptive field, which compensates for the geometric variation in the stimulus. This process can be modelled using a Lie group method, with a massive array of affine-parameter sensing circuits functioning as a smart sensor tightly coupled to the passive imaging sensor (retina). The neural geometric engine is a neuromorphic computing device implementing our Lie group model of spatial perception in the primate primary visual cortex. We have developed a computer simulation, experimented with real and synthetic image data, and performed preliminary research on using analog VLSI technology to implement the neural geometric engine. We have benchmarked the engine on DMA terrain data against their results and have built an analog integrated circuit to verify the computational structure of the engine. When fully implemented on an analog VLSI chip, it will be able to accurately reconstruct a 3D terrain surface in real time from stereoscopic imagery.
Pan, Yuxi; Qiu, Rui; Gao, Linfeng; Ge, Chaoyong; Zheng, Junzheng; Xie, Wenzhang; Li, Junli
2014-09-21
With the rapidly growing number of CT examinations, the consequential radiation risk has aroused more and more attention. The average dose in each organ during CT scans can only be obtained by using Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations in 1-year-old computational phantom were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique and the 1-year-old computational phantom was applied to estimate organ doses derived from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that doses of axial scans are close to those of helical scans; therefore, this database could be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from the measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The result shows that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and the dose database provides a method for the estimation of 1-year-old patient doses in a variety of CT examinations.
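The dose database described above maps organs and axial couch positions to dose contributions; since the authors verified that summed axial doses approximate helical doses, a helical estimate reduces to a scaled sum of database entries. A sketch with purely illustrative numbers (not the paper's measured data; organ names, slice indices and values are hypothetical):

```python
# Hypothetical per-slice dose database: dose_db[organ][slice_index] is the
# organ dose (mGy per 100 mAs) contributed by one axial scan at that couch
# position. All values below are illustrative placeholders.
dose_db = {
    "liver":   {10: 0.8, 11: 1.5, 12: 1.4, 13: 0.6},
    "thyroid": {2: 1.1, 3: 1.3, 4: 0.9},
}

def helical_organ_dose(organ, scan_range, mAs):
    """Approximate a helical-scan organ dose by summing axial-scan
    contributions over the covered couch positions, scaled linearly with
    the tube current-time product (per-100-mAs normalization assumed)."""
    per_slice = dose_db[organ]
    return sum(per_slice.get(i, 0.0) for i in scan_range) * (mAs / 100.0)

liver_dose = helical_organ_dose("liver", range(0, 14), 200)  # mGy
```

An organ entirely outside the scan range simply accumulates no dose, which is how such a lookup naturally handles partial-body protocols.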
NASA Technical Reports Server (NTRS)
Clapp, Brian R.; Sills, Joel W., Jr.; Voorhees, Carl R.; Griffin, Thomas J. (Technical Monitor)
2002-01-01
The Vibration Admittance Test (VET) was performed to measure the emitted disturbances of the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cryogenic Cooler (NCC) in preparation for NCC installation onboard the Hubble Space Telescope (HST) during Servicing Mission 3B (SM3B). Details of the VET ground test are described, including facility characteristics, sensor complement and configuration, NCC suspension, and background noise measurements. Kinematic equations used to compute NCC mass-center displacements and accelerations from raw measurements are presented, and dynamic equations of motion for the NCC VET system are developed and verified using modal test data. A MIMO linear frequency-domain analysis method is used to compute NCC-induced loads and HST boresight jitter from VET measurements. These results are verified by a nonlinear time-domain analysis approach using a high-fidelity structural dynamics and pointing control simulation for HST. NCC emitted acceleration levels not exceeding 35 micro-g rms were measured in the VET, and the analysis methods herein predict 3.1 milli-arcseconds rms jitter for HST on-orbit. Because the NCC is predicted to become the predominant disturbance source for HST, VET results indicate that HST will continue to meet the 7 milli-arcsecond pointing stability mission requirement in the post-SM3B era.
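In a linear frequency-domain analysis of this kind, an rms response follows from integrating the disturbance PSD filtered through the system's frequency response magnitude. A generic single-channel sketch (illustrative numbers only, not the HST/NCC data, and the true analysis is MIMO):

```python
import numpy as np

def rms_from_psd(freqs, input_psd, frf_mag):
    """RMS of a response whose one-sided PSD is |H(f)|^2 * S_in(f),
    integrated over frequency with the trapezoidal rule."""
    s_out = (np.asarray(frf_mag) ** 2) * np.asarray(input_psd)
    area = np.sum(0.5 * (s_out[1:] + s_out[:-1]) * np.diff(freqs))
    return float(np.sqrt(area))

# Illustrative numbers: a flat disturbance PSD through a unit-gain band.
f = np.linspace(1.0, 100.0, 1000)
rms = rms_from_psd(f, np.full_like(f, 1e-4), np.ones_like(f))
```

For a flat PSD of level S over bandwidth B through unit gain, the rms is simply sqrt(S * B), which the test below checks.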
Wang, Qiang; Liu, Yuefei; Chen, Yiqiang; Ma, Jing; Tan, Liying; Yu, Siyuan
2017-03-01
Accurate location computation for a beacon is an important factor in the reliability of satellite optical communications. However, location precision is generally limited by the resolution of the CCD, so improving the location precision of a beacon is an important and urgent issue. In this paper, we present two precise centroid computation methods for locating a beacon in satellite optical communications. First, the beacon is divided into several parts according to the gray gradients. Different numbers of interpolation points and different interpolation methods are then applied in the interpolation area; we calculate the centroid position after interpolation and choose the best strategy according to the algorithm. This method is called the "gradient segmentation interpolation" (GSI) algorithm. To take full advantage of the pixels in the beacon's central portion, we also present an improved segmentation square weighting (SSW) algorithm, whose effectiveness is verified by a simulation experiment. Finally, an experiment was set up to verify the GSI and SSW algorithms. The results indicate that both algorithms improve locating accuracy over that of the traditional gray centroid method. These approaches help to greatly improve the location precision of a beacon in satellite optical communications.
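Both algorithms build on the intensity-weighted (gray) centroid; squaring the weights, in the spirit of the SSW idea, emphasizes the bright central pixels. A minimal sketch of that baseline (the paper's full GSI/SSW pipeline with gradient segmentation and interpolation is not reproduced here):

```python
import numpy as np

def gray_centroid(img, power=1):
    """Intensity-weighted centroid of a spot image, returned as (cx, cy)
    in pixel coordinates. power=1 is the traditional gray centroid;
    power=2 squares the weights, emphasizing bright central pixels."""
    w = np.asarray(img, float) ** power
    ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# A symmetric toy spot: both weightings should return the center pixel.
spot = np.array([[0, 1, 0],
                 [1, 8, 1],
                 [0, 1, 0]])
cx, cy = gray_centroid(spot)
```

On real beacon images the two weightings differ: background noise pulls the plain centroid around, while squared weighting suppresses the dim periphery.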
NASA Astrophysics Data System (ADS)
Odinokov, A. V.; Leontyev, I. V.; Basilevsky, M. V.; Petrov, N. Ch.
2011-01-01
Potentials of mean force (PMF) are calculated for two model ion pairs in two non-aqueous solvents. Standard non-polarizable molecular dynamics simulation (NPMD) and approximate polarizable simulation (PMD) are implemented and compared as tools for monitoring PMF profiles. For the polar solvent (dimethylsulfoxide, DMSO) the PMF generated in terms of the NPMD reproduces fairly well the refined PMD-PMF profile. For the non-polar solvent (benzene) the conventional NPMD computation proves to be deficient. The validity of the correction found in terms of the approximate PMD approach is verified by comparison with the result of the explicit PMD computation in benzene. The shapes of the PMF profiles in DMSO and in benzene are quite different. In DMSO, owing to dielectric screening, the PMF presents a flat plot with a shallow minimum positioned in the vicinity of the van der Waals contact of the ion pair. For the benzene case, the observed minimum proves to be unexpectedly deep, which manifests the formation of a tightly bound contact ion pair. This remarkable effect arises from the strong electrostatic interaction that is incompletely screened by a non-polar medium. The PMFs for the binary benzene/DMSO mixtures display intermediate behaviour depending on the DMSO content.
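A PMF profile along the ion-pair separation can be recovered from sampled distances by Boltzmann inversion, W(r) = -kT ln p(r). A sketch with a hypothetical histogram (the radial r² Jacobian correction and the authors' PMD machinery are deliberately omitted for brevity):

```python
import math

def pmf_from_histogram(counts, kT=2.494):
    """Potential of mean force from a sampled distance histogram via
    Boltzmann inversion, W(r) = -kT ln p(r), shifted so min(W) = 0.
    kT defaults to ~2.494 kJ/mol (about 300 K); counts must be > 0."""
    total = sum(counts)
    w = [-kT * math.log(c / total) for c in counts]
    w_min = min(w)
    return [x - w_min for x in w]

# Hypothetical bin counts along the separation coordinate: the most
# populated bin (index 2) becomes the PMF minimum at zero.
profile = pmf_from_histogram([5, 40, 120, 60, 20])
```

A deep, narrow minimum in such a profile, as reported here for benzene, corresponds to a heavily populated contact-pair bin that dwarfs its neighbors.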
Seedorf, Jens
2013-09-01
Livestock operations are under increasing pressure to fulfil minimum environmental requirements and avoid polluting the atmosphere. In regions with high farm animal densities, new farm buildings receive building permission only when biological exhaust air treatment systems (BEATS), such as biofilters, are in place. However, it is currently unknown whether BEATS can harbour pathogens such as zoonotic agents, which are potentially emitted via the purified gas. Because BEATS are located very close to the livestock building, it is assumed that BEATS-related microorganisms are aerially transported to farm animals via the inlet of the ventilation system. To support this hypothesis, a computer simulation was applied to calculate the wind field around a facility consisting of a virtual livestock house and an adjacent biofilter. Under the chosen wind conditions (speed and direction), it can be shown that turbulence and eddies may occur in the near surroundings of a livestock building with an adjacent biofilter. Consequently, this might cause the entry of the biofilter's released purified gas, and any microorganisms it carries, into the barn. If field investigations verify the results of the simulations, counter-measures must be taken to ensure biosecurity on farms with BEATS. © 2013 Society of Chemical Industry.
NASA Astrophysics Data System (ADS)
Wang, Futong; Tao, Xiaxin; Xie, Lili; Raj, Siddharthan
2017-04-01
This study proposes a Green's function, an essential representation of water-saturated ground under moving excitation, to simulate ground-borne vibration from trains. First, general solutions to the governing equations of the poroelastic medium are derived by means of integral transforms. Secondly, the transmission and reflection matrix approach is used to formulate the relationship between displacement and stress in the stratified ground, which yields the matrix of the Green's function. The Green's function is then combined into a train-track-ground model and verified against typical examples and a field test. Additional simulations show that the computed ground vibration attenuates faster in the immediate vicinity of the track than in the surrounding area. The wavelength of wheel-rail unevenness has a notable effect on the computed displacement and pore pressure. The variation of vibration intensity with depth is significantly influenced by the layering of the strata. When the train speed equals the Rayleigh wave velocity, a Mach cone appears in the simulated wave field. The proposed Green's function is an appropriate representation for a layered ground with a shallow water table, and will be helpful in understanding the dynamic responses of the ground to complicated moving excitation.
Recirculating, passive micromixer with a novel sawtooth structure.
Nichols, Kevin P; Ferullo, Julia R; Baeumner, Antje J
2006-02-01
A microfluidic device capable of recirculating nanolitre to microlitre volumes in order to efficiently mix solutions is described. The device consists of molded polydimethylsiloxane (PDMS) channels with pressure inlet and outlet holes sealed by a glass lid. Recirculation is accomplished by a repeatedly reciprocated flow over an iterated sawtooth structure. The sawtooth structure changes the fluid velocity of individual streamlines differently depending on whether the fluid is flowing backward or forward over the structure. Thus, individual streamlines can be accelerated or decelerated relative to the other streamlines, allowing sections of the fluid to interact that would normally be linearly separated. Low Reynolds numbers imply that the process is reversible, neglecting diffusion. Computer simulations were carried out using FLUENT. Subsequently, fluorescent indicators were employed to experimentally verify these numerical simulations of the recirculation principle. Finally, mixing of a carboxyfluorescein-labeled DMSO plug with an unlabeled DMSO plug across an immiscible hydrocarbon plug was investigated. At cycling rates of 1 Hz across five sawtooth units, the time to reach steady state in the channels was recorded, i.e. until both DMSO plugs had the same fluorescence intensity. With the sawtooth structures, steady state was reached five times faster than in channels without them, which matched what would be expected from the numerical simulations. The microfluidic mixer is unique due to its versatility with respect to scaling, its potential to also mix solutions containing small particles such as beads and cells, and its ease of fabrication and use.
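The reversibility argument above rests on the channel Reynolds number being far below unity. A quick check with typical microchannel values (the dimensions and velocity below are illustrative, not the device's measured parameters):

```python
def reynolds_number(velocity, hydraulic_diameter, density=1000.0,
                    viscosity=1.0e-3):
    """Channel Reynolds number Re = rho * v * D_h / mu, SI units;
    defaults are approximately those of water at room temperature."""
    return density * velocity * hydraulic_diameter / viscosity

# A 100-micron channel at 1 mm/s: Re is about 0.1, deep in the laminar,
# nearly reversible (Stokes-flow) regime.
re = reynolds_number(1e-3, 100e-6)
```

At Re well below 1, inertial effects are negligible and reversing the driving pressure retraces the streamlines, which is exactly why the sawtooth asymmetry, not inertia, must do the mixing.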
NASA Technical Reports Server (NTRS)
Drozda, Tomasz G.; Axdahl, Erik L.; Cabell, Karen F.
2014-01-01
With the increasing cost of physics experiments and the simultaneous increase in availability and maturity of computational tools, it is not surprising that computational fluid dynamics (CFD) is playing an increasingly important role, not only in post-test investigations, but also in the early stages of experimental planning. This paper describes a CFD-based effort, executed in close collaboration between computational fluid dynamicists and experimentalists, to develop a virtual experiment during the early planning stages of the Enhanced Injection and Mixing project at NASA Langley Research Center. This project aims to investigate supersonic combustion ramjet (scramjet) fuel injection and mixing physics, improve the understanding of the underlying physical processes, and develop enhancement strategies and functional relationships relevant to flight Mach numbers greater than 8. The purpose of the virtual experiment was to provide flow-field data to aid in the design of the experimental apparatus and the in-stream rake probes, to verify the nonintrusive measurements based on NO-PLIF, and to perform pre-test analysis of quantities obtainable from the experiment and CFD. The approach also allowed the joint team to develop common data processing and analysis tools and to test research ideas. The virtual experiment consisted of a series of Reynolds-averaged simulations (RAS). These simulations included the facility nozzle, the experimental apparatus with a baseline strut injector, and the test cabin. Pure helium and helium-air mixtures were used to determine the efficacy of different inert gases for modeling hydrogen injection. The results of the simulations were analyzed by computing mixing efficiency, total pressure recovery, and stream thrust potential. As the experimental effort progresses, the simulation results will be compared with the experimental data to calibrate the modeling constants present in the CFD and validate simulation fidelity. 
CFD will also be used to investigate different injector concepts, improve understanding of the flow structure and flow physics, and develop functional relationships. Both RAS and large eddy simulations (LES) are planned for post-test analysis of the experimental data.
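One common definition of the mixing efficiency computed in such scramjet analyses is the mass-flux-weighted fraction of fuel mixed to stoichiometric proportion or leaner on a cross-plane. A sketch under that assumption (the paper's exact definition may differ; the stoichiometric fraction used is the hydrogen-air value, and all inputs are illustrative):

```python
def mixing_efficiency(alpha, rho_u, a_st=0.0291):
    """Mass-flux-weighted mixing efficiency on a discretized cross-plane.
    alpha: fuel mass fraction per cell; rho_u: streamwise mass flux per
    cell (already multiplied by cell area). Fuel leaner than a_st counts
    fully; richer fuel counts only up to the locally burnable amount."""
    mixed = total = 0.0
    for a, f in zip(alpha, rho_u):
        a_mixed = a if a <= a_st else a_st * (1.0 - a) / (1.0 - a_st)
        mixed += a_mixed * f
        total += a * f
    return mixed / total if total else 0.0
```

A plane where all cells are already lean scores 1.0 (fully mixed), while a concentrated rich core drags the efficiency well below unity.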
NASA Astrophysics Data System (ADS)
Xie, Beibei; Kong, Lingfu; Kong, Deming; Kong, Weihang; Li, Lei; Liu, Xingbin; Chen, Jiliang
2017-11-01
In order to accurately measure the flow rate under low-yield horizontal well conditions, an auto-cumulative flowmeter (ACF) was proposed. Using the proposed flowmeter, the oil flow rate in horizontal oil-water two-phase segregated flow can be finely extracted. The computational fluid dynamics software Fluent was used to simulate the fluid of the ACF in oil-water two-phase flow. To calibrate the simulation measurement of the ACF, a novel oil flow rate measurement method was further proposed. The models of the ACF were simulated to obtain and calibrate the oil flow rate under different total flow rates and oil cuts. Using the finite-element method, the structure of the seven conductance probes in the ACF was simulated, and the response values of the probes under oil-water segregated flow conditions were obtained. Experiments on oil-water segregated flow under different heights of oil accumulation in horizontal oil-water two-phase flow were carried out to calibrate the ACF. The validity of the oil flow rate measurement in horizontal oil-water two-phase flow was verified by simulation and experimental results.
Ramireddygari, S.R.; Sophocleous, M.A.; Koelliker, J.K.; Perkins, S.P.; Govindaraju, R.S.
2000-01-01
This paper presents the results of a comprehensive modeling study of surface and groundwater systems, including stream-aquifer interactions, for the Wet Walnut Creek Watershed in west-central Kansas. The main objective of this study was to assess the impacts of watershed structures and irrigation water use on streamflow and groundwater levels, which in turn affect availability of water for the Cheyenne Bottoms Wildlife Refuge Management area. The surface-water flow model, POTYLDR, and the groundwater flow model, MODFLOW, were combined into an integrated, watershed-scale, continuous simulation model. Major revisions and enhancements were made to the POTYLDR and MODFLOW models for simulating the detailed hydrologic budget for the Wet Walnut Creek Watershed. The computer simulation model was calibrated and verified using historical streamflow records (at Albert and Nekoma gaging stations), reported irrigation water use, observed water-level elevations in watershed structure pools, and groundwater levels in the alluvial aquifer system. To assess the impact of watershed structures and irrigation water use on streamflow and groundwater levels, a number of hypothetical management scenarios were simulated under various operational criteria for watershed structures and different annual limits on water use for irrigation. A standard 'base case' was defined to allow comparative analysis of the results of different scenarios. The simulated streamflows showed that watershed structures decrease both streamflows and groundwater levels in the watershed. The amount of water used for irrigation has a substantial effect on the total simulated streamflow and groundwater levels, indicating that irrigation is a major budget item for managing water resources in the watershed. 
© 2000 Elsevier Science B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
To, Albert C.; Liu, Wing Kam; Olson, Gregory B.; Belytschko, Ted; Chen, Wei; Shephard, Mark S.; Chung, Yip-Wah; Ghanem, Roger; Voorhees, Peter W.; Seidman, David N.; Wolverton, Chris; Chen, J. S.; Moran, Brian; Freeman, Arthur J.; Tian, Rong; Luo, Xiaojuan; Lautenschlager, Eric; Challoner, A. Dorian
2008-09-01
Microsystems have become an integral part of our lives and can be found in homeland security, medical science, aerospace applications, and beyond. Many critical microsystem applications are in harsh environments, in which long-term reliability needs to be guaranteed and repair is not feasible. For example, gyroscope microsystems on satellites need to function for over 20 years under severe radiation, thermal cycling, and shock loading. Hence, predictive-science-based, verified and validated computational models and algorithms to predict the performance and materials integrity of microsystems in these situations are needed. Confidence in these predictions is improved by quantifying uncertainties and approximation errors. With no full-system testing and limited subsystem testing, petascale computing is certainly necessary to span both time and space scales and to reduce the uncertainty in the prediction of long-term reliability. This paper presents the necessary steps to develop a predictive-science-based multiscale modeling and simulation system. The development of this system will be focused on the prediction of the long-term performance of a gyroscope microsystem. The environmental effects to be considered include radiation, thermo-mechanical cycling, and shock. Since there will be many material performance issues, attention is restricted to creep resulting from thermal aging and radiation-enhanced mass diffusion, material instability due to radiation and thermo-mechanical cycling, and damage and fracture due to shock. To meet these challenges, we aim to develop an integrated multiscale software analysis system that spans length scales from the atomistic scale to the scale of the device. The proposed software system will include molecular mechanics, phase field evolution, micromechanics, and continuum mechanics software, together with state-of-the-art model identification strategies in which atomistic properties are calibrated by quantum calculations.
We aim to predict the long-term (in excess of 20 years) integrity of the resonator, electrode base, multilayer metallic bonding pads, and vacuum seals in a prescribed mission. Although multiscale simulations are efficient in the sense that they focus the most computationally intensive models and methods on only the portions of the space-time domain needed, the execution of the multiscale simulations associated with evaluating materials and device integrity for aerospace microsystems will require the application of petascale computing. A component-based software strategy will be used in the development of our massively parallel multiscale simulation system. This approach will allow us to take full advantage of existing single-scale modeling components. An extensive, pervasive thrust in the software system development is verification, validation, and uncertainty quantification (UQ). Each component and the integrated software system need to be carefully verified. A UQ methodology that determines the quality of predictive information available from experimental measurements and packages the information in a form suitable for UQ at various scales needs to be developed. Experiments to validate the model at the nanoscale, microscale, and macroscale are proposed. The development of a petascale predictive-science-based multiscale modeling and simulation system will advance the field of predictive multiscale science so that it can be used to reliably analyze problems of unprecedented complexity, where limited testing resources can be adequately replaced by petascale computational power and advanced verification, validation, and UQ methodologies.
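As a minimal illustration of the forward uncertainty-propagation step described above, the sketch below pushes samples of uncertain material parameters through a toy Arrhenius-type creep law and summarizes the output spread. The creep law, parameter values, and distributions are illustrative placeholders, not the project's models.

```python
import math
import random
import statistics

def creep_rate(stress, temperature, A, Q):
    """Toy Arrhenius-type creep law (illustrative only):
    rate = A * stress^3 * exp(-Q / (R*T))."""
    R = 8.314  # gas constant, J/(mol*K)
    return A * stress**3 * math.exp(-Q / (R * temperature))

def propagate_uncertainty(n_samples=5000, seed=1):
    """Monte Carlo forward propagation: sample uncertain inputs,
    push each sample through the model, summarize the output spread."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        A = rng.gauss(1e-20, 1e-21)   # uncertain prefactor (made-up units)
        Q = rng.gauss(2.5e5, 1e4)     # uncertain activation energy, J/mol
        outputs.append(creep_rate(stress=100e6, temperature=450.0, A=A, Q=Q))
    return statistics.mean(outputs), statistics.stdev(outputs)
```

In a real petascale workflow each sample would be a full multiscale simulation rather than a one-line formula; the sampling-and-summary structure is the same.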
A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extravehicular activity equipment. A new version (3DHZETRN), capable of transporting high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system OLTARIS for shield design and validation, and it provides a basis for personal computer software capable of space shield analysis and optimization.
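The practice of checking a deterministic transport solution against Monte Carlo can be illustrated on a drastically simplified problem: uncollided escape from the center of a uniform sphere, where the deterministic answer is a closed-form exponential. This is a toy verification in the same spirit as the paper's comparison, not the 3DHZETRN physics.

```python
import math
import random

def transmitted_fraction_analytic(radius, mfp):
    """Fraction of particles emitted at the center of a uniform sphere that
    escape without interacting: exp(-R/lambda) for chord length R."""
    return math.exp(-radius / mfp)

def transmitted_fraction_mc(radius, mfp, n=200_000, seed=11):
    """Monte Carlo check: sample exponential free paths and count escapes."""
    rng = random.Random(seed)
    escapes = sum(1 for _ in range(n)
                  if rng.expovariate(1.0 / mfp) > radius)
    return escapes / n
```

Agreement within the Monte Carlo statistical error (here roughly 1/sqrt(n)) is the acceptance criterion, mirroring how deterministic transport codes are verified against stochastic ones.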
NASA Astrophysics Data System (ADS)
Zhang, Ju; Jackson, Thomas; Balachandar, Sivaramakrishnan
2015-06-01
We will develop a computational model built upon our verified and validated in-house SDT code to provide an improved description of multiphase blast wave dynamics in which solid particles are considered deformable and can even undergo phase transitions. Our SDT computational framework includes a reactive compressible flow solver with sophisticated material-interface tracking capability and realistic equations of state (EOS), such as the Mie-Grüneisen EOS, for multiphase flow modeling. The behavior of the diffuse interface models of Shukla et al. (2010) and Tiwari et al. (2013) at different shock impedance ratios will first be examined and characterized. The recent constrained interface reinitialization by Shukla (2014) will then be developed to examine whether the conservation properties can be improved. This work was supported in part by the U.S. Department of Energy and by the Defense Threat Reduction Agency.
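For reference, one common form of the Mie-Grüneisen EOS referenced to a linear shock Hugoniot (Us = c0 + s·up) can be sketched as below; the default parameters are generic placeholders, not values from the SDT code.

```python
def mie_gruneisen_pressure(rho, e, rho0=1.905e3, c0=3.94e3, s=1.49, gamma0=1.11):
    """Pressure from a Mie-Gruneisen EOS referenced to the shock Hugoniot.
    rho  : current density [kg/m^3]
    e    : specific internal energy [J/kg]
    Defaults are illustrative placeholders for a generic solid."""
    eta = 1.0 - rho0 / rho  # compression measure; 0 at the reference state
    # Hugoniot reference pressure for a linear Us-up relation
    p_h = rho0 * c0**2 * eta / (1.0 - s * eta)**2
    # Mie-Gruneisen correction off the Hugoniot (Gamma*rho taken constant)
    return p_h * (1.0 - 0.5 * gamma0 * eta) + gamma0 * rho0 * e
```

At the reference state (rho = rho0, e = 0) the pressure vanishes, and compression (rho > rho0) yields positive pressure, which is a quick consistency check on the form.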
Cryptanalysis and security enhancement of optical cryptography based on computational ghost imaging
NASA Astrophysics Data System (ADS)
Yuan, Sheng; Yao, Jianbin; Liu, Xuemei; Zhou, Xin; Li, Zhongyang
2016-04-01
Optical cryptography based on computational ghost imaging (CGI) has attracted much attention from researchers because it encrypts plaintext into a random intensity vector rather than a complex-valued function. This promising feature of CGI-based cryptography reduces the amount of data to be transmitted and stored and therefore brings convenience in practice. However, we find that this cryptography is vulnerable to chosen-plaintext attack because of the linear relationship between the input and output of the encryption system, and we propose three feasible attack strategies in this paper. Although a large number of plaintexts must be chosen in these attacks, their feasibility means that this cryptography still carries security risks. To resist these attacks, a security enhancement method utilizing an invertible matrix modulation is further discussed, and its feasibility is verified by numerical simulations.
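The vulnerability the authors exploit — a linear map from plaintext to ciphertext — can be illustrated abstractly: querying the encryption oracle with the standard basis vectors reveals the key matrix column by column. The sketch below models the linearity only, not the optical CGI system itself.

```python
import random

def encrypt(key_matrix, plaintext):
    """Linear 'encryption': ciphertext = K @ plaintext (a stand-in for the
    linear CGI measurement process, not the paper's exact optical model)."""
    n = len(plaintext)
    return [sum(key_matrix[i][j] * plaintext[j] for j in range(n))
            for i in range(len(key_matrix))]

def chosen_plaintext_attack(oracle, n):
    """Recover the full key matrix by querying the encryption oracle with
    the n standard basis vectors: column j of K is E(e_j)."""
    columns = []
    for j in range(n):
        e_j = [1 if k == j else 0 for k in range(n)]
        columns.append(oracle(e_j))
    # transpose the collected columns back into row-major form
    return [[columns[j][i] for j in range(n)] for i in range(n)]
```

Once the key matrix is known, any intercepted ciphertext can be decrypted by solving the linear system, which is why the enhancement must break the system's linearity.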
Applications of intelligent computer-aided training
NASA Technical Reports Server (NTRS)
Loftin, R. B.; Savely, Robert T.
1991-01-01
Intelligent computer-aided training (ICAT) systems simulate the behavior of an experienced instructor observing a trainee, responding to help requests, diagnosing and remedying trainee errors, and proposing challenging new training scenarios. This paper presents a generic ICAT architecture that supports the efficient development of ICAT systems for varied tasks. In addition, details of ICAT projects, built with this architecture, that deliver specific training for Space Shuttle crew members, ground support personnel, and flight controllers are presented. Concurrently with the creation of specific ICAT applications, a general-purpose software development environment for ICAT systems is being built. The widespread use of such systems for both ground-based and on-orbit training will serve to preserve task and training expertise, support the training of large numbers of personnel in a distributed manner, and ensure the uniformity and verifiability of training experiences.
Tomography and generative training with quantum Boltzmann machines
NASA Astrophysics Data System (ADS)
Kieferová, Mária; Wiebe, Nathan
2017-12-01
The promise of quantum neural nets, which utilize quantum effects to model complex data sets, has made their development an aspirational goal for quantum machine learning and quantum computing in general. Here we provide methods of training quantum Boltzmann machines. Our work generalizes existing methods and provides additional approaches for training quantum neural networks that compare favorably to existing methods. We further demonstrate that quantum Boltzmann machines enable a form of partial quantum state tomography that further provides a generative model for the input quantum state. Classical Boltzmann machines are incapable of this. This verifies the long-conjectured connection between tomography and quantum machine learning. Finally, we prove that classical computers cannot simulate our training process in general unless BQP = BPP, provide lower bounds on the complexity of the training procedures, and numerically investigate training for small nonstoquastic Hamiltonians.
Addition of simultaneous heat and solute transport and variable fluid viscosity to SEAWAT
Thorne, D.; Langevin, C.D.; Sukop, M.C.
2006-01-01
SEAWAT is a finite-difference computer code designed to simulate coupled variable-density ground water flow and solute transport. This paper describes a new version of SEAWAT that adds the ability to simultaneously model energy and solute transport. This is necessary for simulating the transport of heat and salinity in coastal aquifers, for example. This work extends the equation of state for fluid density to vary as a function of temperature and/or solute concentration. The program has also been modified to represent the effects of variable fluid viscosity as a function of temperature and/or concentration. The viscosity mechanism is verified against an analytical solution, and a test of temperature-dependent viscosity is provided. Finally, the classic Henry-Hilleke problem is solved with the new code. © 2006 Elsevier Ltd. All rights reserved.
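Empirical temperature-viscosity and linearized density relations of the kind incorporated into such codes can be sketched as follows; the coefficients below are commonly used illustrative values and may differ from those in this SEAWAT version.

```python
def viscosity(temperature_c, concentration=0.0):
    """Dynamic viscosity [Pa*s] as a function of temperature [deg C] and
    solute concentration [kg/m^3], using empirical forms commonly paired
    with variable-density codes (coefficients are illustrative defaults)."""
    mu_t = 2.394e-5 * 10.0 ** (248.37 / (temperature_c + 133.15))
    return mu_t + 1.923e-6 * concentration

def density(temperature_c, concentration=0.0,
            rho0=1000.0, t0=25.0, drho_dc=0.7143, drho_dt=-0.375):
    """Linearized equation of state: density varies with both concentration
    and temperature, as in the extended SEAWAT formulation."""
    return rho0 + drho_dc * concentration + drho_dt * (temperature_c - t0)
```

At 20 deg C the temperature term reproduces the familiar value of about 1.0e-3 Pa·s for fresh water, which serves as a sanity check on the empirical fit.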
NASA Astrophysics Data System (ADS)
Ghosh, B.; Hazra, S.; Haldar, N.; Roy, D.; Patra, S. N.; Swarnakar, J.; Sarkar, P. P.; Mukhopadhyay, S.
2018-03-01
Over the last few decades, optics has demonstrated strong potential for parallel logic, arithmetic, and algebraic operations owing to its very high speed in communication and computation. Many different logical and sequential operations using the all-optical frequency-encoding technique have been proposed by several authors. Here, we have adopted the all-optical dibit representation technique, which has the advantages of high-speed operation as well as reduced bit-error problems. Exploiting this approach, we have proposed all-optical frequency-encoded dibit-based XOR and XNOR logic gates using optical switches such as the add/drop multiplexer (ADM) and the reflective semiconductor optical amplifier (RSOA). The operations of these gates have also been verified through proper simulation using MATLAB (R2008a).
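At the logic level, the dibit idea can be sketched as follows, assuming (purely as an illustration) that logical 0 and 1 are encoded as complementary channel pairs, so that the invalid pairs (0,0) and (1,1) expose transmission errors; the encoding convention here is ours, not necessarily the paper's.

```python
DIBIT = {0: (1, 0), 1: (0, 1)}  # assumed encoding: 0 -> (ch1 on, ch2 off)

def decode(dibit):
    """Decode a dibit pair; (0,0) and (1,1) are invalid and expose errors."""
    if dibit == (1, 0):
        return 0
    if dibit == (0, 1):
        return 1
    raise ValueError("invalid dibit %r: transmission error detected" % (dibit,))

def xor_gate(a, b):
    """All-optical XOR modeled at the logic level: decode, XOR, re-encode."""
    return DIBIT[decode(a) ^ decode(b)]

def xnor_gate(a, b):
    """XNOR is the complement of XOR in the same dibit encoding."""
    return DIBIT[1 - (decode(a) ^ decode(b))]
```

The error-detection property is what the abstract refers to as "reducing the bit error problem": a corrupted channel produces an invalid pair instead of silently flipping a bit.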
Planck 2015 results: XII. Full focal plane simulations
Ade, P. A. R.; Aghanim, N.; Arnaud, M.; ...
2016-09-20
In this paper, we present the 8th full focal plane simulation set (FFP8), deployed in support of the Planck 2015 results. FFP8 consists of 10 fiducial mission realizations reduced to 18,144 maps, together with the most massive suite of Monte Carlo realizations of instrument noise and CMB ever generated, comprising 10^4 mission realizations reduced to about 10^6 maps. The resulting maps incorporate the dominant instrumental, scanning, and data analysis effects; the remaining subdominant effects will be included in future updates. Generated at a cost of some 25 million CPU-hours spread across multiple high-performance-computing (HPC) platforms, FFP8 is used to validate and verify analysis algorithms and their implementations, and to remove biases from and quantify uncertainties in the results of analyses of the real data.
NASA Technical Reports Server (NTRS)
Credeur, Leonard; Houck, Jacob A.; Capron, William R.; Lohr, Gary W.
1990-01-01
A description and results are presented of a study to measure the performance and reaction of airline flight crews, in a full-workload DC-9 cockpit, flying in a real-time simulation of an air traffic control (ATC) concept called Traffic Intelligence for the Management of Efficient Runway-scheduling (TIMER). The experimental objectives were to verify earlier fast-time TIMER time-delivery precision results and to obtain data for the validation or refinement of existing computer models of pilot/airborne performance. Experimental data indicated a runway-threshold interarrival-time-error standard deviation in the range of 10.4 to 14.1 seconds. Other real-time system performance parameters measured include approach speeds, response time to controller turn instructions, bank angles employed, and ATC controller message delivery-time errors.
Witnessing eigenstates for quantum simulation of Hamiltonian spectra
Santagati, Raffaele; Wang, Jianwei; Gentile, Antonio A.; Paesani, Stefano; Wiebe, Nathan; McClean, Jarrod R.; Morley-Short, Sam; Shadbolt, Peter J.; Bonneau, Damien; Silverstone, Joshua W.; Tew, David P.; Zhou, Xiaoqi; O’Brien, Jeremy L.; Thompson, Mark G.
2018-01-01
The efficient calculation of Hamiltonian spectra, a problem often intractable on classical machines, can find application in many fields, from physics to chemistry. We introduce the concept of an “eigenstate witness” and, through it, provide a new quantum approach that combines variational methods and phase estimation to approximate eigenvalues for both ground and excited states. This protocol is experimentally verified on a programmable silicon quantum photonic chip, a mass-manufacturable platform, which embeds entangled state generation, arbitrary controlled unitary operations, and projective measurements. Both ground and excited states are experimentally found with fidelities >99%, and their eigenvalues are estimated with 32 bits of precision. We also investigate and discuss the scalability of the approach and study its performance through numerical simulations of more complex Hamiltonians. This result shows promising progress toward quantum chemistry on quantum computers. PMID:29387796
Frequency-domain-independent vector analysis for mode-division multiplexed transmission
NASA Astrophysics Data System (ADS)
Liu, Yunhe; Hu, Guijun; Li, Jiao
2018-04-01
In this paper, we propose a demultiplexing method based on the frequency-domain independent vector analysis (FD-IVA) algorithm for mode-division multiplexing (MDM) systems. FD-IVA extends frequency-domain independent component analysis (FD-ICA) from univariate to multivariate variables and provides an efficient way to eliminate the permutation ambiguity. In order to verify the performance of the FD-IVA algorithm, a 6×6 MDM system is simulated. The simulation results show that the FD-IVA algorithm has essentially the same bit-error-rate (BER) performance as the FD-ICA algorithm and the frequency-domain least mean squares (FD-LMS) algorithm. Meanwhile, the convergence speed of FD-IVA is the same as that of FD-ICA. However, compared with FD-ICA and FD-LMS, FD-IVA has a markedly lower computational complexity.
End-to-End QoS for Differentiated Services and ATM Internetworking
NASA Technical Reports Server (NTRS)
Su, Hongjun; Atiquzzaman, Mohammed
2001-01-01
The Internet was initially designed for non-real-time data communications and hence does not provide any Quality of Service (QoS). The next generation Internet will be characterized by high speed and QoS guarantees. The aim of this paper is to develop a prioritized early packet discard (PEPD) scheme for ATM switches to provide service differentiation and QoS guarantees to end applications running over the next generation Internet. The proposed PEPD scheme differs from previous schemes by taking into account the priority of packets generated from different applications. We develop a Markov chain model for the proposed scheme and verify the model with simulation. Numerical results show that the results from the model and the computer simulation are in close agreement. Our PEPD scheme provides service differentiation to end-to-end applications.
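A slot-level simulation of the prioritized-discard idea — low-priority packets discarded early once the buffer reaches a threshold, high-priority packets only when the buffer is completely full — can be sketched as below. This is an illustrative model with made-up parameters, not the paper's Markov chain.

```python
import random

def simulate_pepd(n_slots=20000, capacity=20, threshold=12,
                  p_arrival=0.6, p_high=0.5, p_service=0.5, seed=3):
    """Slot-level simulation of a prioritized early packet discard queue.
    Returns the per-class loss ratios (illustrative model only)."""
    rng = random.Random(seed)
    queue = 0
    arrivals = {"high": 0, "low": 0}
    drops = {"high": 0, "low": 0}
    for _ in range(n_slots):
        if rng.random() < p_arrival:                    # Bernoulli arrival
            cls = "high" if rng.random() < p_high else "low"
            arrivals[cls] += 1
            # low priority is discarded early at `threshold`,
            # high priority only when the buffer is full
            limit = capacity if cls == "high" else threshold
            if queue < limit:
                queue += 1
            else:
                drops[cls] += 1
        if queue > 0 and rng.random() < p_service:      # Bernoulli service
            queue -= 1
    return {c: drops[c] / max(arrivals[c], 1) for c in drops}
```

Under overload the low-priority loss ratio exceeds the high-priority one, which is exactly the service differentiation the PEPD scheme is designed to deliver.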
Type Theory, Computation and Interactive Theorem Proving
2015-09-01
…postdoc Cody Roux, to develop new methods of verifying real-valued inequalities automatically. They developed a prototype implementation in Python [8] (an… …he has developed new heuristic, geometric methods of verifying real-valued inequalities. A Python-based implementation has performed surprisingly… …express complex mathematical and computational assertions. In this project, Avigad and Harper developed type-theoretic algorithms and formalisms that…
An Evolutionary Optimization of the Refueling Simulation for a CANDU Reactor
NASA Astrophysics Data System (ADS)
Do, Q. B.; Choi, H.; Roh, G. H.
2006-10-01
This paper presents a multi-cycle, multi-objective optimization method for the refueling simulation of a 713 MWe Canada deuterium uranium (CANDU-6) reactor based on a genetic algorithm, an elitism strategy, and a heuristic rule. The proposed algorithm searches for optimal single-cycle refueling patterns that maximize the average discharge burnup, minimize the maximum channel power, and minimize the change in the zone controller unit water fills while satisfying the most important safety-related neutronic parameters of the reactor core. The heuristic rule generates an initial population of individuals very close to a feasible solution, which reduces the computing time of the optimization process. The multi-cycle optimization is carried out based on single-cycle refueling simulations. The proposed approach was verified by a refueling simulation of a natural uranium CANDU-6 reactor over an operation period of 6 months at an equilibrium state and compared with the experience-based automatic refueling simulation and the generalized perturbation theory. The comparison showed that the simulation results are consistent with each other and that the proposed approach is a reasonable optimization method for the refueling simulation, controlling all the safety-related parameters of the reactor core during the simulation.
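The GA-with-elitism machinery can be sketched generically; in the sketch below a trivial bit-counting fitness stands in for the real multi-objective burnup/channel-power evaluation, which would require a core physics code.

```python
import random

def genetic_search(fitness, genome_len, pop_size=40, generations=120,
                   elite=2, p_mut=0.02, seed=5):
    """Generic GA with elitism (a toy stand-in for the refueling optimizer):
    tournament selection, one-point crossover, bit-flip mutation, and the
    best `elite` individuals copied unchanged into the next generation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = [list(ind) for ind in pop[:elite]]          # elitism
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)      # tournament select
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]                     # one-point crossover
            child = [g ^ 1 if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

The paper's heuristic rule corresponds to replacing the random initial population here with individuals seeded near a feasible refueling pattern, which shortens the search.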
Bethge, Anja; Schumacher, Udo
2017-01-01
Background: Tumor vasculature is critical for tumor growth, the formation of distant metastases, and the efficiency of radio- and chemotherapy treatments. However, how the vasculature itself is affected during cancer treatment with regard to metastatic behavior has not been thoroughly investigated. Therefore, the aim of this study was to analyze the influence of hypofractionated radiotherapy and cisplatin chemotherapy on vessel tree geometry and metastasis formation in a small cell lung cancer xenograft mouse tumor model, to investigate the spread of malignant cells under different treatment modalities. Methods: The biological data gained during these experiments were fed into our previously developed computer model “Cancer and Treatment Simulation Tool” (CaTSiT) to model the growth of the primary tumor and its metastatic deposits, and also the influence of different therapies. Furthermore, we performed quantitative histology analyses to verify our predictions in the xenograft mouse tumor model. Results: According to the computer simulation, the number of engrafting cells must vary considerably to explain the different weights of the primary tumor at the end of the experiment. Once a primary tumor is established, the fractal dimension of its vasculature correlates with the tumor size. Furthermore, the fractal dimension of the tumor vasculature changes during treatment, indicating that the therapy affects the blood vessels’ geometry. We corroborated these findings with a quantitative histological analysis showing that the blood vessel density is depleted during radiotherapy and cisplatin chemotherapy. The CaTSiT computer model reveals that chemotherapy influences the tumor’s therapeutic susceptibility and its metastatic spreading behavior. 
Conclusion: Using a systems biology approach in combination with xenograft models and computer simulations revealed that the use of chemotherapy and radiation therapy determines the spreading behavior by changing the blood vessel geometry of the primary tumor. PMID:29107953
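The fractal dimension of a vessel tree is typically estimated by box counting; below is a minimal sketch for 2-D point sets (our construction, not CaTSiT's implementation): cover the set with grids of increasing resolution and fit the slope of log N(s) versus log s.

```python
import math

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of a 2-D point set in the unit square
    by box counting: N(s) occupied boxes at grid resolution s, dimension is
    the least-squares slope of log N(s) against log s."""
    xs, ys = [], []
    for s in scales:
        boxes = {(int(x * s), int(y * s)) for x, y in points}
        xs.append(math.log(s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A straight line yields a dimension near 1 and a filled region near 2; a real vessel tree falls in between, and it is shifts in this value that the study tracks under therapy.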
NASA Astrophysics Data System (ADS)
Wang, N.; Shen, Y.; Yang, D.; Bao, X.; Li, J.; Zhang, W.
2017-12-01
Accurate and efficient forward modeling methods are important for high-resolution full waveform inversion. Compared with the elastic case, solving the anelastic wave equation requires more computational time because of the need to compute additional material-independent anelastic functions. A numerical scheme with a large Courant-Friedrichs-Lewy (CFL) condition number enables us to use a large time step to simulate wave propagation, which improves computational efficiency. In this work, we apply the fourth-order strong stability preserving Runge-Kutta method with an optimal CFL coefficient to solve the anelastic wave equation. We use a fourth-order DRP/opt MacCormack scheme for the spatial discretization, and we approximate the rheological behavior of the Earth using the generalized Maxwell body model. With a larger CFL condition number, we find that the computational efficiency is significantly improved compared with the traditional fourth-order Runge-Kutta method. Then, we apply the scattering-integral method to calculate travel time and amplitude sensitivity kernels with respect to velocity and attenuation structures. For each source, we carry out one forward simulation and save the time-dependent strain tensor. For each station, we carry out three `backward' simulations for the three components and save the corresponding strain tensors. The sensitivity kernels at each point in the medium are the convolution of the two sets of strain tensors. Finally, we show several synthetic tests to verify the effectiveness of the strong stability preserving Runge-Kutta method in generating accurate synthetics in full waveform modeling, and in generating accurate strain tensors for calculating sensitivity kernels at regional and global scales.
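The paper uses a fourth-order SSP Runge-Kutta scheme; as a compact stand-in, the classic third-order SSPRK(3,3) of Shu and Osher applied to 1-D upwind advection illustrates the defining structure — convex combinations of forward-Euler stages, which inherit the stability bound of a single Euler step.

```python
def upwind_rhs(u, c, dx):
    """First-order upwind discretization of -c du/dx on a periodic grid (c > 0)."""
    return [-c * (u[i] - u[i - 1]) / dx for i in range(len(u))]

def euler(u, dt, c, dx):
    """One forward-Euler stage."""
    return [ui + dt * ri for ui, ri in zip(u, upwind_rhs(u, c, dx))]

def ssprk3_step(u, dt, c, dx):
    """Shu-Osher SSPRK(3,3): three forward-Euler stages combined convexly,
    preserving the strong stability (e.g. maximum principle) of Euler."""
    u1 = euler(u, dt, c, dx)
    u2 = [0.75 * a + 0.25 * b for a, b in zip(u, euler(u1, dt, c, dx))]
    return [a / 3.0 + 2.0 * b / 3.0 for a, b in zip(u, euler(u2, dt, c, dx))]
```

Because every stage is a convex combination of Euler updates, the scheme conserves mass exactly on a periodic grid and never overshoots the initial extrema at CFL numbers below the Euler limit — the property "strong stability preserving" names.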
NASA Astrophysics Data System (ADS)
Zahedi, Sulmaz
This study aims to demonstrate the feasibility of using Ultrasound-Guided High Intensity Focused Ultrasound (USg-HIFU) to create thermal lesions in neurosurgical applications, allowing for precise ablation of brain tissue while simultaneously providing real-time imaging. To test the feasibility of the system, an optically transparent, HIFU-compatible tissue-mimicking phantom model was produced. USg-HIFU was then used for ablation of the phantom, with and without targets. Finally, ex vivo lamb brain tissue was imaged and ablated using the USg-HIFU system. Real-time ultrasound images and videos obtained throughout the ablation process showed clear lesion formation at the focal point of the HIFU transducer. Post-ablation gross and histopathology examinations were conducted to verify thermal and mechanical damage in the ex vivo lamb brain tissue. Finally, thermocouple readings were obtained, and HIFU field computer simulations were conducted to verify the findings. The results of the study demonstrated the reproducibility of USg-HIFU thermal lesions for neurosurgical applications.
NASA Astrophysics Data System (ADS)
Chen, Biao; Jing, Zhenxue; Smith, Andrew P.; Parikh, Samir; Parisky, Yuri
2006-03-01
Dual-energy contrast-enhanced digital mammography (DE-CEDM), which is based upon the digital subtraction of low/high-energy image pairs acquired before/after the administration of contrast agents, may provide physicians with physiologic and morphologic information on breast lesions and help characterize their probability of malignancy. This paper proposes to use only one pair of post-contrast low/high-energy images to obtain digitally subtracted dual-energy contrast-enhanced images with an optimal weighting factor deduced from simulated characteristics of the imaging chain. Based upon our previous CEDM framework, quantitative characteristics of the materials and imaging components in the x-ray imaging chain, including the x-ray tube (tungsten) spectrum, filters, breast tissues/lesions, contrast agents (non-ionic iodine solution), and selenium detector, were systematically modeled. Using the base-material (polyethylene-PMMA) decomposition method based on entrance low/high-energy x-ray spectra and breast thickness, the optimal weighting factor was calculated to cancel the contrast between fatty and glandular tissues while enhancing the contrast of iodinated lesions. By contrast, previous work determined the optimal weighting factor through either a calibration step or the acquisition of a pre-contrast low/high-energy image pair. Computer simulations were conducted to determine weighting factors, lesion contrast signal values, and dose levels as functions of x-ray techniques and breast thicknesses. Phantom and clinical feasibility studies were performed on a modified Selenia full-field digital mammography system to verify the proposed method and the computer-simulated results. The resultant conclusions from the computer simulations and phantom/clinical feasibility studies will be used in the upcoming clinical study.
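The weighting-factor idea — choose w so that the subtracted signal S = ln I_high - w·ln I_low is insensitive to glandular/adipose composition while iodine remains visible — can be sketched with made-up monoenergetic attenuation coefficients (not calibrated values from the paper's spectral model).

```python
# Assumed linear attenuation coefficients [1/cm] at the low/high energies;
# placeholder numbers for illustration, not calibrated values.
MU = {
    "adipose":   {"low": 0.46, "high": 0.21},
    "glandular": {"low": 0.80, "high": 0.26},
    "iodine":    {"low": 4.00, "high": 9.00},  # high energy above the K-edge
}

def log_signal(energy, t_gland, t_total, t_iodine):
    """Log-transmission through a breast of total thickness t_total [cm],
    part glandular and part adipose, plus an iodinated lesion of t_iodine [cm]."""
    t_adip = t_total - t_gland
    return -(MU["adipose"][energy] * t_adip
             + MU["glandular"][energy] * t_gland
             + MU["iodine"][energy] * t_iodine)

def weighting_factor():
    """w that cancels glandular/adipose contrast in S = ln I_high - w*ln I_low."""
    d_low = MU["glandular"]["low"] - MU["adipose"]["low"]
    d_high = MU["glandular"]["high"] - MU["adipose"]["high"]
    return d_high / d_low

def subtracted_signal(t_gland, t_total, t_iodine):
    w = weighting_factor()
    return (log_signal("high", t_gland, t_total, t_iodine)
            - w * log_signal("low", t_gland, t_total, t_iodine))
```

With w chosen this way the subtracted signal is independent of the glandular fraction at fixed breast thickness, so any residual contrast is attributable to the iodinated lesion.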
Computational modeling of magnetic particle margination within blood flow through LAMMPS
NASA Astrophysics Data System (ADS)
Ye, Huilin; Shen, Zhiqiang; Li, Ying
2017-11-01
We develop a multiscale and multiphysics computational method to investigate the transport of magnetic particles as drug carriers in blood flow under influence of hydrodynamic interaction and external magnetic field. A hybrid coupling method is proposed to handle red blood cell (RBC)-fluid interface (CFI) and magnetic particle-fluid interface (PFI), respectively. Immersed boundary method (IBM)-based velocity coupling is used to account for CFI, which is validated by tank-treading and tumbling behaviors of a single RBC in simple shear flow. While PFI is captured by IBM-based force coupling, which is verified through movement of a single magnetic particle under non-uniform external magnetic field and breakup of a magnetic chain in rotating magnetic field. These two components are seamlessly integrated within the LAMMPS framework, which is a highly parallelized molecular dynamics solver. In addition, we also implement a parallelized lattice Boltzmann simulator within LAMMPS to handle the fluid flow simulation. Based on the proposed method, we explore the margination behaviors of magnetic particles and magnetic chains within blood flow. We find that the external magnetic field can be used to guide the motion of these magnetic materials and promote their margination to the vascular wall region. Moreover, the scaling performance and speedup test further confirm the high efficiency and robustness of proposed computational method. Therefore, it provides an efficient way to simulate the transport of nanoparticle-based drug carriers within blood flow in a large scale. The simulation results can be applied in the design of efficient drug delivery vehicles that optimally accumulate within diseased tissue, thus providing better imaging sensitivity, therapeutic efficacy and lower toxicity.
Scattered Dose Calculations and Measurements in a Life-Like Mouse Phantom
Welch, David; Turner, Leah; Speiser, Michael; Randers-Pehrson, Gerhard; Brenner, David J.
2017-01-01
Anatomically accurate phantoms are useful tools for radiation dosimetry studies. In this work, we demonstrate the construction of a new generation of life-like mouse phantoms in which the methods have been generalized to be applicable to the fabrication of any small animal. The mouse phantoms, with built-in density inhomogeneity, exhibit different scattering behavior dependent on where the radiation is delivered. Computer models of the mouse phantoms and a small animal irradiation platform were devised in Monte Carlo N-Particle code (MCNP). A baseline test replicating the irradiation system in a computational model shows minimal differences from experimental results from 50 Gy down to 0.1 Gy. We observe excellent agreement between scattered dose measurements and simulation results from X-ray irradiations focused at either the lung or the abdomen within our phantoms. This study demonstrates the utility of our mouse phantoms as measurement tools with the goal of using our phantoms to verify complex computational models. PMID:28140787
X-38 Experimental Control Laws
NASA Technical Reports Server (NTRS)
Munday, Steve; Estes, Jay; Bordano, Aldo J.
2000-01-01
X-38 is a NASA JSC/DFRC experimental flight test program developing a series of prototypes for an International Space Station (ISS) Crew Return Vehicle, often called an ISS "lifeboat." X-38 Vehicle 132 Free Flight 3, currently scheduled for the end of this month, will be the first flight test of a modern FCS architecture called Multi-Application Control-Honeywell (MACH), originally developed by the Honeywell Technology Center. MACH wraps classical P&I outer attitude loops around a modern dynamic inversion attitude rate loop. The dynamic inversion process requires that the flight computer have an onboard aircraft model of expected vehicle dynamics based upon the aerodynamic database. Dynamic inversion is computationally intensive, so some timing modifications were made to implement MACH on the slower flight computers of the subsonic test vehicles. In addition to linear stability margin analyses and high-fidelity 6-DOF simulation, hardware-in-the-loop testing is used to verify the implementation of MACH and its robustness to aerodynamic and environmental uncertainties and disturbances.
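Dynamic inversion itself can be illustrated on a toy single-axis rate loop: the controller cancels its onboard model of the plant dynamics so that the closed loop behaves like a commanded first-order response. The plant model, gains, and perfect-model assumption below are invented for illustration and are not the MACH design.

```python
def simulate_rate_loop(omega_cmd=0.2, steps=2000, dt=0.005,
                       a=0.8, g=2.0, k_rate=6.0):
    """Toy single-axis dynamic-inversion rate loop.
    Plant: omega_dot = f(omega) + g*u with f(omega) = -a*omega*|omega|.
    The controller inverts its model (assumed perfect here), so the closed
    loop behaves like omega_dot = k_rate*(omega_cmd - omega)."""
    omega = 0.0
    history = []
    for _ in range(steps):
        nu = k_rate * (omega_cmd - omega)             # desired rate derivative
        f_hat = -a * omega * abs(omega)               # onboard plant model
        u = (nu - f_hat) / g                          # dynamic inversion law
        omega_dot = -a * omega * abs(omega) + g * u   # true plant response
        omega += dt * omega_dot                       # explicit Euler step
        history.append(omega)
    return history
```

The cost of this cancellation is exactly what the abstract notes: the inversion must evaluate the aerodynamic model every control cycle, which is what made MACH computationally demanding on the slower subsonic-vehicle flight computers.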
Computational analysis of unmanned aerial vehicle (UAV)
NASA Astrophysics Data System (ADS)
Abudarag, Sakhr; Yagoub, Rashid; Elfatih, Hassan; Filipovic, Zoran
2017-01-01
A computational analysis has been performed to verify the aerodynamic properties of an Unmanned Aerial Vehicle (UAV). The UAV-SUST was designed and fabricated at the Department of Aeronautical Engineering at Sudan University of Science and Technology in order to meet the specifications required for surveillance and reconnaissance missions. It is classified as a medium-range, medium-endurance UAV. A commercial CFD solver is used to simulate the steady and unsteady aerodynamic characteristics of the entire UAV. In addition to the Lift Coefficient (CL), Drag Coefficient (CD), Pitching Moment Coefficient (CM) and Yawing Moment Coefficient (CN), the pressure and velocity contours are illustrated. The aerodynamic parameters show very good agreement with the design considerations at angles of attack ranging from zero to 26 degrees. Moreover, the visualization of the velocity field and static pressure contours indicates satisfactory agreement with the proposed design. Turbulence is predicted with the enhanced k-ω SST turbulence model within the computational fluid dynamics code.
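The force and moment coefficients named above are conventional non-dimensionalizations; a sketch of how a CFD post-processor computes them (the numerical values are illustrative and are not the UAV-SUST data):

```python
def aero_coefficients(lift, drag, moment, rho, V, S, c):
    """Non-dimensionalize forces and a moment: q is the dynamic pressure,
    S the reference area, c the reference chord."""
    q = 0.5 * rho * V**2
    CL = lift / (q * S)
    CD = drag / (q * S)
    CM = moment / (q * S * c)
    return CL, CD, CM

# Illustrative sea-level numbers for a small UAV (invented, not UAV-SUST data)
CL, CD, CM = aero_coefficients(lift=1200.0, drag=95.0, moment=-40.0,
                               rho=1.225, V=30.0, S=2.5, c=0.4)
```

The same division by q, S (and c for moments) is applied at every angle of attack to build the CL-alpha and drag-polar curves the abstract refers to.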
NASA Technical Reports Server (NTRS)
Kyle, R. G.
1972-01-01
Information transfer between the operator and computer-generated display systems is an area where the human factors engineer discovers little useful design data relating human performance to system effectiveness. This study utilized a computer-driven, cathode-ray-tube graphic display to quantify human response speed in a sequential information processing task. The performance criterion was response time to the sixteen cell elements of a square matrix display. A stimulus signal instruction specified selected cell locations by both row and column identification. An equally probable number code, from one to four, was assigned at random to the sixteen cells of the matrix and correspondingly required one of four matched keyed-response alternatives. The display format corresponded to a sequence of diagnostic system maintenance events that enabled the operator to verify prime system status, engage backup redundancy for failed subsystem components, and exercise alternate decision-making judgements. The experimental task bypassed the skilled decision-making element and computer processing time in order to determine a lower bound on the basic response speed for a given stimulus/response hardware arrangement.
NASA Astrophysics Data System (ADS)
Omura, Masaaki; Yoshida, Kenji; Akita, Shinsuke; Yamaguchi, Tadashi
2018-07-01
We aim to develop an ultrasonic tissue characterization method for the follow-up of healing ulcers by diagnosing collagen fiber properties. In this paper, we demonstrated a computer simulation with simulation phantoms reflecting irregularly distributed collagen fibers to evaluate the relationship between physical properties, such as number density and periodicity, and the estimated characteristics of the echo amplitude envelope using the homodyned-K distribution. Moreover, the consistency between echo signal characteristics and the structures of ex vivo human tissues was verified from measured data of normal skin and nonhealed ulcers. In the simulation study, speckle or coherent signal characteristics are identified with periodically or uniformly distributed collagen fibers of high number density and high periodicity. This result shows the effectiveness of the analysis using the homodyned-K distribution for tissues with complicated structures. Normal skin analysis results are characterized as including speckle or low-coherence signal components, and nonhealed ulcers differ from normal skin with respect to the physical properties of collagen fibers.
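The distinction between fully developed speckle and a coherent component from periodically arranged scatterers can be illustrated with a toy random-phasor simulation. This is not the homodyned-K estimator used in the paper; the envelope SNR used below is a much simpler statistic, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def envelope(n_scatterers, coherent=0.0, n_samples=20000):
    """Echo amplitude envelope from a unit-power random-phasor sum (diffuse
    scatterers) plus an optional coherent component (periodic structure)."""
    phases = rng.uniform(0, 2 * np.pi, size=(n_samples, n_scatterers))
    field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_scatterers)
    return np.abs(field + coherent)

def envelope_snr(a):
    """Mean/std of the envelope: about 1.91 for fully developed (Rayleigh)
    speckle, and larger when a coherent component is present."""
    return a.mean() / a.std()

snr_diffuse = envelope_snr(envelope(50))                 # dense random scatterers
snr_coherent = envelope_snr(envelope(50, coherent=3.0))  # plus coherent echo
```

The homodyned-K model generalizes this picture by also letting the effective scatterer number density vary, which is what allows the paper to separate number density from periodicity.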
Design and implementation of an air-conditioning system with storage tank for load shifting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Y.Y.; Wu, C.J.; Liou, K.L.
1987-11-01
The experience with the design, simulation and implementation of an air-conditioning system with a chilled water storage tank is presented in this paper. The system is used to shift the air-conditioning load of residential and commercial buildings from the on-peak to the off-peak period. Demand-side load management can thus be achieved if many buildings are equipped with such storage devices. In the design of this system, a lumped-parameter circuit model is first employed to simulate the heat transfer within the air-conditioned building so that the required capacity of the storage tank can be determined. Then, a set of desirable parameters for the temperature controller of the system is determined using the parameter plane method and the root locus method. The validity of the proposed mathematical model and design approach is verified by comparing the results obtained from field tests with those from the computer simulations. A cost-benefit analysis of the system is also discussed.
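A lumped-parameter model of the kind described can be sketched as a single thermal RC node: the building's thermal mass is one capacitance, the envelope one resistance, and the chilled-water discharge a scheduled heat sink. The resistance, capacitance, schedules and cooling power below are invented for illustration and are not the paper's building parameters.

```python
import numpy as np

def simulate_building(hours=48, dt=60.0, R=0.005, C=8.0e6,
                      cooling_kw=0.0, cooling_hours=range(9, 18)):
    """One-node lumped-parameter model: C*dT/dt = (T_out - T)/R - Q_cool,
    with R in K/W and C in J/K. Cooling is applied only during the
    scheduled occupied hours (the storage-tank discharge window)."""
    n = int(hours * 3600 / dt)
    T = 26.0
    temps = []
    for k in range(n):
        t_hour = (k * dt / 3600.0) % 24
        T_out = 30.0 + 5.0 * np.sin(2 * np.pi * (t_hour - 9) / 24)  # daily swing
        Q = cooling_kw * 1e3 if int(t_hour) in cooling_hours else 0.0
        T += dt * ((T_out - T) / R - Q) / C
        temps.append(T)
    return np.array(temps)

free_float = simulate_building()               # no cooling
cooled = simulate_building(cooling_kw=2.0)     # chilled-water discharge 9-18 h
```

With R*C of roughly eleven hours, the indoor node low-passes the outdoor swing, which is the behaviour the paper's circuit model exploits when sizing the storage tank.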
Prediction of pilot reserve attention capacity during air-to-air target tracking
NASA Technical Reports Server (NTRS)
Onstott, E. D.; Faulkner, W. H.
1977-01-01
The reserve attention capacity of a pilot was calculated using a pilot model that allocates exclusive model attention according to the ranking of task urgency functions whose variables are tracking error and error rate. The modeled task consisted of tracking a maneuvering target aircraft both vertically and horizontally and, when possible, performing a diverting side task, which was simulated by the precise positioning of an electrical stylus and modeled as a task of constant urgency in the attention allocation algorithm. The urgency of the single-loop vertical task is simply the magnitude of the vertical tracking error, while the multiloop horizontal task requires a nonlinear urgency measure built from error and error-rate terms. Comparison of model results with flight simulation data verified the computed model statistics of tracking error on both axes, lateral and longitudinal stick amplitude and rate, and side-task episodes. Full data for the simulation tracking statistics, as well as the explicit equations and structure of the urgency-function multiaxis pilot model, are presented.
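The attention-allocation rule can be sketched directly from the description above. The specific form of the horizontal urgency measure, its weight, and the side-task urgency constant are invented here, since the abstract gives only their qualitative structure:

```python
def urgency_vertical(err):
    """Single-loop vertical task: urgency is the tracking-error magnitude."""
    return abs(err)

def urgency_horizontal(err, err_rate, w=2.0):
    """Multiloop horizontal task: a combined error / error-rate measure
    (the true nonlinear form is in the paper; this one is illustrative)."""
    return abs(err) + w * abs(err_rate)

def allocate(v_err, h_err, h_err_rate, side_task_urgency=0.3):
    """Exclusive attention goes to the task of highest urgency; the side
    task carries constant urgency, as in the model described above."""
    urgencies = {
        "vertical": urgency_vertical(v_err),
        "horizontal": urgency_horizontal(h_err, h_err_rate),
        "side_task": side_task_urgency,
    }
    return max(urgencies, key=urgencies.get)
```

Reserve attention capacity then falls out of the fraction of time the side task wins this comparison: when both tracking errors are small, attention is free for the stylus task.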
Mittal, R.; Dong, H.; Bozkurttas, M.; Najjar, F.M.; Vargas, A.; von Loebbecke, A.
2010-01-01
A sharp interface immersed boundary method for simulating incompressible viscous flow past three-dimensional immersed bodies is described. The method employs a multi-dimensional ghost-cell methodology to satisfy the boundary conditions on the immersed boundary and is designed to handle highly complex three-dimensional, stationary, moving and/or deforming bodies. The complex immersed surfaces are represented by grids consisting of unstructured triangular elements, while the flow is computed on non-uniform Cartesian grids. The paper describes the salient features of the methodology with special emphasis on the immersed boundary treatment for stationary and moving boundaries. Simulations of a number of canonical two- and three-dimensional flows are used to verify the accuracy and fidelity of the solver over a range of Reynolds numbers. Flows past suddenly accelerated bodies are used to validate the solver for moving boundary problems. Finally, two cases inspired by biology, involving highly complex three-dimensional bodies, are simulated in order to demonstrate the versatility of the method. PMID:20216919
Quantum optical emulation of molecular vibronic spectroscopy using a trapped-ion device.
Shen, Yangchao; Lu, Yao; Zhang, Kuan; Zhang, Junhua; Zhang, Shuaining; Huh, Joonsuk; Kim, Kihwan
2018-01-28
Molecules are among the most demanding quantum systems to simulate on quantum computers because of their complexity and the emergent role of quantum effects. The recent theoretical proposal of Huh et al. (Nature Photon. 9, 615 (2015)) showed that a multi-photon network with a Gaussian input state can simulate a molecular spectroscopic process. Here, we present the first quantum device that generates a molecular spectroscopic signal with the phonons in a trapped-ion system, using SO2 as an example. In order to perform reliable Gaussian sampling, we develop the essential experimental technology with phonons, which includes the phase-coherent manipulation of displacement, squeezing, and rotation operations with multiple modes in a single realization. The required quantum optical operations are implemented through Raman laser beams. The molecular spectroscopic signal is reconstructed from collective projection measurements of the two phonon modes. Our experimental demonstration will pave the way to large-scale molecular quantum simulations, which are classically intractable but would be easily verifiable by real molecular spectroscopy.
NASA Astrophysics Data System (ADS)
Menicucci, D. F.
1986-01-01
The performance of a photovoltaic (PV) system is affected by its mounting configuration. The optimal configuration is unclear because of a lack of experience and data. Sandia National Laboratories, Albuquerque (SNLA), has conducted a controlled field experiment to compare four of the most common types of module mounting. The data from the experiment were used to verify the accuracy of PVFORM, a new computer program that simulates PV performance. PVFORM was then used to simulate the performance of identical PV modules in different mounting configurations at 10 sites throughout the US. This report describes the module mounting configurations, the experimental methods used, the specialized statistical techniques used in the analysis, and the final results of the effort. The module mounting configurations are rank-ordered at each site according to their annual and seasonal energy production, and each is briefly discussed in terms of its advantages and disadvantages in various applications.
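A performance model in the spirit of PVFORM scales power with plane-of-array irradiance and derates it with cell temperature, which is why mounting configuration (and hence module cooling) matters. The sketch below uses the simple NOCT cell-temperature approximation and illustrative coefficients; it is not PVFORM's fitted thermal model.

```python
def pv_power(G_poa, T_amb, p_stc_kw=1.0, gamma=-0.004, noct=45.0):
    """PV performance sketch: power scales with plane-of-array irradiance
    G_poa (W/m^2) and derates linearly with cell temperature above 25 C.
    Cell temperature uses the crude NOCT approximation, which ignores wind;
    mounting configuration enters through how hot the modules run."""
    T_cell = T_amb + (noct - 20.0) / 800.0 * G_poa
    return p_stc_kw * (G_poa / 1000.0) * (1.0 + gamma * (T_cell - 25.0))

# Same 1 kW (STC) array: hot noon conditions vs a cool, lower-irradiance hour
p_hot = pv_power(G_poa=1000.0, T_amb=30.0)
p_cool = pv_power(G_poa=500.0, T_amb=10.0)
```

A rack-mounted array with free airflow runs cooler than a roof-integrated one, so its effective derate is smaller; that is the kind of difference the SNLA field experiment was designed to quantify.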
An analytical study of reduced-gravity propellant settling
NASA Technical Reports Server (NTRS)
Bradshaw, R. D.; Kramer, J. L.; Masica, W. J.
1974-01-01
Full-scale propellant reorientation flow dynamics for the Centaur D-1T fuel tank were analyzed. A computer code using the simplified marker and cell technique was modified to include the capability for a variable-grid mesh configuration. Use of smaller cells near the boundary, near baffles, and in corners provides improved flow resolution. Two drop tower model cases were simulated to verify program validity: one case without baffles, the other with baffles and geometry identical to the Centaur D-1T. The new code successfully reproduced the drop tower flow phenomena, and baffles proved a positive factor in the settling flow. Two full-scale Centaur D-1T cases were simulated using parameters based on the Titan/Centaur proof flight. These flow simulations indicated the time to clear the vent area and the time to orient and collect the propellant. The results further indicated the complexity of the reorientation flow and the long time period required for settling.
NASA Astrophysics Data System (ADS)
Sun, Haijun; Hu, Chunbo; Zhu, Xiaofei
2017-10-01
A numerical study of the powder propellant pickup process at high pressure was presented in this paper, using a two-fluid model (TFM) with the kinetic theory of granular flow in the computational fluid dynamics software package ANSYS/Fluent. Simulations were conducted to evaluate the effects of initial pressure, initial powder packing rate and mean particle diameter on the flow characteristics in terms of velocity vector distribution, granular temperature, pressure drop, particle velocity and volume. The numerical results for pressure drop were also compared with experiments to verify the TFM model. The simulated results show that the pressure drop increases as the initial pressure increases, and that the granular temperature under different initial pressures and packing rates is almost the same in the area of the throttling orifice plate. There is, however, an appropriate particle size and packing rate that forms a "core-annulus" structure in the powder box, and the time-averaged velocity vector distribution of the solid phase is disordered.
Computer-aided analysis of cutting processes for brittle materials
NASA Astrophysics Data System (ADS)
Ogorodnikov, A. I.; Tikhonov, I. N.
2017-12-01
This paper is focused on 3D computer simulation of cutting processes for brittle materials and silicon wafers. Computer-aided analysis of wafer scribing and dicing is carried out with the use of the ANSYS CAE (computer-aided engineering) software, and a parametric model of the processes is created by means of the internal ANSYS APDL programming language. Different types of tool tip geometry are analyzed to obtain internal stresses, such as a four-sided pyramid with an included angle of 120° and a tool inclination angle to the normal axis of 15°. The quality of the workpieces after cutting is studied by optical microscopy to verify the FE (finite-element) model. The disruption of the material structure during scribing occurs near the scratch and propagates into the wafer or over its surface at short range. The deformation area along the scratch looks like a ragged band, but the stressed width is rather small. The theory of cutting brittle semiconductor and optical materials is developed on the basis of the advanced theory of metal turning. The fall of stress intensity along the normal from the tip point to the scribe line can be predicted using the developed theory and the verified FE model. The crystal quality and the dimensions of defects are determined by the mechanics of scratching, which depends on the shape of the diamond tip, the scratching direction, the velocity of the cutting tool and the applied force loads. The disruption is a rate-sensitive process, and it depends on the cutting thickness. The application of numerical techniques, such as FE analysis, to cutting problems enhances understanding and promotes the further development of existing machining technologies.
Study of hypervelocity projectile impact on thick metal plates
Roy, Shawoon K.; Trabia, Mohamed; O’Toole, Brendan; ...
2016-01-01
Hypervelocity impacts generate extreme pressure and shock waves in impacted targets, which undergo severe localized deformation within a few microseconds. These impact experiments pose unique challenges in terms of obtaining accurate measurements, and simulating them is likewise not straightforward. This paper proposes an approach to experimentally measure the velocity of the back surface of an A36 steel plate impacted by a projectile. All experiments used a combination of a two-stage light-gas gun and the photonic Doppler velocimetry (PDV) technique. The experimental data were used to benchmark and verify computational studies. Two different finite-element methods were used to simulate the experiments: Lagrangian-based smooth particle hydrodynamics (SPH) and an Eulerian-based hydrocode. Both codes used the Johnson-Cook material model and the Mie-Grüneisen equation of state. Experiments and simulations were compared on the basis of the physical damage area and the back-surface velocity. The results of this study showed that the proposed simulation approaches could be used to reduce the need for expensive experiments.
Superhydrophobic surfaces: From nature to biomimetic through VOF simulation.
Liu, Chunbao; Zhu, Ling; Bu, Weiyang; Liang, Yunhong
2018-04-01
The contact angle, surface structure and chemical composition of Canna leaves were investigated. Based on the surface structure of Canna leaves observed by Scanning Electron Microscopy (SEM), a CFD (Computational Fluid Dynamics) model was established and the volume of fluid (VOF) method was used to simulate the process of a droplet impacting the surface; a smooth surface was also modeled for comparison, verifying that the surface structure is an important factor in the superhydrophobic properties. Based on the study of the Canna leaf and the VOF simulation of its surface structure, superhydrophobic samples were processed successfully and showed good superhydrophobic properties, with a contact angle of 156 ± 1 degrees. A high-speed camera (5000 frames per second) was used to assess droplet movement and determine the contact time of the samples, which was 13.1 ms. The results show that the artificial superhydrophobic surface performs its superhydrophobic function well, and that the VOF simulation method is an efficient, accurate and low-cost step before machining artificial superhydrophobic samples. Copyright © 2018 Elsevier Ltd. All rights reserved.
THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS
Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel
2010-01-01
Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfeiffer, M., E-mail: mpfeiffer@irs.uni-stuttgart.de; Nizenkov, P., E-mail: nizenkov@irs.uni-stuttgart.de; Mirza, A., E-mail: mirza@irs.uni-stuttgart.de
2016-02-15
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods: the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Subsequently, two numerical methods used for sampling energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and comparison to experimental measurements of a hypersonic carbon-dioxide flow around a flat-faced cylinder.
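The random-walk Metropolis idea for sampling vibrational energies can be illustrated on a single simple-harmonic-oscillator mode, where the target Boltzmann distribution over quantum levels is known in closed form. This is a sketch of the sampling principle only, not the paper's DSMC implementation; the characteristic temperature is a rough CO2 bending-mode value.

```python
import math, random

random.seed(1)

def metropolis_vibrational_levels(theta_vib, T, n_steps=200000):
    """Random-walk Metropolis over SHO quantum levels v = 0, 1, 2, ...
    with target distribution P(v) proportional to exp(-v*theta_vib/T).
    Proposals step one level up or down; invalid (v < 0) moves are
    rejected, leaving the chain in its current state."""
    v = 0
    samples = []
    for _ in range(n_steps):
        v_new = v + random.choice((-1, 1))
        if v_new >= 0 and random.random() < math.exp(-(v_new - v) * theta_vib / T):
            v = v_new
        samples.append(v)
    return samples

samples = metropolis_vibrational_levels(theta_vib=960.0, T=2000.0)
mean_v = sum(samples) / len(samples)
# Analytical mean level of the Boltzmann SHO distribution
exact = 1.0 / (math.exp(960.0 / 2000.0) - 1.0)
```

The advantage claimed in the abstract comes from running such a walk jointly over several vibrational modes, so one proposal per time step replaces per-mode rejection sampling.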
The effect of dense gas dynamics on loss in ORC transonic turbines
NASA Astrophysics Data System (ADS)
Durá Galiana, FJ; Wheeler, APS; Ong, J.; Ventura, CA de M.
2017-03-01
This paper describes a number of recent investigations into the effect of dense gas dynamics on ORC transonic turbine performance. We describe a combination of experimental, analytical and computational studies which are used to determine how, in particular, trailing-edge loss changes with the choice of working fluid. A Ludwieg tube transient wind tunnel is used to simulate a supersonic base flow which mimics an ORC turbine vane trailing-edge flow. Experimental measurements of wake profiles and trailing-edge base pressure with different working fluids are used to validate high-order CFD simulations. In order to capture the correct mixing in the base region, Large-Eddy Simulations (LES) are performed and verified against the experimental data by comparing LES runs with different spatial and temporal resolutions. RANS and Detached-Eddy Simulation (DES) are also compared with experimental data. The effect of different modelling methods and working fluids on mixed-out loss is then determined. Current results indicate that LES gives the closest agreement with the experiments, and dense gas effects are consistently predicted to increase loss.
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
NASA Astrophysics Data System (ADS)
Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.
2016-02-01
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods: the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Subsequently, two numerical methods used for sampling energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and comparison to experimental measurements of a hypersonic carbon-dioxide flow around a flat-faced cylinder.
Model-based verification and validation of the SMAP uplink processes
NASA Astrophysics Data System (ADS)
Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.
Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increase geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between designing the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes allow, by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.
Blind Quantum Signature with Blind Quantum Computation
NASA Astrophysics Data System (ADS)
Li, Wei; Shi, Ronghua; Guo, Ying
2017-04-01
Blind quantum computation allows a client without quantum abilities to interact with a quantum server to perform an unconditionally secure computing protocol while protecting the client's privacy. Motivated by the confidentiality of blind quantum computation, a blind quantum signature scheme with a concise structure is designed. Different from traditional signature schemes, the signing and verifying operations are performed through measurement-based quantum computation. Inputs to the blind quantum computation are securely controlled with multi-qubit entangled states. The unique signature of the transmitted message is generated by the signer without leaking information over imperfect channels, while the receiver can verify the validity of the signature using the quantum matching algorithm. The security is guaranteed by the entanglement of the quantum system used for blind quantum computation. The scheme provides a potential practical application for e-commerce in cloud computing and first-generation quantum computation.
Computer-Aided Resolution of an Experimental Paradox in Bacterial Chemotaxis
Abouhamad, Walid N.; Bray, Dennis; Schuster, Martin; Boesch, Kristin C.; Silversmith, Ruth E.; Bourret, Robert B.
1998-01-01
Escherichia coli responds to its environment by means of a network of intracellular reactions which process signals from membrane-bound receptors and relay them to the flagellar motors. Although characterization of the reactions in the chemotaxis signaling pathway is sufficiently complete to construct computer simulations that predict the phenotypes of mutant strains with a high degree of accuracy, two previous experimental investigations of the activity remaining upon genetic deletion of multiple signaling components yielded several contradictory results (M. P. Conley, A. J. Wolfe, D. F. Blair, and H. C. Berg, J. Bacteriol. 171:5190–5193, 1989; J. D. Liu and J. S. Parkinson, Proc. Natl. Acad. Sci. USA 86:8703–8707, 1989). For example, “building up” the pathway by adding back CheA and CheY to a gutted strain lacking chemotaxis genes resulted in counterclockwise flagellar rotation whereas “breaking down” the pathway by deleting chemotaxis genes except cheA and cheY resulted in alternating episodes of clockwise and counterclockwise flagellar rotation. Our computer simulation predicts that trace amounts of CheZ expressed in the gutted strain could account for this difference. We tested this explanation experimentally by constructing a mutant containing a new deletion of the che genes that cannot express CheZ and verified that the behavior of strains built up from the new deletion does in fact conform to both the phenotypes observed for breakdown strains and computer-generated predictions. Our findings consolidate the present view of the chemotaxis signaling pathway and highlight the utility of molecularly based computer models in the analysis of complex biochemical networks. PMID:9683468
NASA Astrophysics Data System (ADS)
Dang, Jie; Chen, Hao
2016-12-01
The methodology and procedures are discussed for designing merchant ships with fully integrated and optimized hull-propulsion systems using asymmetric aftbodies. Computational fluid dynamics (CFD) has been used to evaluate the powering performance through massive calculations with automatic deformation algorithms for the hull forms and the propeller blades. Comparative model tests of the designs against optimized symmetric hull forms have been carried out to verify the efficiency gain. More than 6% improvement in the propulsive efficiency of an oil tanker has been measured during the model tests. Dedicated sea trials show good agreement with the performance predicted from the test results.
Comparison of OpenFOAM and EllipSys3D actuator line methods with (NEW) MEXICO results
NASA Astrophysics Data System (ADS)
Nathan, J.; Meyer Forsting, A. R.; Troldborg, N.; Masson, C.
2017-05-01
The Actuator Line Method has existed for more than a decade and has become a well-established choice for simulating wind rotors in computational fluid dynamics. Numerous implementations exist and are used in the wind energy research community. These codes were verified against experimental data such as the MEXICO experiment, but verification against other codes was often made only on a very broad scale. Therefore, this study first attempts a validation by comparing two different implementations, namely an adapted version of SOWFA/OpenFOAM and EllipSys3D, and then a verification by comparing against experimental results from the MEXICO and NEW MEXICO experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, Israr, E-mail: iak-2000plus@yahoo.com; Saaban, Azizan Bin, E-mail: azizan.s@uum.edu.my; Ibrahim, Adyda Binti, E-mail: adyda@uum.edu.my
This paper addresses a comparative computational study of the synchronization quality, cost and convergence speed for two pairs of identical chaotic and hyperchaotic systems with unknown time-varying parameters. It is assumed that the unknown time-varying parameters are bounded. Based on the Lyapunov stability theory and using the adaptive control method, a single proportional controller is proposed to achieve the goal of complete synchronization. Accordingly, appropriate adaptive laws are designed to identify the unknown time-varying parameters. The designed control strategy is easy to implement in practice. Numerical simulation results are provided to verify the effectiveness of the proposed synchronization scheme.
Experimental evaluation of a wind shear alert and energy management display
NASA Technical Reports Server (NTRS)
Kraiss, K.-F.; Baty, D. L.
1978-01-01
A method is proposed for onboard measurement and display of specific wind shear and energy-management data derived from an air data computer. An open-loop simulation study is described which was carried out to verify the feasibility of this display concept, and whose results were used as a basis to develop the corresponding cockpit instrumentation. The task was to fly a three-degree landing approach under various shear conditions, with and without specific information on the shear. Improved performance due to the augmented cockpit information was observed: critical shears with increasing tailwinds could be handled more consistently and with less deviation from the glide path.
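The energy-management quantity behind such a display is the aircraft's specific energy and its rate: a headwind-to-tailwind shear drains airspeed and shows up immediately in the energy rate, before the glide-path deviation develops. A sketch with illustrative approach numbers (not the study's data):

```python
import math

G = 9.81  # m/s^2

def specific_energy(h, V):
    """Total specific energy per unit weight: E = h + V^2 / (2g)."""
    return h + V**2 / (2 * G)

def energy_rate(V, V_dot, h_dot):
    """Energy rate: dE/dt = h_dot + V*V_dot/g. A shear that decays
    airspeed (V_dot < 0) appears as an extra energy loss the pilot
    cannot recover without adding thrust."""
    return h_dot + V * V_dot / G

# Three-degree approach at 70 m/s; then a shear decays airspeed at 1 m/s^2
h_dot = -70.0 * math.sin(math.radians(3.0))   # roughly -3.66 m/s sink rate
dE_dt_no_shear = energy_rate(70.0, 0.0, h_dot)
dE_dt_shear = energy_rate(70.0, -1.0, h_dot)
```

Displaying dE/dt (or the shear term alone) is what gives the pilot the earlier, more consistent cue reported above.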
A Global Registration Algorithm of the Single-Closed Ring Multi-Stations Point Cloud
NASA Astrophysics Data System (ADS)
Yang, R.; Pan, L.; Xiang, Z.; Zeng, H.
2018-04-01
To address the global registration problem of a single-closed-ring, multi-station point cloud, a formula for the error of the rotation matrix was constructed according to the definition of error. A global registration algorithm for the multi-station point cloud was derived to minimize this rotation-matrix error, and fast-computing formulas for the transformation matrix were given, together with implementation steps and a simulation experiment scheme. Comparing three different processing schemes for the multi-station point cloud, the experimental results verified the effectiveness of the new global registration method, which can effectively complete the global registration of the point cloud.
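The single-closed-ring constraint can be illustrated in an angle-only 2D sketch: the pairwise registrations around the ring must compose to the identity, and the accumulated misclosure is distributed over the links. This is a simplified analogue of minimizing the rotation-matrix error, with invented numbers, not the paper's algorithm.

```python
import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def distribute_ring_misclosure(pairwise_angles):
    """For a closed ring of n stations, the n pairwise rotations should
    compose to the identity. Distribute the angular misclosure equally
    over all links so the ring closes exactly."""
    misclosure = sum(pairwise_angles) % (2 * np.pi)
    if misclosure > np.pi:                       # wrap to (-pi, pi]
        misclosure -= 2 * np.pi
    correction = -misclosure / len(pairwise_angles)
    return [a + correction for a in pairwise_angles]

# Four stations: nominal 90-degree steps contaminated with registration noise
noisy = [np.pi / 2 + e for e in (0.01, -0.02, 0.015, 0.005)]
adjusted = distribute_ring_misclosure(noisy)
ring = np.eye(2)
for a in adjusted:                               # compose around the ring
    ring = rot2d(a) @ ring
```

In the full 3D problem the same closure idea applies to rotation matrices rather than scalar angles, which is why the paper minimizes a matrix-valued error instead of a simple angle sum.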
Parallel logic gates in synthetic gene networks induced by non-Gaussian noise.
Xu, Yong; Jin, Xiaoqin; Zhang, Huiqing
2013-11-01
The recent idea of logical stochastic resonance is verified in synthetic gene networks induced by non-Gaussian noise. We realize switching between two kinds of logic gates at an optimal, moderate noise intensity by varying two different tunable parameters in a single gene network. Furthermore, in order to obtain more logic operations, and thus additional information-processing capacity, we obtain two complementary logic gates in a two-dimensional toggle-switch model and realize the transformation between the two logic gates by changing different parameters. These simulation results contribute to improving the computational power and functionality of the networks.
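Logical stochastic resonance can be sketched with a driven bistable element: two inputs tilt a double-well potential, moderate noise makes the time-averaged output reliable, and changing a single bias parameter switches which gate is realized. Gaussian noise and the parameter values below are illustrative simplifications; the gene-network model in the paper uses non-Gaussian noise and different dynamics.

```python
import numpy as np

rng = np.random.default_rng(42)

def logic_response(I1, I2, bias, D=0.4, T=1000.0, dt=0.01):
    """Bistable element x' = x - x^3 + bias + I1 + I2 + noise.
    Logic levels are encoded as -0.5 (logic 0) and +0.5 (logic 1).
    Output is 1 if x spends most of its time in the positive well."""
    n = int(T / dt)
    noise = np.sqrt(2 * D * dt) * rng.standard_normal(n)
    x = 0.0
    positive = 0
    for w in noise:                      # Euler-Maruyama integration
        x += dt * (x - x**3 + bias + I1 + I2) + w
        positive += x > 0.0
    return int(positive > n // 2)
```

With bias = +0.3 the element behaves as an OR gate (any logic-1 input tips the positive well); flipping the bias to -0.3 turns the same element into an AND gate, mirroring the parameter-controlled gate switching described above.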
Two-dimensional designed fabrication of subwavelength grating HCG mirror on silicon-on-insulator
NASA Astrophysics Data System (ADS)
Huang, Shen-Che; Hong, Kuo-Bin; Lu, Tien-Chang; He, Sailing
2016-03-01
We designed and fabricated two-dimensional high-contrast subwavelength grating (HCG) mirrors. Computer-aided software was employed to verify the structural parameters, including grating periods and filling factors. From the optimized simulation results, the designed HCG structure has a wide reflection stopband (reflectivity R > 90%) of over 200 nm, centered at the telecommunication wavelength. The optimized HCG mirrors were fabricated by electron-beam lithography and an inductively coupled plasma process. The experimental results were almost consistent with the calculated data. This achievement should benefit numerous photonic devices, in particular integrated HCG VCSELs, in the future.
Sparsity-aware multiple relay selection in large multi-hop decode-and-forward relay networks
NASA Astrophysics Data System (ADS)
Gouissem, A.; Hamila, R.; Al-Dhahir, N.; Foufou, S.
2016-12-01
In this paper, we propose and investigate two novel techniques to perform multiple relay selection in large multi-hop decode-and-forward relay networks. The two proposed techniques exploit sparse signal recovery theory to select multiple relays using the orthogonal matching pursuit algorithm and outperform state-of-the-art techniques in terms of outage probability and computational complexity. To reduce the amount of collected channel state information (CSI), we propose a limited-feedback scheme where only a limited number of relays feed back their CSI. Furthermore, a detailed performance-complexity tradeoff investigation is conducted for the different studied techniques and verified by Monte Carlo simulations.
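The selection step rests on orthogonal matching pursuit; a minimal sketch of OMP on a toy sparse-recovery analogue follows. The problem sizes, the orthonormal "channel" matrix, and the 3-relay sparsity are illustrative assumptions, not quantities from the paper:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the column most correlated
    with the residual, then re-fit the selected set by least squares."""
    m, n = A.shape
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x

# 3 "active relays" out of 20 candidate columns, noiseless observation
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((30, 20)))  # orthonormal columns
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.0, -0.8, 0.5]
y = Q @ x_true
x_hat = omp(Q, y, 3)
```

With orthonormal columns the correlations equal the true coefficients, so three greedy iterations recover the support exactly; the appeal for relay selection is that this costs far less than exhaustively searching relay subsets.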
1989 IEEE Aerospace Applications Conference, Breckenridge, CO, Feb. 12-17, 1989, Conference Digest
NASA Astrophysics Data System (ADS)
Recent advances in electronic devices for aerospace applications are discussed in reviews and reports. Topics addressed include large-aperture mm-wave antennas, a cross-array radiometer for spacecraft applications, a technique for computing the propagation characteristics of optical fibers, an analog light-wave system for improving microwave-telemetry data communication, and a ground demonstration of an orbital-debris radar. Consideration is given to a verifiable autonomous satellite control system, Inmarsat second-generation satellites for mobile communication, automated tools for data-base design and criteria for their selection, and a desk-top simulation work station based on the DSP96002 microprocessor chip.
Non-fragile multivariable PID controller design via system augmentation
NASA Astrophysics Data System (ADS)
Liu, Jinrong; Lam, James; Shen, Mouquan; Shu, Zhan
2017-07-01
In this paper, the issue of designing non-fragile H∞ multivariable proportional-integral-derivative (PID) controllers with derivative filters is investigated. To obtain the controller gains, the original system is associated with an extended system such that the PID controller design can be formulated as a static output-feedback control problem. By taking the system augmentation approach, conditions with slack matrices for solving the non-fragile H∞ multivariable PID controller gains are established. Based on these results, linear matrix inequality (LMI)-based iterative algorithms are provided to compute the controller gains. Simulations are conducted to verify the effectiveness of the proposed approaches.
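The role of the derivative filter can be illustrated with a scalar discrete-time sketch: a PID loop whose derivative term passes through a first-order low-pass filter, closed around a toy first-order plant. The plant, gains, and filter time constant are illustrative choices, not the paper's LMI-derived design:

```python
def simulate_pid(kp, ki, kd, tau_f, dt=0.01, t_end=10.0, setpoint=1.0):
    """Discrete PID with a filtered derivative term, closed around the
    toy plant dy/dt = -y + u (explicit Euler integration)."""
    n = int(t_end / dt)
    y = 0.0
    integ = 0.0
    d_filt = 0.0
    e_prev = setpoint - y
    for _ in range(n):
        e = setpoint - y
        integ += e * dt
        d_raw = (e - e_prev) / dt
        # first-order low-pass on the raw derivative tames noise amplification
        d_filt += (dt / (tau_f + dt)) * (d_raw - d_filt)
        u = kp * e + ki * integ + kd * d_filt
        y += (-y + u) * dt
        e_prev = e
    return y

y_final = simulate_pid(kp=2.0, ki=1.0, kd=0.1, tau_f=0.05)
```

The integral term drives the steady-state error to zero; "non-fragility" in the paper's sense means the closed loop tolerates small perturbations of `kp`, `ki`, `kd` themselves, which this sketch does not attempt to certify.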
High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair.
Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K
2018-01-01
Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potentially harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in situ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed.
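Convolution-based diffusion, mentioned above as a performance technique, amounts to repeatedly convolving the chemical field with a fixed stencil. A pure-NumPy stand-in with periodic boundaries (a simplification of the paper's GPU implementation; grid size, diffusion coefficient, and the point source are illustrative):

```python
import numpy as np

def diffuse(field, d=0.2, steps=10):
    """Explicit diffusion sweeps via a 5-point Laplacian stencil, i.e.
    convolution with a fixed kernel (periodic boundaries via np.roll)."""
    f = field.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
        f += d * lap
    return f

grid = np.zeros((32, 32))
grid[16, 16] = 100.0          # point release of a signaling chemical
out = diffuse(grid)
```

With d ≤ 0.25 the update is a convex combination of neighboring values, so the scheme is stable, keeps concentrations non-negative, and conserves total mass exactly under periodic boundaries.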
Distributed collaborative response surface method for mechanical dynamic assembly reliability design
NASA Astrophysics Data System (ADS)
Bai, Guangchen; Fei, Chengwei
2013-11-01
Because of the randomness of the many impact factors influencing the dynamic assembly relationships of complex machinery, the reliability analysis of dynamic assembly relationships needs to be accomplished from a probabilistic perspective. To improve the accuracy and efficiency of dynamic assembly relationship reliability analysis, the mechanical dynamic assembly reliability (MDAR) theory and a distributed collaborative response surface method (DCRSM) are proposed. The mathematical model of the DCRSM is established based on the quadratic response surface function and verified by the assembly relationship reliability analysis of aeroengine high pressure turbine (HPT) blade-tip radial running clearance (BTRRC). Comparing the DCRSM, the traditional response surface method (RSM) and the Monte Carlo method (MCM), the results show that the DCRSM is not only able to accomplish the computational task that is impossible for the other methods when the number of simulations exceeds 100,000, but its computational precision is also basically consistent with the MCM and improved by 0.40-4.63% relative to the RSM; furthermore, the computational efficiency of the DCRSM is about 188 times that of the MCM and 55 times that of the RSM under 10,000 simulations. The DCRSM is demonstrated to be a feasible and effective approach for markedly improving the computational efficiency and accuracy of MDAR analysis. Thus, the proposed research provides a promising theory and method for MDAR design and optimization, and opens a novel research direction of probabilistic analysis for developing high-performance, high-reliability aeroengines.
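The core of any response surface method is fitting a quadratic surrogate to a handful of expensive model evaluations and then running cheap Monte Carlo on the surrogate. A minimal two-variable sketch, with an illustrative limit-state function standing in for the BTRRC model:

```python
import numpy as np

def quad_basis(X):
    """Full second-order polynomial basis in two variables."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

def fit_quadratic_rs(X, y):
    """Least-squares fit of the quadratic response surface y ~ Phi(X) @ coef."""
    coef, *_ = np.linalg.lstsq(quad_basis(X), y, rcond=None)
    return coef

# Illustrative limit state (failure when negative), exactly quadratic here
limit_state = lambda X: 3.0 - X[:, 0]**2 - 0.5 * X[:, 1]**2

rng = np.random.default_rng(0)
X_doe = rng.standard_normal((50, 2))              # design of experiments
coef = fit_quadratic_rs(X_doe, limit_state(X_doe))

X_mc = rng.standard_normal((100000, 2))           # cheap MC on the surrogate
p_fail = np.mean(quad_basis(X_mc) @ coef < 0.0)
```

The 100,000 surrogate evaluations are just a matrix product, which is why the abstract reports orders-of-magnitude speedups over direct Monte Carlo; the distributed-collaborative aspect of the DCRSM (splitting the fit across sub-models) is not reproduced in this sketch.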
Fang, Pan; Hou, Yongjun; Nan, Yanghai
2015-01-01
A new mechanism is proposed to implement synchronization of two unbalanced rotors in a vibration system, which consists of a double vibro-body, two induction motors and spring foundations. The coupling relationship between the vibro-bodies is ascertained with the Laplace transformation method for the dynamics equation of the system obtained with the Lagrange equation. An analytical approach, the average method of modified small parameters, is employed to study the synchronization characteristics of the two unbalanced rotors, which is converted into the existence and stability of zero solutions for the non-dimensional differential equations of the angular velocity disturbance parameters. By assuming disturbance parameters that approach zero, the synchronization condition for the two rotors is obtained: the absolute value of the residual torque between the two motors must be equal to or less than the maximum of their coupling torques. Meanwhile, the stability criterion of synchronization is derived with the Routh-Hurwitz method, and the region of the stable phase difference is confirmed. Finally, computer simulations are performed to verify the correctness of the approximate solution of the theoretical computation for the stable phase difference between the two unbalanced rotors, and the results of the theoretical computation are in accordance with those of the computer simulations. To sum up, only when the parameters of the vibration system satisfy both the synchronization condition and the stability criterion can the two unbalanced rotors implement synchronized operation.
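The Routh-Hurwitz step reduces, for a third-order characteristic polynomial, to three sign conditions on the coefficients. A sketch for the cubic case (the order and the example polynomials are illustrative; the paper's actual characteristic equation is not reproduced here):

```python
def routh_hurwitz_cubic(a2, a1, a0):
    """Stability test for s^3 + a2*s^2 + a1*s + a0: all roots lie in the
    open left half-plane iff a2 > 0, a0 > 0 and a2*a1 > a0."""
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

# (s+1)^3 = s^3 + 3s^2 + 3s + 1 -> all roots at -1, stable
stable = routh_hurwitz_cubic(3.0, 3.0, 1.0)
# s^3 + s^2 + s + 2 violates a2*a1 > a0 -> unstable
unstable = routh_hurwitz_cubic(1.0, 1.0, 2.0)
```

In the synchronization analysis, these inequalities on the coefficients of the disturbance dynamics carve out the region of system parameters (and hence of stable phase difference) where the synchronized state persists.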
Simulation of dental collisions and occlusal dynamics in the virtual environment.
Stavness, I K; Hannam, A G; Tobias, D L; Zhang, X
2016-04-01
Semi-adjustable articulators have often been used to simulate occlusal dynamics, but advances in intra-oral scanning and computer software now enable dynamics to be modelled mathematically. Computer simulation of occlusal dynamics requires accurate virtual casts, records to register them and methods to handle mesh collisions during movement. Here, physical casts in a semi-adjustable articulator were scanned with a conventional clinical intra-oral scanner. A coordinate measuring machine was used to index their positions in intercuspation, protrusion, right and left laterotrusion, and to model features of the articulator. Penetrations between the indexed meshes were identified and resolved using restitution forces, and the final registrations were verified by distance measurements between dental landmarks at multiple sites. These sites were confirmed as closely approximating via measurements made from homologous transilluminated vinylpolysiloxane interocclusal impressions in the mounted casts. Movements between the indexed positions were simulated with two models in a custom biomechanical software platform. In model DENTAL, 6 degree-of-freedom movements were made to minimise deviation from a straight line path and also shaped by dynamic mesh collisions detected and resolved mathematically. In model ARTIC, the paths were further constrained by surfaces matching the control settings of the articulator. Despite these differences, the lower mid-incisor point paths were very similar in both models. The study suggests that mathematical simulation utilising interocclusal 'bite' registrations can closely replicate the primary movements of casts mounted in a semi-adjustable articulator. Additional indexing positions and appropriate software could, in some situations, replace the need for mechanical semi-adjustable articulation and/or its virtual representation.
The end-to-end simulator for the E-ELT HIRES high resolution spectrograph
NASA Astrophysics Data System (ADS)
Genoni, M.; Landoni, M.; Riva, M.; Pariani, G.; Mason, E.; Di Marcantonio, P.; Disseau, K.; Di Varano, I.; Gonzalez, O.; Huke, P.; Korhonen, H.; Li Causi, Gianluca
2017-06-01
We present the design, architecture and results of the End-to-End simulator model of the high resolution spectrograph HIRES for the European Extremely Large Telescope (E-ELT). The system can be used by both engineers and scientists as a tool to characterize the spectrograph. The model simulates the behavior of photons from the scientific object (modeled bearing in mind the main science drivers) to the detector, also considering calibration light sources, and allows evaluation of the different parameters of the spectrograph design. In this paper, we detail the architecture of the simulator and the computational model, which are strongly characterized by the modularity and flexibility that will be crucial in next-generation astronomical observation projects such as the E-ELT, given their high complexity and long design and development times. Finally, we present synthetic images obtained with the current version of the End-to-End simulator based on the E-ELT HIRES requirements (especially the high radial-velocity accuracy). Once ingested in the Data Reduction Software (DRS), they will allow verification that the instrument design can achieve the radial-velocity accuracy needed by the HIRES science cases.
A novel approach to simulate chest wall micro-motion for bio-radar life detection purpose
NASA Astrophysics Data System (ADS)
An, Qiang; Li, Zhao; Liang, Fulai; Chen, Fuming; Wang, Jianqi
2016-10-01
Volunteers are often recruited to serve as detection targets during research on bio-radar life-detection technology, but the experimental results are highly susceptible to the physical status of different individuals (shape, posture, etc.). To objectively evaluate radar system performance and life-detection algorithms, a standard detection target is urgently needed. This paper first proposes a system with quantitatively controllable parameters to simulate the chest-wall micro-motion caused mainly by breathing and heartbeat. It then analyzes the material and size selection of the scattering body mounted on the simulation system from the perspective of backscattering energy, employing a computational electromagnetic method to determine the exact scattering body. Finally, on-site experiments were carried out to verify the reliability of the simulation platform using an IR-UWB bio-radar. The experimental results show that the proposed system can simulate a real human target in three respects: respiration frequency, amplitude and body-surface scattering energy. Thus, it can substitute for a human target in radar-based non-contact life-detection research in various scenarios.
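The chest-wall micro-motion to be reproduced is commonly modeled as the sum of a respiratory and a cardiac sinusoid; the rates and millimetre amplitudes below are typical illustrative values, not the paper's calibrated parameters:

```python
import numpy as np

def chest_wall_motion(t, resp_hz=0.3, resp_mm=4.0, heart_hz=1.2, heart_mm=0.5):
    """Chest-wall displacement (mm) as the sum of respiratory and cardiac
    sinusoids; the respiratory component dominates in amplitude."""
    return (resp_mm * np.sin(2 * np.pi * resp_hz * t) +
            heart_mm * np.sin(2 * np.pi * heart_hz * t))

fs, dur = 100.0, 60.0                     # 100 Hz sampling over a 60 s record
t = np.arange(0.0, dur, 1.0 / fs)
x = chest_wall_motion(t)
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
dominant = freqs[np.argmax(spec)]         # strongest spectral line
```

A radar algorithm evaluated against such a platform can be scored on whether it recovers exactly these programmed spectral lines, which is the objectivity the abstract argues human subjects cannot provide.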
Zero dimensional model of atmospheric SMD discharge and afterglow in humid air
NASA Astrophysics Data System (ADS)
Smith, Ryan; Kemaneci, Efe; Offerhaus, Bjoern; Stapelmann, Katharina; Peter Brinkmann, Ralph
2016-09-01
A novel mesh-like Surface Micro Discharge (SMD) device designed for surface wound treatment is simulated by zero-dimensional models at multiple time scales. The chemical dynamics of the discharge are resolved in time at atmospheric pressure under humid conditions. The simulation tracks the particle densities of electrons, 26 ionic species and 26 reactive neutral species, including O3, NO and HNO3; the 53 species are coupled through 624 reactions within the simulated plasma discharge volume. The neutral species are allowed to diffuse into a diffusive gas regime, which is of primary interest. Two interdependent zero-dimensional models separated by nine orders of magnitude in temporal resolution are used to accomplish this, thereby reducing the computational load. Through variation of control parameters such as ignition frequency, deposited power density, duty cycle, humidity level and N2 content, the ideal operating conditions for the SMD device can be predicted. The described model has been verified by matching simulation parameters and comparing results to those of previous works. Current operating conditions of the experimental mesh-like SMD were matched and the results are compared to the simulations. Work supported by SFB TR 87.
A novel method for energy harvesting simulation based on scenario generation
NASA Astrophysics Data System (ADS)
Wang, Zhe; Li, Taoshen; Xiao, Nan; Ye, Jin; Wu, Min
2018-06-01
Energy harvesting networks (EHNs) are a new form of computer network: they convert ambient energy into usable electric energy and supply it as a primary or secondary power source to communication devices. However, most EHN studies describe the energy harvesting process with an analytical probability distribution function, which lacks authenticity and cannot accurately capture the actual situation. We propose an EHN simulation method based on scenario generation. Firstly, instead of fixing a probability distribution in advance, it uses optimal scenario reduction to generate representative single-period scenarios from historical data of the harvested energy. Secondly, it uses a homogeneous simulated annealing algorithm to generate optimal daily energy harvesting scenario sequences, giving a more accurate simulation of the random characteristics of the energy harvesting network. Taking actual wind power data as an example, the accuracy and stability of the method are verified by comparison with the real data. Finally, we apply the method to optimize network throughput; the optimal solution and data analysis indicate the feasibility and effectiveness of the proposed method for energy harvesting simulation.
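Scenario reduction can be sketched with a simple greedy forward selection: keep the k scenarios that minimize the probability-weighted distance of all scenarios to the retained set, then lump each dropped scenario's probability onto its nearest kept one. The paper's exact reduction algorithm is not specified here, and the toy two-regime harvest data are illustrative:

```python
import numpy as np

def reduce_scenarios(scenarios, probs, k):
    """Greedy forward selection of k representative scenarios, with
    probability redistribution onto the nearest retained scenario."""
    n = len(scenarios)
    dist = np.abs(scenarios[:, None] - scenarios[None, :])
    kept = []
    for _ in range(k):
        best, best_cost = None, np.inf
        for j in range(n):
            if j in kept:
                continue
            cost = np.sum(probs * dist[:, kept + [j]].min(axis=1))
            if cost < best_cost:
                best, best_cost = j, cost
        kept.append(best)
    nearest = np.argmin(dist[:, kept], axis=1)
    new_probs = np.array([probs[nearest == i].sum() for i in range(k)])
    return scenarios[kept], new_probs

# Hourly harvested-energy samples clustered around two regimes (toy data)
scen = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8])
p = np.full(6, 1.0 / 6.0)
reps, rp = reduce_scenarios(scen, p, 2)
```

The reduced pair (one representative per regime, each carrying half the probability mass) is the kind of compact scenario set the simulated annealing stage then sequences into daily profiles.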
Pârvu, Ovidiu; Gilbert, David
2016-01-01
Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). 
Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems. PMID:27187178
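Checking a specification against time series data, as the methodology above does, reduces at its core to evaluating temporal operators over a recorded trace. A drastically simplified sketch for the "eventually" (F) and "always" (G) operators, with an invented toy trace carrying both a numeric value and an emergent spatial property per time point:

```python
def eventually(series, pred):
    """F(pred): the property holds at some time point of the trace."""
    return any(pred(v) for v in series)

def always(series, pred):
    """G(pred): the property holds at every time point of the trace."""
    return all(pred(v) for v in series)

# Toy multiscale trace: a concentration plus the area of a multicellular
# population at each time point (illustrative data, not from the paper).
trace = [
    {"conc": 0.1, "area": 5.0},
    {"conc": 0.9, "area": 7.5},
    {"conc": 1.4, "area": 9.0},
]
spec_ok = (eventually(trace, lambda s: s["conc"] > 1.0) and
           always(trace, lambda s: s["area"] < 10.0))
```

The paper's contribution is far richer (approximate probabilistic checking over many simulated traces, with spatial structure detection), but every such check bottoms out in trace predicates of this shape.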
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew
2014-03-20
Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.
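The pore-network machinery underlying both MCM and SSM starts from a pressure solve: mass balance (Kirchhoff's law) at each interior pore, with throat conductances linking pores. A 4-pore toy network sketch (the network and conductance values are illustrative; this shows the shared flow step, not the streamline-splitting transport step itself):

```python
import numpy as np

def solve_network(conductance, inlet, outlet, p_in=1.0, p_out=0.0):
    """Solve for pore pressures: fixed pressures at the inlet/outlet pores,
    zero net volumetric flux at every interior pore."""
    n = conductance.shape[0]
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if i in (inlet, outlet):
            A[i, i] = 1.0
            b[i] = p_in if i == inlet else p_out
        else:
            for j in range(n):
                if conductance[i, j] > 0:
                    A[i, i] += conductance[i, j]
                    A[i, j] -= conductance[i, j]
    return np.linalg.solve(A, b)

# g[i, j] is the throat conductance between pores i and j (0 = no throat)
g = np.array([[0.0, 1.0, 2.0, 0.0],
              [1.0, 0.0, 0.0, 1.5],
              [2.0, 0.0, 0.0, 0.5],
              [0.0, 1.5, 0.5, 0.0]])
p = solve_network(g, inlet=0, outlet=3)
q_in = sum(g[0, j] * (p[0] - p[j]) for j in range(4))
q_out = sum(g[j, 3] * (p[j] - p[3]) for j in range(4))
```

Once pressures (and thus throat fluxes) are known, MCM mixes all incoming solute perfectly in each pore, whereas SSM apportions it among outgoing throats along streamlines; the inflow-outflow balance below is the invariant both must respect.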
Simulation study on the maximum continuous working condition of a power plant boiler
NASA Astrophysics Data System (ADS)
Wang, Ning; Han, Jiting; Sun, Haitian; Cheng, Jiwei; Jing, Ying'ai; Li, Wenbo
2018-05-01
First, the boiler is briefly introduced and the mathematical model and boundary conditions are determined; a numerical simulation of the boiler under the BMCR (boiler maximum continuous rating) condition is then carried out, followed by an analysis of the temperature field under the BMCR condition. The simulation results are verified against the boiler's actual test results and the hot-state BMCR output test results. The main conclusions are as follows: the position and size of the inscribed circle in the furnace and the furnace temperature distribution at different elevations are compared with the test results, verifying the accuracy of the numerical simulation.
Unconditionally verifiable blind quantum computation
NASA Astrophysics Data System (ADS)
Fitzsimons, Joseph F.; Kashefi, Elham
2017-07-01
Blind quantum computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output, and computation remain private. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system leading to consequences for complexity theory. We previously proposed [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science, Atlanta, 2009 (IEEE, Piscataway, 2009), p. 517] a universal and unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. In this paper we extend that protocol with additional functionality allowing blind computational basis measurements, which we use to construct another verifiable BQC protocol based on a different class of resource states. We rigorously prove that the probability of failing to detect an incorrect output is exponentially small in a security parameter, while resource overhead remains polynomial in this parameter. This resource state allows entangling gates to be performed between arbitrary pairs of logical qubits with only constant overhead. This is a significant improvement on the original scheme, which required that all computations first be put into a nearest-neighbor form, incurring linear overhead in the number of qubits. Such an improvement has important consequences for efficiency and fault-tolerance thresholds.
Incinerator ash dissolution model for the system: Plutonium, nitric acid and hydrofluoric acid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, E V
1988-06-01
This research accomplished two goals. The first was to develop a computer program to simulate a cascade dissolver system. This program would be used to predict the bulk rate of dissolution in incinerator ash. The other goal was to verify the model in a single-stage dissolver system using Dy2O3. PuO2 (and all of the species in the incinerator ash) was assumed to exist as spherical particles. A model was used to calculate the bulk rate of plutonium oxide dissolution using fluoride as a catalyst. Once the bulk rate of PuO2 dissolution and the dissolution rate of all soluble species were calculated, mass and energy balances were written. A computer program simulating the cascade dissolver system was then developed. Tests were conducted on a single-stage dissolver. A simulated incinerator ash mixture was made and added to the dissolver. CaF2 was added to the mixture as a catalyst. A 9M HNO3 solution was pumped into the dissolver system. Samples of the dissolver effluent were analyzed for dissolved and F concentrations. The computer program proved satisfactory in predicting the F concentrations in the dissolver effluent. The experimental sparge air flow rate was predicted to within 5.5%. The experimental percentage of solids dissolved (51.34%) compared favorably to the percentage of incinerator ash dissolved (47%) in previous work. No general conclusions on model verification could be reached. 56 refs., 11 figs., 24 tabs.
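The spherical-particle assumption above leads naturally to a shrinking-sphere dissolution model; a minimal sketch, assuming a constant surface-recession rate (the functional form is the standard shrinking-core/sphere result, and the particle size and rate constant are illustrative, not the paper's fitted values):

```python
def fraction_dissolved(t, r0, k):
    """Shrinking-sphere model: the radius recedes linearly, r(t) = r0 - k*t,
    so the dissolved mass fraction is 1 - (r/r0)^3 until the particle is gone."""
    r = max(r0 - k * t, 0.0)
    return 1.0 - (r / r0) ** 3

# Illustrative particle: 10 um initial radius, 0.5 um/min surface recession
f_half = fraction_dissolved(10.0, 10.0, 0.5)   # after 10 min
f_done = fraction_dissolved(20.0, 10.0, 0.5)   # r0/k = 20 min -> complete
```

Summing this expression over the particle-size distribution of each ash species, with fluoride-catalyzed rate constants, gives the bulk dissolution rate fed into the cascade dissolver mass and energy balances.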
Single-pixel computational ghost imaging with helicity-dependent metasurface hologram.
Liu, Hong-Chao; Yang, Biao; Guo, Qinghua; Shi, Jinhui; Guan, Chunying; Zheng, Guoxing; Mühlenbernd, Holger; Li, Guixin; Zentgraf, Thomas; Zhang, Shuang
2017-09-01
Different optical imaging techniques are based on different characteristics of light. By controlling the abrupt phase discontinuities with different polarized incident light, a metasurface can host a phase-only and helicity-dependent hologram. In contrast, ghost imaging (GI) is an indirect imaging modality to retrieve the object information from the correlation of the light intensity fluctuations. We report single-pixel computational GI with a high-efficiency reflective metasurface in both simulations and experiments. Playing a fascinating role in switching the GI target with different polarized light, the metasurface hologram generates helicity-dependent reconstructed ghost images and successfully introduces an additional security lock in a proposed optical encryption scheme based on the GI. The robustness of our encryption scheme is further verified with the vulnerability test. Building the first bridge between the metasurface hologram and the GI, our work paves the way to integrate their applications in the fields of optical communications, imaging technology, and security.
NASA Astrophysics Data System (ADS)
Kumar, Ajay; Raghuwanshi, Sanjeev Kumar
2016-06-01
The optical switching activity is one of the most essential phenomena in the optical domain. Electro-optic switching phenomena can be used to build effective combinational and sequential logic circuits. Processing digital computation in the optical domain brings the considerable advantages of optical communication technology, e.g. immunity to electromagnetic interference, compact size, signal security, parallel computing and larger bandwidth. The paper describes an efficient technique to implement a single-bit magnitude comparator and a 1's complement calculator using the electro-optic effect. The proposed techniques are simulated in MATLAB, and their suitability is verified using the highly reliable Opti-BPM software. It is interesting to analyze the circuits in order to specify device parameters that optimize performance measures such as crosstalk, extinction ratio, and signal losses through the curved and straight waveguide sections.
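The logic functions mentioned (single-bit magnitude comparator, 1's complement) are fixed truth tables regardless of whether they are realized electro-optically or electronically. A hypothetical sketch of those truth tables, independent of the optical implementation described in the paper:

```python
def magnitude_comparator_1bit(a, b):
    """Single-bit magnitude comparator truth table: returns the
    tuple (a > b, a == b, a < b) as 0/1 values."""
    return (a & ~b & 1, 1 - (a ^ b), ~a & b & 1)

def ones_complement(bits):
    """1's complement calculator: invert every bit of the word."""
    return [bit ^ 1 for bit in bits]
```

In the electro-optic realization, each of these Boolean terms would map to a switching state of a Mach-Zehnder-type waveguide element rather than to a software operation.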
Hu, Yang; Li, Decai; Shu, Shi; Niu, Xiaodong
2016-02-01
Based on the Darcy-Brinkman-Forchheimer equation, a finite-volume computational model with lattice Boltzmann flux scheme is proposed for incompressible porous media flow in this paper. The fluxes across the cell interface are calculated by reconstructing the local solution of the generalized lattice Boltzmann equation for porous media flow. The time-scaled midpoint integration rule is adopted to discretize the governing equation, which makes the time step limited by the Courant-Friedrichs-Lewy condition. The force term which evaluates the effect of the porous medium is added to the discretized governing equation directly. The numerical simulations of the steady Poiseuille flow, the unsteady Womersley flow, the circular Couette flow, and the lid-driven flow are carried out to verify the present computational model. The obtained results show good agreement with the analytical, finite-difference, and/or previously published solutions.
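The Courant-Friedrichs-Lewy restriction mentioned above bounds the explicit time step by the mesh spacing and the fastest signal speed. A generic sketch of that bound (the Courant number and names are illustrative, not from the paper):

```python
def cfl_time_step(dx, max_speed, courant=0.5):
    """Largest stable explicit time step under the
    Courant-Friedrichs-Lewy condition max_speed * dt / dx <= courant."""
    return courant * dx / max_speed
```

A solver would recompute this bound each step from the current maximum characteristic speed and advance with the smaller of this and any other stability limits.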
Stabilized Finite Elements in FUN3D
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle; Newman, James C.; Karman, Steve L.
2017-01-01
A Streamline Upwind Petrov-Galerkin (SUPG) stabilized finite-element discretization has been implemented as a library into the FUN3D unstructured-grid flow solver. Motivation for the selection of this methodology is given, details of the implementation are provided, and the discretization for the interior scheme is verified for linear and quadratic elements by using the method of manufactured solutions. A methodology is also described for capturing shocks, and simulation results are compared to the finite-volume formulation that is currently the primary method employed for routine engineering applications. The finite-element methodology is demonstrated to be more accurate than the finite-volume technology, particularly on tetrahedral meshes where the solutions obtained using the finite-volume scheme can suffer from adverse effects caused by bias in the grid. Although no effort has been made to date to optimize computational efficiency, the finite-element scheme is competitive with the finite-volume scheme in terms of computer time to reach convergence.
Verification of low-Mach number combustion codes using the method of manufactured solutions
NASA Astrophysics Data System (ADS)
Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz
2007-11-01
Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications on the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
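The method of manufactured solutions can be illustrated on a toy solver: pick an exact solution, derive the forcing term analytically, and confirm the observed order of accuracy on two grids. A self-contained sketch for a 1D Poisson problem with a second-order central difference (a generic MMS demonstration, not the CDP or FUEGO codes):

```python
import math

def mms_error(n):
    """Solve -u'' = f on (0,1) with u(0)=u(1)=0 by second-order
    central differences, where f is manufactured from the chosen
    exact solution u_exact(x) = sin(pi x); returns the max error."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    f = [math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
    a = c = -1.0 / h ** 2          # off-diagonals of the tridiagonal system
    b = 2.0 / h ** 2               # diagonal
    cp = [0.0] * (n + 1)
    dp = [0.0] * (n + 1)
    for i in range(1, n):          # Thomas algorithm, forward sweep
        denom = b - a * cp[i - 1]
        cp[i] = c / denom
        dp[i] = (f[i] - a * dp[i - 1]) / denom
    u = [0.0] * (n + 1)            # boundary values are zero
    for i in range(n - 1, 0, -1):  # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(n + 1))

e_coarse, e_fine = mms_error(20), mms_error(40)
order = math.log(e_coarse / e_fine, 2)   # observed order of accuracy, ~2 here
```

Halving the grid spacing should cut the error by about a factor of four for a second-order scheme; a shortfall in the observed order is exactly the kind of robustness signal the abstract describes.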
Development of adaptive observation strategy using retrospective optimal interpolation
NASA Astrophysics Data System (ADS)
Noh, N.; Kim, S.; Song, H.; Lim, G.
2011-12-01
Retrospective optimal interpolation (ROI) is a method that minimizes cost functions with multiple minima without using adjoint models. Song and Lim (2011) performed experiments to reduce the computational cost of implementing ROI by transforming the control variables into eigenvectors of the background error covariance. We adapt the ROI algorithm to compute sensitivity estimates of severe weather events over the Korean peninsula. The eigenvectors of the ROI algorithm are modified every time observations are assimilated, so the modified eigenvectors show the error distribution of the control variables after assimilation; we can thus estimate the effects of specific observations. To verify the adaptive observation strategy, high-impact weather over the Korean peninsula is simulated and interpreted using the WRF modeling system, and sensitive regions for each high-impact weather event are calculated. The effects of assimilation for each observation type are discussed.
Computation of water hammer protection of modernized pumping station
NASA Astrophysics Data System (ADS)
Himr, Daniel
2014-03-01
A pumping station supplies water for irrigation. Its maximal capacity of 2 × 1.2 m³·s⁻¹ became insufficient, so it was upgraded to 2 × 2 m³·s⁻¹. The paper focuses on the design of protection against water hammer in the case of a sudden pump trip. Numerical simulation of the most dangerous case (pumps delivering the maximal flow rate) showed that the existing air vessels were not able to protect the system and that it would be necessary to add new vessels. Special care was paid to the influence of their connection to the main pipeline, because the resistance of the connection has a significant impact on the scale of pressure pulsations. Finally, a pump trip was performed to verify that the system worked correctly. The test showed that pressure pulsations are lower (better) than the computation predicted; this discrepancy was further analysed.
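Before a full transient simulation, the worst-case surge from a sudden pump trip is commonly bounded with the Joukowsky relation Δp = ρ·a·Δv. A minimal sketch (the numbers are illustrative, not the station's actual parameters):

```python
def joukowsky_surge(rho, wave_speed, dv):
    """Joukowsky pressure rise (Pa) for an instantaneous velocity
    change dv (m/s); rho is fluid density (kg/m^3) and wave_speed
    the pressure-wave speed in the pipe (m/s)."""
    return rho * wave_speed * dv

# e.g. water, a 1000 m/s pressure-wave speed, and a 2 m/s stoppage:
surge = joukowsky_surge(1000.0, 1000.0, 2.0)   # 2.0e6 Pa, i.e. 20 bar
```

Air vessels exist precisely to keep the real transient well below this instantaneous-stoppage bound, which is why their connection resistance matters so much in the simulation.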
High-frequency AC/DC converter with unity power factor and minimum harmonic distortion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wernekinch, E.R.
1987-01-01
The power factor is controlled by adjusting the relative position of the fundamental component of an optimized PWM-type voltage with respect to the supply voltage. Current harmonic distortion is minimized by the use of optimized firing angles for the converter at a frequency where GTO's can be used. This feature makes this approach very attractive at power levels of 100 to 600 kW. To obtain the optimized PWM pattern, a steepest descent digital computer algorithm is used. Digital-computer simulations are performed and a low-power model is constructed and tested to verify the concepts and the behavior of the model. Experimental results show that unity power factor is achieved and that the distortion in the phase currents is 10.4% at 90% of full load. This is less than achievable with sinusoidal PWM, harmonic elimination, hysteresis control, and deadbeat control for the same switching frequency.
Effects of scale and Froude number on the hydraulics of waste stabilization ponds.
Vieira, Isabela De Luna; Da Silva, Jhonatan Barbosa; Ide, Carlos Nobuyoshi; Janzen, Johannes Gérson
2018-01-01
This paper presents the findings from a series of computational fluid dynamics simulations to estimate the effect of scale and Froude number on hydraulic performance and effluent pollutant fraction of scaled waste stabilization ponds designed using Froude similarity. Prior to its application, the model was verified by comparing the computational and experimental results of a model scaled pond, showing good agreement and confirming that the model accurately reproduces the hydrodynamics and tracer transport processes. Our results showed that the scale and the interaction between scale and Froude number have an effect on the hydraulics of ponds. At 1:5 scale, the increase of scale increased short-circuiting and decreased mixing. Furthermore, at 1:10 scale, the increase of scale decreased the effluent pollutant fraction. Since the Reynolds effect cannot be ignored, a ratio of Reynolds and Froude numbers was suggested to predict the effluent pollutant fraction for flows with different Reynolds numbers.
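Froude similarity fixes how model quantities scale with the geometric length ratio: velocity and time scale with its square root, discharge with its 2.5 power. A sketch of these standard scale factors (the helper name is illustrative):

```python
import math

def froude_scale(length_ratio):
    """Froude-similarity scale factors (model / prototype) for a
    geometric length ratio lr = L_model / L_prototype."""
    lr = length_ratio
    return {
        "velocity": math.sqrt(lr),   # Fr = v / sqrt(g L) held constant
        "time": math.sqrt(lr),       # t ~ L / v
        "discharge": lr ** 2.5,      # Q ~ v * L^2
    }

scales = froude_scale(1.0 / 10.0)  # e.g. the 1:10 scale studied above
```

Under this scaling the Reynolds number varies as the 1.5 power of the length ratio, so Froude and Reynolds similarity cannot be satisfied simultaneously, which is why the paper resorts to a Reynolds/Froude ratio to correct for scale.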
NASA Technical Reports Server (NTRS)
Westra, Doug G.; West, Jeffrey S.; Richardson, Brian R.
2015-01-01
Historically, the analysis and design of liquid rocket engines (LREs) has relied on full-scale testing and one-dimensional empirical tools. The testing is extremely expensive and the one-dimensional tools are not designed to capture the highly complex, and multi-dimensional features that are inherent to LREs. Recent advances in computational fluid dynamics (CFD) tools have made it possible to predict liquid rocket engine performance, stability, to assess the effect of complex flow features, and to evaluate injector-driven thermal environments, to mitigate the cost of testing. Extensive efforts to verify and validate these CFD tools have been conducted, to provide confidence for using them during the design cycle. Previous validation efforts have documented comparisons of predicted heat flux thermal environments with test data for a single element gaseous oxygen (GO2) and gaseous hydrogen (GH2) injector. The most notable validation effort was a comprehensive validation effort conducted by Tucker et al. [1], in which a number of different groups modeled a GO2/GH2 single element configuration by Pal et al [2]. The tools used for this validation comparison employed a range of algorithms, from both steady and unsteady Reynolds Averaged Navier-Stokes (U/RANS) calculations, large-eddy simulations (LES), detached eddy simulations (DES), and various combinations. A more recent effort by Thakur et al. [3] focused on using a state-of-the-art CFD simulation tool, Loci/STREAM, on a two-dimensional grid. Loci/STREAM was chosen because it has a unique, very efficient flamelet parameterization of combustion reactions that are too computationally expensive to simulate with conventional finite-rate chemistry calculations. The current effort focuses on further advancement of validation efforts, again using the Loci/STREAM tool with the flamelet parameterization, but this time with a three-dimensional grid. Comparisons to the Pal et al. 
heat flux data will be made for both RANS and hybrid RANS/LES detached eddy simulations (DES). Computational costs will be reported, along with a comparison of accuracy and cost to much less expensive two-dimensional RANS simulations of the same geometry.
Three-Dimensional Simulation of Base Bleed Unit with AP/HTPB Propellant in Fast Cook-off Conditions
NASA Astrophysics Data System (ADS)
Li, Wen-feng; Yu, Yong-gang; Ye, Rui; Yang, Hou-wen
2017-07-01
In this work, a three-dimensional unsteady heat transfer model of a base bleed unit with trilobite ammonium perchlorate (AP)/hydroxyl-terminated polybutadiene (HTPB) composite solid propellant is presented to analyze its cook-off characteristics. Based on the two-step chemical reaction of AP/HTPB propellant, a small-scale cook-off test is established, and a comparison of the experimental and calculated results is made to verify the validity of the computational model. On this basis, a cook-off numerical simulation of the base bleed unit at heating rates of 0.33, 0.58 and 0.83 K/s is presented to investigate the ignition and initiation characteristics. The results show that ignition occurs on the head face of the AP/HTPB propellant and near the internal gas chamber under these conditions. As the heating rate increases, the runaway time decreases and the ignition temperature rises.
Investigation of contact pressure and influence function model for soft wheel polishing.
Rao, Zhimin; Guo, Bing; Zhao, Qingliang
2015-09-20
The tool influence function (TIF) is critical for calculating the dwell-time map to improve form accuracy. We present the TIF for the process of computer-controlled polishing with a soft polishing wheel. In this paper, the static TIF was developed based on the Preston equation. The pressure distribution was verified against the measured removal-spot section profiles. According to the experimental measurements, the pressure distribution simulated by Hertz contact theory was much larger than the real contact pressure, whereas the pressure distribution modeled by the Winkler elastic foundation for a soft polishing wheel matched the real contact pressure. A series of experiments was conducted to obtain the statistical properties of the removal spots, validating the relationships between material removal and processing time, contact pressure, and relative velocity, and to calculate the fitted parameters used to establish the TIF. The developed TIF predicted the removal character for the studied soft wheel polishing process.
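The Preston equation underlying the static TIF states that the removal rate is proportional to the product of local pressure and relative velocity. A minimal sketch assuming P and v constant over the dwell (the coefficient value in the test is illustrative, not a fitted parameter from the paper):

```python
def preston_removal(k_preston, pressure, velocity, dwell_time):
    """Removal depth from the Preston equation dz/dt = k * P * v,
    assuming pressure and velocity are constant over the dwell.
    k_preston: Preston coefficient, pressure in Pa, velocity in m/s."""
    return k_preston * pressure * velocity * dwell_time
```

A full TIF replaces the constants with the spatial pressure distribution (here, the Winkler-foundation profile) and the velocity field of the spinning wheel, then integrates over the contact spot.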
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pingenot, J; Rieben, R; White, D
2005-10-31
We present a computational study of signal propagation and attenuation of a 200 MHz planar loop antenna in a cave environment. The cave is modeled as a straight and lossy random rough wall. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The numerical technique is first verified against theoretical results for a planar loop antenna in a smooth lossy cave. The simulation is then performed for a series of random rough surface meshes in order to generate statistical data for the propagation and attenuation properties of the antenna in a cave environment. Results for the mean and variance of the power spectral density of the electric field are presented and discussed.
Tran-Minh, Nhut; Dong, Tao; Karlsen, Frank
2014-10-01
In this paper, a passive planar micromixer with ellipse-like micropillars is proposed to operate in the laminar flow regime with high mixing efficiency. With a splitting-and-recombination (SAR) concept, the diffusion distance of the fluids in a micromixer with ellipse-like micropillars is decreased, and the space usage of the micromixer in an automatic sample-collection system is minimized. Numerical simulation was conducted to evaluate the performance of the proposed micromixer by solving the governing Navier-Stokes and convection-diffusion equations. With computational fluid dynamics (CFD) software (COMSOL 4.3) we simulated the mixing of fluids in a micromixer with ellipse-like micropillars and in a basic T-type mixer in the laminar flow regime. The efficiency of the proposed micromixer is shown in the numerical results and is verified by measurement results.
Gas-injection-start and shutdown characteristics of a 2-kilowatt to 15-kilowatt Brayton power system
NASA Technical Reports Server (NTRS)
Cantoni, D. A.
1972-01-01
Two methods of starting the Brayton power system have been considered: (1) using the alternator as a motor to spin the Brayton rotating unit (BRU), and (2) spinning the BRU by forced gas injection. The first method requires the use of an auxiliary electrical power source. An alternating voltage is applied to the terminals of the alternator to drive it as an induction motor. Only gas-injection starts are discussed in this report. The gas-injection starting method requires high-pressure gas storage and valves to route the gas flow to provide correct BRU rotation. An analog computer simulation was used to size hardware and to determine safe start and shutdown procedures. The simulation was also used to define the range of conditions for successful startups. Experimental data were also obtained under various test conditions. These data verify the validity of the start and shutdown procedures.
Lagrangian study of transport of subarctic water across the Subpolar Front in the Japan Sea
NASA Astrophysics Data System (ADS)
Prants, Sergey V.; Uleysky, Michael Yu.; Budyansky, Maxim V.
2018-06-01
The southward near-surface transport of transformed subarctic water across the Subpolar Front in the Japan Sea is simulated and analyzed based on altimeter data from January 1, 1993 to December 31, 2017. Computing Lagrangian indicators for a large number of synthetic particles, advected by the AVISO velocity field, we find preferred transport pathways across the Subpolar Front. The southward transport occurs mainly in the central part of the frontal zone due to suitable dispositions of mesoscale eddies promoting propagation of subarctic water to the south. It is documented with the help of Lagrangian origin and L-maps and verified by the tracks of available drifters. The transport of transformed subarctic water to the south is compared with the transport of transformed subtropical water to the north simulated by Prants et al. (Nonlinear Process Geophys 24(1):89-99, 2017c).
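Advecting synthetic particles through a gridded velocity field, as done here with the AVISO data, reduces to integrating dx/dt = u(x). A generic fourth-order Runge-Kutta step (the solid-body rotation test field is illustrative, not the altimetry-derived field):

```python
import math

def rk4_step(pos, vel, dt):
    """One fourth-order Runge-Kutta step advecting a particle at
    pos = (x, y) through a steady velocity field vel(x, y) -> (u, v)."""
    x, y = pos
    k1 = vel(x, y)
    k2 = vel(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = vel(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = vel(x + dt * k3[0], y + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

# Solid-body rotation as a test field: trajectories are circles, so a
# particle should return to its start after one full period.
rotation = lambda x, y: (-y, x)
p = (1.0, 0.0)
n_steps = 1000
for _ in range(n_steps):
    p = rk4_step(p, rotation, 2.0 * math.pi / n_steps)
```

Lagrangian indicators (origin maps, time spent south of the front, etc.) are then statistics accumulated along many such trajectories, with the analytic field replaced by interpolation in the time-dependent altimeter velocities.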
NASA Astrophysics Data System (ADS)
Kim, Bong Sung; Chae, Heeyeop; Chung, Ho Kyoon; Cho, Sung Min
2018-06-01
The electrical and optical properties of tandem organic light-emitting diodes (OLEDs), in which fluorescent and phosphorescent emitting units are connected by an organic charge-generation layer (CGL), were experimentally analyzed. To investigate the internal properties of the tandem OLEDs, we fabricated and compared two single, two homo-tandem, and two hetero-tandem OLEDs using the fluorescent and phosphorescent units. From the experimental results of the OLEDs obtained at the same current density, the voltage across the CGL as well as the individual emission spectra and luminance of each unit of the tandem OLEDs were obtained and compared with theoretical simulation results. The analysis method proposed in this study can be utilized to verify the accuracy of optical or electrical computer simulations of tandem OLEDs and will be useful for understanding their overall electrical and optical characteristics.
Lee, Won-Ho; Lee, Jong-Chul
2018-09-01
A numerical simulation of magnetic nanoparticles in a liquid dielectric was developed to investigate the AC breakdown voltage of magnetic nanofluids as a function of the volume concentration of the magnetic nanoparticles. In prior research, we found that the dielectric breakdown voltage of transformer oil-based magnetic nanofluids was positively or negatively affected by the amount of magnetic nanoparticles under a given dielectric-fluid test condition, and the trajectory of the magnetic nanoparticles in a fabricated chip was visualized to verify the related phenomena via measurements and computations. In this study, a numerical simulation of magnetic nanoparticles in an insulating fluid was developed to model particle tracing for AC breakdown mechanisms occurring in a sphere-sphere electrode configuration and to propose a possible mechanism for the change in breakdown strength due to the behavior of the magnetic nanoparticles under different applied voltages.
Molecular Dynamics implementation of BN2D or 'Mercedes Benz' water model
NASA Astrophysics Data System (ADS)
Scukins, Arturs; Bardik, Vitaliy; Pavlov, Evgen; Nerukh, Dmitry
2015-05-01
Two-dimensional 'Mercedes Benz' (MB) or BN2D water model (Naim, 1971) is implemented in Molecular Dynamics. It is known that the MB model can capture abnormal properties of real water (high heat capacity, minima of pressure and isothermal compressibility, negative thermal expansion coefficient) (Silverstein et al., 1998). In this work formulas for calculating the thermodynamic, structural and dynamic properties in microcanonical (NVE) and isothermal-isobaric (NPT) ensembles for the model from Molecular Dynamics simulation are derived and verified against known Monte Carlo results. The convergence of the thermodynamic properties and the system's numerical stability are investigated. The results qualitatively reproduce the peculiarities of real water making the model a visually convenient tool that also requires less computational resources, thus allowing simulations of large (hydrodynamic scale) molecular systems. We provide the open source code written in C/C++ for the BN2D water model implementation using Molecular Dynamics.
Analysis of a Precambrian resonance-stabilized day length
NASA Astrophysics Data System (ADS)
Bartlett, Benjamin C.; Stevenson, David J.
2016-06-01
During the Precambrian era, Earth's decelerating rotation would have passed through a 21 h period that would have been resonant with the semidiurnal atmospheric thermal tide. Near this point, the atmospheric torque would have been maximized, being comparable in magnitude but opposite in direction to the lunar torque, halting Earth's rotational deceleration and maintaining a constant day length, as detailed by Zahnle and Walker (1987). We develop a computational model to determine the necessary conditions for formation and breakage of this resonant effect. Our simulations show the resonance to be resilient to atmospheric thermal noise but suggest that a sudden atmospheric temperature increase, like the deglaciation following a possible "snowball Earth" near the end of the Precambrian, would break this resonance; the Marinoan and Sturtian glaciations seem the most likely candidates for this event. Our model provides a simulated day length over time that resembles existing paleorotational data, though further data are needed to verify this hypothesis.
Sideways wall force produced during tokamak disruptions
NASA Astrophysics Data System (ADS)
Strauss, H.; Paccagnella, R.; Breslau, J.; Sugiyama, L.; Jardin, S.
2013-07-01
A critical issue for ITER is to evaluate the forces produced on the surrounding conducting structures during plasma disruptions. We calculate the non-axisymmetric ‘sideways’ wall force Fx produced in disruptions. Simulations were carried out of disruptions produced by destabilization of n = 1 modes by a vertical displacement event (VDE). The force depends strongly on γτwall, where γ is the mode growth rate and τwall is the wall penetration time, and is largest for γτwall = constant, which depends on initial conditions. Simulations of disruptions caused by a model of massive gas injection were also performed. It was found that the wall force increases approximately linearly with the displacement from the magnetic axis produced by a VDE. These results are also obtained with an analytical model. Disruptions are accompanied by toroidal variation of the plasma current Iφ. This is caused by toroidal variation of the halo current, as verified computationally and analytically.
Minimal spanning trees at the percolation threshold: a numerical calculation
NASA Astrophysics Data System (ADS)
Sweeney, Sean; Middleton, A. Alan
2013-03-01
Through computer simulations on a hypercubic lattice, we grow minimal spanning trees (MSTs) in up to five dimensions and examine their fractal dimensions. Understanding MSTs is important for studying systems with quenched disorder such as spin glasses. We implement a combination of Prim's and Kruskal's algorithms for finding MSTs in order to reduce memory usage and allow for simulation of larger systems than would otherwise be possible. These fractal objects are analyzed in an attempt to numerically verify predictions of the perturbation expansion developed by T. S. Jackson and N. Read for the pathlength fractal dimension ds of MSTs on percolation clusters at criticality [T. S. Jackson and N. Read, Phys. Rev. E 81, 021131 (2010)]. Examining these trees also sparked the development of an analysis technique for dealing with correlated data that could be easily generalized to other systems and should be a robust method for analyzing a wide array of randomly generated fractal structures. This work was made possible in part by NSF Grant No. DMR-1006731 and by the Syracuse University Gravitation and Relativity computing cluster, which is supported in part by NSF Grant No. PHY-0600953.
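The hybrid Prim/Kruskal scheme mentioned above is specific to the authors' memory optimization; a plain Kruskal's algorithm with union-find already shows the core MST construction on a disordered lattice. A sketch with uniform random bond weights (lattice size and seed are illustrative):

```python
import random

def kruskal_mst(n_nodes, edges):
    """Kruskal's algorithm with union-find (path halving): edges is
    a list of (weight, u, v); returns (total weight, edge count)."""
    parent = list(range(n_nodes))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    total, count = 0.0, 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:               # edge joins two components: keep it
            parent[ru] = rv
            total += w
            count += 1
    return total, count

# MST of a 10x10 square lattice with uniform random bond weights.
random.seed(0)
L = 10
idx = lambda i, j: i * L + j
edges = [(random.random(), idx(i, j), idx(i + 1, j))
         for i in range(L - 1) for j in range(L)]
edges += [(random.random(), idx(i, j), idx(i, j + 1))
          for i in range(L) for j in range(L - 1)]
total, count = kruskal_mst(L * L, edges)  # a spanning tree has L*L - 1 edges
```

Measuring the chemical path length between tree nodes as a function of Euclidean distance over many disorder realizations is what yields the pathlength fractal dimension ds studied in the paper.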
NASA Astrophysics Data System (ADS)
Kruppa, Tobias; Neuhaus, Tim; Messina, René; Löwen, Hartmut
2012-04-01
A binary mixture of particles interacting via long-ranged repulsive forces is studied in gravity by computer simulation and theory. The more repulsive A-particles create a depletion zone of less repulsive B-particles around them reminiscent of a bubble. Applying Archimedes' principle effectively to this bubble, an A-particle can be lifted in a fluid background of B-particles. This "depletion bubble" mechanism explains and predicts a brazil-nut effect where the heavier A-particles float on top of the lighter B-particles. It also implies an effective attraction of an A-particle towards a hard container bottom wall which leads to boundary layering of A-particles. Additionally, we have studied a periodic inversion of gravity causing perpetual mutual penetration of the mixture in a slit geometry. In this nonequilibrium case of time-dependent gravity, the boundary layering persists. Our results are based on computer simulations and density functional theory of a two-dimensional binary mixture of colloidal repulsive dipoles. The predicted effects also occur for other long-ranged repulsive interactions and in three spatial dimensions. They are therefore verifiable in settling experiments on dipolar or charged colloidal mixtures as well as in charged granulates and dusty plasmas.
NASA Astrophysics Data System (ADS)
Chen, Kun; Fu, Xing; Dorantes-Gonzalez, Dante J.; Lu, Zimo; Li, Tingting; Li, Yanning; Wu, Sen; Hu, Xiaotang
2014-07-01
Air pollution has been correlated to an increasing number of cases of human skin diseases in recent years. However, the investigation of human skin tissues has received only limited attention, to the point that there are not yet satisfactory modern detection technologies to accurately, noninvasively, and rapidly diagnose human skin at epidermis and dermis levels. In order to detect and analyze severe skin diseases such as melanoma, a finite element method (FEM) simulation study of the application of the laser-generated surface acoustic wave (LSAW) technique is developed. A three-layer human skin model is built, where LSAWs are generated and propagated, and their effects in the skin medium with melanoma are analyzed. Frequency domain analysis is used as a main tool to investigate such issues as the minimum detectable size of melanoma, filtering spectra from noise and from computational irregularities, as well as how the FEM model meshing size and computational capabilities influence the accuracy of the results. Based on the aforementioned aspects, analysis of the signals under the scrutiny of the phase velocity dispersion curve is verified to be a reliable, sensitive, and promising approach for detecting and characterizing melanoma in human skin.
A momentum source model for wire-wrapped rod bundles—Concept, validation, and application
Hu, Rui; Fanning, Thomas H.
2013-06-19
Large uncertainties still exist in the treatment of wire-spacers and drag models for momentum transfer in current lumped parameter models. Here, to improve the hydraulic modeling of wire-wrap spacers in a rod bundle, a three-dimensional momentum source model (MSM) has been developed to model the anisotropic flow without the need to resolve the geometric details of the wire-wraps. The MSM is examined in steady-state simulations of 7-pin and 37-pin bundles using the commercial CFD code STAR-CCM+. The calculated steady-state inter-subchannel cross flow velocities match very well in comparisons between bare bundles with the MSM applied and wire-wrapped bundles with explicit geometry. The validity of the model is further verified by mesh and parameter sensitivity studies. Furthermore, the MSM is applied to a 61-pin EBR-II experimental subassembly for both steady-state and PLOF transient simulations. Reasonably accurate predictions of temperature, pressure, and fluid flow velocities have been achieved using the MSM for both steady-state and transient conditions. Significant computing resources are saved with the MSM since it can be used on a much coarser computational mesh.
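The momentum-source idea can be illustrated with a deliberately simplified zero-dimensional sketch: a body force S stands in for the transverse cross flow that the wire wrap would induce, balanced here by an assumed linear drag closure. The actual MSM uses calibrated three-dimensional source terms inside STAR-CCM+; everything below is an invented toy.

```python
# Toy momentum-source balance: du/dt = S/rho - c*u, marched explicitly to
# steady state. S plays the role of the wire-wrap source, c an assumed linear
# drag coefficient. Values are illustrative, not MSM correlations.

def cross_flow(source, drag, rho, dt=1e-3, steps=20_000):
    """Explicit time-march of du/dt = source/rho - drag*u to steady state."""
    u = 0.0
    for _ in range(steps):
        u += dt * (source / rho - drag * u)
    return u

u_ss = cross_flow(source=50.0, drag=10.0, rho=1000.0)
# Analytic steady state: u = S / (rho * drag) = 50 / (1000 * 10) = 0.005 m/s
```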
Human Motion Tracking and Glove-Based User Interfaces for Virtual Environments in ANVIL
NASA Technical Reports Server (NTRS)
Dumas, Joseph D., II
2002-01-01
The Army/NASA Virtual Innovations Laboratory (ANVIL) at Marshall Space Flight Center (MSFC) provides an environment where engineers and other personnel can investigate novel applications of computer simulation and Virtual Reality (VR) technologies. Among the many hardware and software resources in ANVIL are several high-performance Silicon Graphics computer systems and a number of commercial software packages, such as Division MockUp by Parametric Technology Corporation (PTC) and Jack by Unigraphics Solutions, Inc. These hardware and software platforms are used in conjunction with various VR peripheral I/O (input/output) devices, CAD (computer aided design) models, etc., to support the objectives of the MSFC Engineering Systems Department/Systems Engineering Support Group (ED42) by studying engineering designs, chiefly from the standpoint of human factors and ergonomics. One of the more time-consuming tasks facing ANVIL personnel involves the testing and evaluation of peripheral I/O devices and the integration of new devices with existing hardware and software platforms. Another important challenge is the development of innovative user interfaces to allow efficient, intuitive interaction between simulation users and the virtual environments they are investigating. As part of his Summer Faculty Fellowship, the author was tasked with verifying the operation of some recently acquired peripheral interface devices and developing new, easy-to-use interfaces that could be used with existing VR hardware and software to better support ANVIL projects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacquelin, Mathias; De Jong, Wibe A.; Bylaska, Eric J.
2017-07-03
The Ab Initio Molecular Dynamics (AIMD) method allows scientists to treat the dynamics of molecular and condensed phase systems while retaining a first-principles-based description of their interactions. This extremely important method has tremendous computational requirements, because the electronic Schrödinger equation, approximated using Kohn-Sham Density Functional Theory (DFT), is solved at every time step. With the advent of manycore architectures, application developers have a significant amount of processing power within each compute node that can only be exploited through massive parallelism. A compute-intensive application such as AIMD forms a good candidate to leverage this processing power. In this paper, we focus on adding thread-level parallelism to the plane wave DFT methodology implemented in NWChem. Through a careful optimization of tall-skinny matrix products, which are at the heart of the Lagrange multiplier and nonlocal pseudopotential kernels, as well as 3D FFTs, our OpenMP implementation delivers excellent strong scaling on the latest Intel Knights Landing (KNL) processor. We assess the efficiency of our Lagrange multiplier kernels by building a Roofline model of the platform, and verify that our implementation is close to the roofline for various problem sizes. Finally, we present strong scaling results on the complete AIMD simulation for a 64-water-molecule test case, which scales up to all 68 cores of the Knights Landing processor.
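A roofline model of the kind used above bounds attainable throughput by the lesser of peak compute rate and arithmetic intensity times memory bandwidth. The sketch below uses rough, assumed KNL-like numbers, not the paper's measured platform parameters.

```python
# Roofline: attainable GFLOP/s = min(peak, arithmetic_intensity * bandwidth).

def roofline(ai_flops_per_byte, peak_gflops, bw_gbs):
    """Attainable performance for a kernel of given arithmetic intensity."""
    return min(peak_gflops, ai_flops_per_byte * bw_gbs)

PEAK, BW = 3000.0, 400.0           # assumed peak GFLOP/s and GB/s, not measured
ridge = PEAK / BW                  # intensity where kernels turn compute-bound

bandwidth_bound = roofline(1.0, PEAK, BW)    # low-intensity kernel (FFT-like)
compute_bound = roofline(16.0, PEAK, BW)     # tall-skinny matmul-like kernel
```

Plotting `roofline(ai, ...)` against `ai` on log-log axes gives the familiar two-segment roofline; a kernel's measured GFLOP/s is then compared against the ceiling at its intensity.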
Atomistic Structure, Strength, and Kinetic Properties of Intergranular Films in Ceramics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garofalini, Stephen H
2015-01-08
Intergranular films (IGFs) present in polycrystalline oxide and nitride ceramics provide an excellent example of nanoconfined glasses that occupy only a small volume percentage of the bulk ceramic, but can significantly influence various mechanical, thermal, chemical, and optical properties. By employing molecular dynamics computer simulations, we have been able to predict structures and the locations of atoms at the crystal/IGF interface that were subsequently verified with the newest electron microscopies. Modification of the chemistry of the crystal surface in the simulations provided the necessary mechanism for adsorption of specific rare earth ions from the IGF in the liquid state to the crystal surface. Such results had eluded other computational approaches such as ab-initio calculations because of the need to include not only the modified chemistry of the crystal surfaces but also an accurate description of the adjoining glassy IGF. This segregation of certain ions from the IGF to the crystal caused changes in the local chemistry of the IGF that affected fracture behavior in the simulations. Additional work with the rare earth ions La and Lu in the silicon oxynitride IGFs showed the mechanisms for their different effects on crystal growth, even though both types of ions are seen adhering to a bounding crystal surface, which would normally imply equivalent effects on grain growth.
Optical implementation of the synthetic discriminant function
NASA Astrophysics Data System (ADS)
Butler, S.; Riggins, J.
1984-10-01
Much attention is focused on the use of coherent optical pattern recognition (OPR) using matched spatial filters for robotics and intelligent systems. The OPR problem consists of three aspects -- information input, information processing, and information output. This paper discusses the information processing aspect which consists of choosing a filter to provide robust correlation with high efficiency. The filter should ideally be invariant to image shift, rotation and scale, provide a reasonable signal-to-noise (S/N) ratio and allow high throughput efficiency. The physical implementation of a spatial matched filter involves many choices. These include the use of conventional holograms or computer-generated holograms (CGH) and utilizing absorption or phase materials. Conventional holograms inherently modify the reference image by non-uniform emphasis of spatial frequencies. Proper use of film nonlinearity provides improved filter performance by emphasizing frequency ranges crucial to target discrimination. In the case of a CGH, the emphasis of the reference magnitude and phase can be controlled independently of the continuous tone or binary writing processes. This paper describes computer simulation and optical implementation of a geometrical shape and a Synthetic Discriminant Function (SDF) matched filter. The authors chose the binary Allebach-Keegan (AK) CGH algorithm to produce actual filters. The performances of these filters were measured to verify the simulation results. This paper provides a brief summary of the matched filter theory, the SDF, CGH algorithms, Phase-Only-Filtering, simulation procedures, and results.
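The core of matched spatial filtering is correlation of the input scene with a reference template; the optical correlator performs this operation at the speed of light, but the operation itself can be sketched digitally. The template, noise level, and embedding offset below are invented for the demonstration and are not the filters fabricated in the paper.

```python
import random

def matched_filter(signal, template):
    """Sliding inner product of the signal with the reference template."""
    m = len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(len(signal) - m + 1)]

random.seed(0)
template = [1.0, -2.0, 3.0, -2.0, 1.0]           # invented reference shape
signal = [random.gauss(0.0, 0.05) for _ in range(100)]
for j, t in enumerate(template):                  # embed the target at offset 40
    signal[40 + j] += t

scores = matched_filter(signal, template)
best = max(range(len(scores)), key=scores.__getitem__)   # correlation peak
```

The correlation peak marks the target location even in noise, which is exactly the robust-correlation property the S/N discussion above is about; 2-D image correlation is the same idea with a double sum.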
Zhang, Jiawen; He, Shaohui; Wang, Dahai; Liu, Yangpeng; Yao, Wenbo; Liu, Xiabing
2018-01-01
Based on the operating Chegongzhuang heat-supplying tunnel in Beijing, the reliability of its lining structure under the action of large thrust and thermal effects is studied. According to the characteristics of heat-supplying tunnel service, a three-dimensional numerical analysis model was established based on mechanical tests of in-situ specimens. The stress and strain of the tunnel structure were obtained before and after the operation. Compared with the field monitoring data, the rationality of the model was verified. After extracting the internal force of the lining structure, an improved subset simulation method was proposed, with the internal force as the performance function, to calculate the reliability of the main control section of the tunnel. In contrast to the traditional calculation method, the analytic relationship between the sample numbers required by the subset simulation method and by the Monte Carlo method was given. The results indicate that the lining structure is greatly influenced by coupling within six meters of the fixed brackets, especially at the tunnel floor. The improved subset simulation method can greatly save computation time and improve computational efficiency while ensuring the accuracy of the calculation. It is suitable for reliability calculations in tunnel engineering, because "the lower the probability, the more efficient the calculation." PMID:29401691
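The quoted observation that "the lower the probability, the more efficient the calculation" can be made concrete with textbook sample-count relations for the two estimators. These are the standard formulas, not the paper's improved variant, and the level size below is an arbitrary choice.

```python
import math

# Crude Monte Carlo needs about (1 - p_f) / (p_f * d^2) samples to estimate a
# failure probability p_f with coefficient of variation d, while subset
# simulation reaches p_f through m conditional levels of probability p0 each,
# costing roughly m * n_per_level samples.

def mc_samples(p_f, d=0.3):
    """Monte Carlo sample count for target coefficient of variation d."""
    return math.ceil((1 - p_f) / (p_f * d ** 2))

def subset_samples(p_f, p0=0.1, n_per_level=500):
    """Subset simulation cost: one batch of samples per conditional level."""
    m = math.ceil(math.log(p_f) / math.log(p0))   # number of simulation levels
    return m * n_per_level

# For p_f = 1e-6, Monte Carlo needs on the order of 1e7 samples, while subset
# simulation needs only a few thousand; the gap widens as p_f shrinks.
n_mc = mc_samples(1e-6)
n_ss = subset_samples(1e-6)
```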
Development of advanced control schemes for telerobot manipulators
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Zhou, Zhen-Lei
1991-01-01
To study space applications of telerobotics, Goddard Space Flight Center (NASA) has recently built a testbed composed mainly of a pair of redundant slave arms having seven degrees of freedom and a master hand controller system. The mathematical developments required for the computerized simulation study and motion control of the slave arms are presented. The slave arm forward kinematic transformation is presented, which is derived using the D-H notation and is then reduced to its most simplified form suitable for real-time control applications. The vector cross product method is then applied to obtain the slave arm Jacobian matrix. Using the developed forward kinematic transformation and the quaternion representation of the slave arm end-effector orientation, computer simulation is conducted to evaluate the efficiency of the Jacobian in converting joint velocities into Cartesian velocities and to investigate the accuracy of the Jacobian pseudo-inverse for various sampling times. In addition, the equivalence between Cartesian velocities and quaternion rates is also verified using computer simulation. The motion control of the slave arm is then examined. Three control schemes, the joint-space adaptive control scheme, the Cartesian adaptive control scheme, and the hybrid position/force control scheme, are proposed for controlling the motion of the slave arm end-effector. Development of the Cartesian adaptive control scheme is presented, and some preliminary results for the remaining control schemes are presented and discussed.
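The equivalence between Cartesian angular velocity and quaternion rates that the simulation verifies rests on the kinematic relation q_dot = 0.5 * (0, omega) ⊗ q, with ⊗ the quaternion product. A minimal numerical check of that relation, assuming a Hamilton product convention and rotation about a fixed axis (this is an illustrative sketch, not the testbed's actual code):

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def q_of_t(w, t):
    """Closed-form orientation quaternion for spin rate w (rad/s) about z."""
    half = 0.5 * w * t
    return (math.cos(half), 0.0, 0.0, math.sin(half))

w, t, h = 0.7, 1.3, 1e-6
q = q_of_t(w, t)
# Central-difference derivative of q(t)...
q_dot_num = [(a - b) / (2 * h)
             for a, b in zip(q_of_t(w, t + h), q_of_t(w, t - h))]
# ...versus the kinematic relation q_dot = 0.5 * (0, omega) (x) q
q_dot_kin = [0.5 * c for c in qmul((0.0, 0.0, 0.0, w), q)]
```

The two derivatives agree to numerical precision, which is the discrete analogue of the equivalence the abstract verifies by simulation.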
A generalised porous medium approach to study thermo-fluid dynamics in human eyes.
Mauro, Alessandro; Massarotti, Nicola; Salahudeen, Mohamed; Romano, Mario R; Romano, Vito; Nithiarasu, Perumal
2018-03-22
The present work describes the application of the generalised porous medium model to study heat and fluid flow in healthy and glaucomatous eyes of different subject specimens, considering the presence of ocular cavities and porous tissues. The 2D computational model, implemented in the open-source software OpenFOAM, has been verified against benchmark data for mixed convection in domains partially filled with a porous medium. The verified model has been employed to simulate the thermo-fluid dynamic phenomena occurring in the anterior section of four patient-specific human eyes, considering the presence of the anterior chamber (AC), trabecular meshwork (TM), Schlemm's canal (SC), and collector channels (CC). The computational domains of the eye are extracted from tomographic images. The dependence of TM porosity and permeability on intraocular pressure (IOP) has been analysed in detail, and the differences between healthy and glaucomatous eye conditions have been highlighted, proving that the different physiological conditions of patients have a significant influence on the thermo-fluid dynamic phenomena. The influence of different eye positions (supine and standing) on thermo-fluid dynamic variables has also been investigated: results are presented in terms of velocity, pressure, temperature, friction coefficient, and local Nusselt number. The results clearly indicate that porosity and permeability of the TM are two important parameters that affect eye pressure distribution. Graphical abstract: Velocity contours and vectors for healthy eyes (top) and glaucomatous eyes (bottom) in the standing position.
An Efficient Location Verification Scheme for Static Wireless Sensor Networks.
Kim, In-Hwan; Kim, Bo-Sung; Song, JooSeok
2017-01-24
In wireless sensor networks (WSNs), the accuracy of location information is vital to support many interesting applications. Unfortunately, sensors have difficulty in estimating their location when malicious sensors attack the location estimation process. Even though secure localization schemes have been proposed to protect the location estimation process from attacks, they are not enough to eliminate wrong location estimations in some situations. Location verification can be the solution to these situations, or serve as a second line of defense. The problem with most location verification schemes is the explicit involvement of many sensors in the verification process and their requirements, such as special hardware, a dedicated verifier, and a trusted third party, which cause additional communication and computation overhead. In this paper, we propose an efficient location verification scheme for static WSNs called mutually-shared region-based location verification (MSRLV), which reduces these overheads by utilizing the implicit involvement of sensors and eliminating several requirements. In order to achieve this, we use the mutually-shared region between the location claimant and the verifier for location verification. The analysis shows that MSRLV reduces communication overhead by 77% and computation overhead by 92% on average, when compared with other location verification schemes, for a single sensor verification. In addition, simulation results for the verification of the whole network show that MSRLV can detect malicious sensors in over 90% of cases when sensors in the network have five or more neighbors.
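A toy geometric illustration of the mutually-shared region idea (the real MSRLV protocol involves ranging and message exchange, which this omits entirely): a claimed position is plausible only if it falls inside the intersection of the claimant's and verifier's communication disks.

```python
import math

# Accept a claimed location only if it lies within radio range r of BOTH the
# claimant's and the verifier's known positions, i.e. inside the lens-shaped
# intersection of the two disks. Positions and range are invented.

def in_shared_region(point, claimant, verifier, r):
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(point, claimant) <= r and dist(point, verifier) <= r

claimant, verifier, r = (0.0, 0.0), (8.0, 0.0), 5.0
inside = in_shared_region((4.0, 1.0), claimant, verifier, r)    # plausible claim
outside = in_shared_region((9.0, 4.0), claimant, verifier, r)   # rejected claim
```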
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
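The flavor of such round-off bounds can be shown with the standard model of floating point, under which each IEEE operation contributes a relative error of at most u = 2^-53. This back-of-the-envelope check is not PRECiSA's certified semantics, only the first-order bound it refines.

```python
from fractions import Fraction
import sys

# For an expression with two operations, like (a + b) * c, the standard model
# gives a relative error of at most (1 + u)^2 - 1, which we over-approximate
# by 2u / (1 - 2u). Exact rational arithmetic provides the reference value.

U = sys.float_info.epsilon / 2            # unit round-off for binary64

def rel_error_sum_mul(a, b, c):
    """Exact relative error of the binary64 evaluation of (a + b) * c."""
    exact = (Fraction(a) + Fraction(b)) * Fraction(c)   # inputs are exact doubles
    computed = (a + b) * c                              # evaluated in binary64
    return abs(Fraction(computed) - exact) / abs(exact)

bound = 2 * U / (1 - 2 * U)               # a-priori bound, about 2.2e-16
err = rel_error_sum_mul(0.1, 0.2, 0.3)    # actual error stays within the bound
```

PRECiSA tightens and certifies bounds of this kind per expression, whereas the n*u rule above grows pessimistically with the number of operations.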
RFI and SCRIMP Model Development and Verification
NASA Technical Reports Server (NTRS)
Loos, Alfred C.; Sayre, Jay
2000-01-01
Vacuum-Assisted Resin Transfer Molding (VARTM) processes are becoming promising technologies for the manufacturing of primary composite structures in the aircraft industry as well as in infrastructure. A great deal of work still needs to be done to reduce the costly trial-and-error methods of VARTM processing that are currently in practice today. A computer simulation model of the VARTM process would provide a cost-effective tool for the manufacturing of composites utilizing this technique. Therefore, the objective of this research was to modify an existing three-dimensional Resin Film Infusion (RFI)/Resin Transfer Molding (RTM) model to include VARTM simulation capabilities and to verify this model through the fabrication of aircraft structural composites. An additional objective was to use the VARTM model as a process analysis tool that would enable the user to configure the best process for manufacturing quality composites. Experimental verification of the model was performed by processing several flat composite panels. The parameters verified included flow front patterns and infiltration times. The flow front patterns were determined to be qualitatively accurate, while the simulated infiltration times overpredicted the experimental times by 8 to 10%. Capillary and gravitational forces were incorporated into the existing RFI/RTM model in order to simulate VARTM processing physics more accurately. The theoretical capillary pressure showed the capability to reduce the simulated infiltration times by as much as 6%. Gravity, on the other hand, was found to be negligible for all cases. Finally, the VARTM model was used as a process analysis tool. This enabled the user to determine such important process constraints as the location and type of injection ports and the permeability and location of the high-permeability media. A process for a three-stiffener composite panel was proposed.
This configuration evolved from the variation of the process constraints in the modeling of several different composite panels. The configuration was proposed by considering such factors as: infiltration time, the number of vacuum ports, and possible areas of void entrapment.
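Infiltration times of the kind verified above are often sanity-checked against the one-dimensional Darcy fill-time formula t = phi * mu * L^2 / (2 * K * dP). The property values below are illustrative assumptions, not the inputs of the RFI/RTM model.

```python
# One-dimensional Darcy fill-time estimate for resin infusion: the flow front
# reaches length L after t = phi * mu * L^2 / (2 * K * dP), assuming constant
# driving pressure and permeability. All property values are illustrative.

def fill_time(L, phi, mu, K, dP):
    """Time (s) for resin to infiltrate a length L (m) of preform."""
    return phi * mu * L ** 2 / (2 * K * dP)

t = fill_time(L=0.3,        # 30 cm panel length
              phi=0.5,      # preform porosity
              mu=0.2,       # resin viscosity, Pa*s
              K=1e-10,      # preform permeability, m^2
              dP=1e5)       # vacuum-driven pressure difference, Pa
# t = 0.5 * 0.2 * 0.09 / (2 * 1e-10 * 1e5) = 450 s, about 7.5 minutes
```

The quadratic dependence on L and the inverse dependence on K are why the location of injection ports and of the high-permeability media dominate the process design discussed above.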
NASA Astrophysics Data System (ADS)
Nagaso, Masaru; Komatitsch, Dimitri; Moysan, Joseph; Lhuillier, Christian
2018-01-01
The ASTRID project, a French fourth-generation sodium-cooled nuclear reactor, is currently under development by the French Alternative Energies and Atomic Energy Commission (CEA). In this project, the development of techniques for monitoring the reactor during operation is identified as a major issue for enhancing plant safety. Ultrasonic measurement techniques (e.g., thermometry, visualization of internal objects) are regarded as powerful inspection tools for sodium-cooled fast reactors (SFRs), including ASTRID, because liquid sodium is opaque. Inside a sodium cooling circuit, the medium becomes heterogeneous because of the complex flow state, especially during operation, and the effects of this heterogeneity on acoustic propagation are not negligible. Verification experiments are therefore necessary for the development of component technologies, but such experiments in liquid sodium tend to be relatively large-scale. Numerical simulation methods are thus essential, both to precede real experiments and to supplement the limited number of experimental results. Although various numerical methods have been applied to wave propagation in liquid sodium, no method has yet been verified for three-dimensional heterogeneity. Moreover, inside a reactor core, which is a complex acousto-elastic coupled region, such problems have been difficult to simulate with conventional methods. The objective of this study is to address these two points by applying the three-dimensional spectral element method. In this paper, our initial results on the three-dimensional simulation of a heterogeneous medium (the first point) are shown. To represent the heterogeneity of liquid sodium, a four-dimensional temperature field (three spatial dimensions and one temporal dimension) computed by computational fluid dynamics (CFD) with large-eddy simulation was applied, instead of the conventional Gaussian random field approach. 
This three-dimensional numerical experiment shows that the effects of medium heterogeneity on wave propagation in liquid sodium can be verified.
Election Verifiability: Cryptographic Definitions and an Analysis of Helios and JCJ
2015-04-01
Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin
2015-01-15
Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU, with up to a 22× speedup depending on the computational task. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To the best of our knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision.