Sample records for powerful computational models

  1. Modeling and Analysis of Power Processing Systems [use of a digital computer for designing power plants]

    NASA Technical Reports Server (NTRS)

    Fegley, K. A.; Hayden, J. H.; Rehmann, D. W.

    1974-01-01

    The feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems is investigated. It is shown that a digital computer may be used in an interactive mode for the design, modeling, analysis, and comparison of power processing systems.

  2. A dc model for power switching transistors suitable for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Wilson, P. M.; George, R. T., Jr.; Owen, H. A., Jr.; Wilson, T. G.

    1979-01-01

    The proposed dc model for bipolar junction power switching transistors is based on measurements which may be made with standard laboratory equipment. Those nonlinearities which are of importance to power electronics design are emphasized. Measurement procedures are discussed in detail. A model formulation adapted for use with a computer program is presented, and a comparison between actual and computer-generated results is made.

  3. Using a cloud to replenish parched groundwater modeling efforts.

    PubMed

    Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  4. Using a cloud to replenish parched groundwater modeling efforts

    USGS Publications Warehouse

    Hunt, Randall J.; Luchette, Joseph; Schreuder, Willem A.; Rumbaugh, James O.; Doherty, John; Tonkin, Matthew J.; Rumbaugh, Douglas B.

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate “virtual” computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.
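
    The "create, launch, and terminate" workflow these two records describe maps directly onto modern cloud SDKs. The sketch below uses AWS EC2 via boto3 as one possible backend; the region, AMI ID, instance type, and worker count are hypothetical placeholders, not details from the articles.

    ```python
    # Minimal sketch of the pay-by-the-hour cloud workflow described above,
    # using AWS EC2 via boto3 as one concrete backend. The AMI ID, instance
    # type, and worker count are hypothetical placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a batch of "virtual" computers for a parallel calibration run.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical saved machine image with the model installed
        InstanceType="c5.xlarge",
        MinCount=8, MaxCount=8,           # e.g., eight workers for parallel parameter runs
    )
    ids = [i["InstanceId"] for i in resp["Instances"]]

    # ... dispatch model runs to the workers and collect results ...

    # Terminate the instances when the calibration finishes, so billing stops.
    ec2.terminate_instances(InstanceIds=ids)
    ```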

  5. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
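
    The contrast the abstract draws can be made concrete with the standard design-effect approximation for a two-level (cluster-randomized) design. The sketch below is a generic normal-approximation power calculation, not the article's tables; all parameter values are illustrative.

    ```python
    # Illustrative power computation for a balanced two-level cluster-randomized
    # design, using the standard design-effect approximation. All parameter
    # values are hypothetical, not taken from the article.
    from scipy.stats import norm

    def crt_power(d, J, n, rho, alpha=0.05):
        """Approximate power to detect standardized effect d with J clusters
        total (split evenly across two arms), n subjects per cluster, and
        intraclass correlation rho."""
        deff = 1 + (n - 1) * rho                 # design effect for clustering
        lam = d / (2 * (deff / (J * n)) ** 0.5)  # noncentrality (normal approx.)
        z_crit = norm.ppf(1 - alpha / 2)
        return norm.sf(z_crit - lam) + norm.cdf(-z_crit - lam)

    print(crt_power(d=0.30, J=40, n=20, rho=0.10))  # ~0.70 for these values
    ```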

  6. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.

  7. Computer modeling and simulators as part of university training for NPP operating personnel

    NASA Astrophysics Data System (ADS)

    Volman, M.

    2017-01-01

    This paper considers aspects of a program for training future nuclear power plant personnel developed by the NPP Department of Ivanovo State Power Engineering University. Computer modeling is used for numerical experiments on the kinetics of nuclear reactors in Mathcad. Simulation modeling is carried out on the computer and full-scale simulator of water-cooled power reactor for the simulation of neutron-physical reactor measurements and the start-up - shutdown process.

  8. Power combining in an array of microwave power rectifiers

    NASA Technical Reports Server (NTRS)

    Gutmann, R. J.; Borrego, J. M.

    1979-01-01

    This work analyzes the resultant efficiency degradation when identical rectifiers operate at different RF power levels caused by the power beam taper. Both a closed-form analytical circuit model and a detailed computer-simulation model are used to obtain the output dc load line of the rectifier. The efficiency degradation is nearly identical with series and parallel combining, and the closed-form analytical model provides results which are similar to those of the detailed computer-simulation model.

  9. Using 3D infrared imaging to calibrate and refine computational fluid dynamic modeling for large computer and data centers

    NASA Astrophysics Data System (ADS)

    Stockton, Gregory R.

    2011-05-01

    Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of data computation in a computer center is becoming limited by the amount of available power, space, and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are spent annually to keep data centers cool. Cooling and air flows change dynamically during construction and over time, departing from any 3-D computational fluid dynamic modeling predicted beforehand, so the efficiency and effectiveness of the actual cooling drift ever farther from the predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and make appropriate corrections and repairs, the required power for data centers can be dramatically reduced, which lowers costs and also improves reliability.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deline, C.

    Computer modeling is able to predict the performance of distributed power electronics (microinverters, power optimizers) in PV systems. However, details about partial shade and other mismatch must be known in order to give the model accurate information to go on. This talk will describe recent updates in NREL's System Advisor Model program to model partial shading losses with and without distributed power electronics, along with experimental validation results.

  11. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool both for the field engineer/planner with limited computational resources and for the expert computational researcher less constrained by time and computer power. Several analytical and numerical computer models have been and are being developed to cover the practical needs put forth by users across this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.

  12. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud-system-resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra-high-resolution climate modeling. These power-efficient processors, used in consumer electronic devices such as mobile phones, portable music players, and cameras, can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer-scale climate model a thousand times faster than real time could be designed and built on a five-year time scale for US$75M, with a power consumption of 3 MW. This is cheaper, more power efficient, and available sooner than any other existing technology.
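
    A quick check of the efficiency implied by the figures quoted in this abstract, assuming "exascale" means one exaflop/s at the stated 3 MW:

    ```python
    # Back-of-envelope efficiency implied by the abstract's figures.
    flops = 1e18          # 1 exaflop/s (assumed interpretation of "exascale")
    power_w = 3e6         # 3 MW, as stated in the abstract
    print(flops / power_w / 1e9, "Gflop/s per watt")  # ~333 Gflop/s per watt
    ```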

  13. Phosphoric acid fuel cell power plant system performance model and computer program

    NASA Technical Reports Server (NTRS)

    Alkasab, K. A.; Lu, C. Y.

    1984-01-01

    A FORTRAN computer program was developed for analyzing the performance of phosphoric acid fuel cell power plant systems. Energy, mass, and electrochemical analyses in the reformer, the shift converters, the heat exchangers, and the fuel cell stack were combined to develop a mathematical model of the power plant for both atmospheric and pressurized conditions and for several commercial fuels.

  14. Beam and Plasma Physics Research

    DTIC Science & Technology

    1990-06-01

    ...in high-power microwave computations and theory and high-energy plasma computations and theory. The HPM computations concentrated on... 2.1 Report Index... 2.2 Task Area 2: High-Power RF Emission and Charged-Particle Beam Physics Computation, Modeling and Theory... 2.2.1 Subtask 02-01... 2.2.5 Subtask 02-05, Vulnerability of Space Assets... 2.2.6 Subtask 02-06, Microwave Computer Program Enhancements... 2.2.7 Subtask 02-07, High-Power Microwave Transvertron Design

  15. Computational models of an inductive power transfer system for electric vehicle battery charge

    NASA Astrophysics Data System (ADS)

    Anele, A. O.; Hamam, Y.; Chassagne, L.; Linares, J.; Alayli, Y.; Djouani, K.

    2015-09-01

    One of the issues to be solved for electric vehicles (EVs) to become a success is the technical solution of its charging system. In this paper, computational models of an inductive power transfer (IPT) system for EV battery charge are presented. Based on the fundamental principles behind IPT systems, 3 kW single phase and 22 kW three phase IPT systems for Renault ZOE are designed in MATLAB/Simulink. The results obtained based on the technical specifications of the lithium-ion battery and charger type of Renault ZOE show that the models are able to provide the total voltage required by the battery. Also, considering the charging time for each IPT model, they are capable of delivering the electricity needed to power the ZOE. In conclusion, this study shows that the designed computational IPT models may be employed as a support structure needed to effectively power any viable EV.
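
    As a rough plausibility check of the two designs named in the abstract, the arithmetic below assumes a nominal 22 kWh battery pack (the figure commonly cited for the early Renault ZOE) and ignores charging losses and the taper near full charge.

    ```python
    # Rough charging-time check for the single-phase 3 kW and three-phase 22 kW
    # IPT designs, assuming a nominal 22 kWh pack and no losses or charge taper.
    battery_kwh = 22.0
    for p_kw in (3.0, 22.0):
        print(f"{p_kw:>4} kW -> ~{battery_kwh / p_kw:.1f} h for a full charge")
    # 3 kW -> ~7.3 h; 22 kW -> ~1.0 h
    ```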

  16. A dc model for power switching transistors suitable for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Wilson, P. M.; George, R. T., Jr.; Owen, H. A.; Wilson, T. G.

    1979-01-01

    A model for bipolar junction power switching transistors whose parameters can be readily obtained by the circuit design engineer, and which can be conveniently incorporated into standard computer-based circuit analysis programs is presented. This formulation results from measurements which may be made with standard laboratory equipment. Measurement procedures, as well as a comparison between actual and computed results, are presented.

  17. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the number of plates per stack. The nonlinear programming code COMPUTE was used to solve this model, in which the method of mixed penalty function combined with Hooke and Jeeves pattern search was chosen to evaluate this specific optimization problem.
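
    The abstract names Hooke and Jeeves pattern search as the inner optimizer. Below is a minimal, generic sketch of that direct-search method, not the COMPUTE code; a penalized objective would simply be passed in as f.

    ```python
    # Minimal Hooke-Jeeves pattern search (generic sketch, not NASA's COMPUTE
    # code). Penalty terms for constraints would be folded into f by the caller.
    import numpy as np

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(max_iter):
            # Exploratory move: probe +/- step along each coordinate axis.
            xe, fe = x.copy(), fx
            for i in range(len(x)):
                for d in (step, -step):
                    trial = xe.copy()
                    trial[i] += d
                    ft = f(trial)
                    if ft < fe:
                        xe, fe = trial, ft
                        break
            if fe < fx:
                # Pattern move: jump further along the successful direction.
                xp = 2.0 * xe - x
                fp = f(xp)
                x, fx = (xp, fp) if fp < fe else (xe, fe)
            else:
                step *= shrink            # no improvement: refine the step size
                if step < tol:
                    break
        return x, fx

    # Example: minimize a simple quadratic; a penalized objective works the same way.
    xmin, fmin = hooke_jeeves(lambda v: (v[0] - 1.0)**2 + 10.0*(v[1] + 2.0)**2, [0.0, 0.0])
    print(xmin, fmin)   # converges near (1, -2)
    ```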

  18. Impact of remote sensing upon the planning, management, and development of water resources

    NASA Technical Reports Server (NTRS)

    Loats, H. L.; Fowler, T. R.; Frech, S. L.

    1974-01-01

    A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.

  19. Faster than Real-Time Dynamic Simulation for Large-Size Power System with Detailed Dynamic Models using High-Performance Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Jin, Shuangshuang; Chen, Yousu

    This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-size power system dynamic simulation. It was developed on the GridPACKTM high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.

  20. Temperature Distribution Within a Defect-Free Silicon Carbide Diode Predicted by a Computational Model

    NASA Technical Reports Server (NTRS)

    Kuczmarski, Maria A.; Neudeck, Philip G.

    2000-01-01

    Most solid-state electronic devices (diodes, transistors, and integrated circuits) are based on silicon. Although this material works well for many applications, its properties limit its ability to function under extreme high-temperature or high-power operating conditions. Silicon carbide (SiC), with its desirable physical properties, could someday replace silicon for these types of applications. A major roadblock to realizing this potential is the quality of SiC material that can currently be produced. Semiconductors require very uniform, high-quality material, and commercially available SiC tends to suffer from defects in the crystalline structure that have largely been eliminated in silicon. In some power circuits, these defects can focus energy into an extremely small area, leading to overheating that can damage the device. In an effort to better understand the way that these defects affect the electrical performance and reliability of a SiC device in a power circuit, the NASA Glenn Research Center at Lewis Field began an in-house three-dimensional computational modeling effort. The goal is to predict the temperature distributions within a SiC diode structure subjected to the various transient overvoltage breakdown stresses that occur in power management circuits. A commercial computational fluid dynamics computer program (FLUENT; Fluent, Inc., Lebanon, New Hampshire) was used to build a model of a defect-free SiC diode and generate a computational mesh. A typical breakdown power density was applied over 0.5 msec in a heated layer at the junction between the p-type SiC and n-type SiC, and the temperature distribution throughout the diode was then calculated. The peak temperature extracted from the computational model agreed well (within 6 percent) with previous first-order calculations of the maximum expected temperature at the end of the breakdown pulse. This level of agreement is excellent for a model of this type and indicates that three-dimensional computational modeling can provide useful predictions for this class of problem. The model is now being extended to include the effects of crystal defects. The model will provide unique insights into how high the temperature rises in the vicinity of the defects in a diode at various power densities and pulse durations. This information also will help researchers in understanding and designing SiC devices for safe and reliable operation in high-power circuits.

  1. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
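
    The "number of power flows to solve" challenge is easy to quantify: a yearlong run at 1-second resolution is about 31.5 million sequential power flows. The per-flow solve times below are assumptions chosen to bracket the 10-120 hour figure quoted in the abstract.

    ```python
    # Scale of the QSTS burden: one year at 1-second resolution, with the
    # per-power-flow solve time as the free (assumed) parameter.
    steps = 365 * 24 * 3600                    # ~31.5 million sequential power flows
    for ms_per_flow in (1, 5, 15):             # hypothetical solve times per flow
        print(f"{ms_per_flow} ms/flow -> {steps * ms_per_flow / 1000 / 3600:.0f} h total")
    # 1 ms -> ~9 h, 5 ms -> ~44 h, 15 ms -> ~131 h, bracketing the quoted 10-120 h
    ```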

  2. GPS synchronized power system phase angle measurements

    NASA Astrophysics Data System (ADS)

    Wilson, Robert E.; Sterlina, Patrick S.

    1994-09-01

    This paper discusses the use of Global Positioning System (GPS) synchronized equipment for the measurement and analysis of key power system quantities. Two GPS-synchronized phasor measurement units (PMUs) were installed before testing. The results indicated that the PMUs recorded the dynamic response of the power system phase angles when the northern California power grid was excited by the artificial short circuits. Power system planning engineers perform detailed computer-generated simulations of the dynamic response of the power system to naturally occurring short circuits. The computer simulations use models of transmission lines, transformers, circuit breakers, and other high-voltage components. This work compares computer simulations of the same event with the field measurements.

  3. Quantum Computation

    NASA Astrophysics Data System (ADS)

    Aharonov, Dorit

    In the last few years, theoretical study of quantum systems serving as computational devices has achieved tremendous progress. We now have strong theoretical evidence that quantum computers, if built, might be used as a dramatically powerful computational tool, capable of performing tasks which seem intractable for classical computers. This review tells the story of theoretical quantum computation. I left out the developing topic of experimental realizations of the model and neglected other closely related topics, namely quantum information and quantum communication. As a result of narrowing the scope of this paper, I hope it has gained the benefit of being an almost self-contained introduction to the exciting field of quantum computation. The review begins with background on theoretical computer science, Turing machines, and Boolean circuits. In light of these models, I define quantum computers and discuss the issue of universal quantum gates. Quantum algorithms, including Shor's factorization algorithm and Grover's algorithm for searching databases, are explained. I devote much attention to understanding what the origins of the quantum computational power are, and what the limits of this power are. Finally, I describe the recent theoretical results which show that quantum computers maintain their complexity power even in the presence of noise, inaccuracies, and finite precision. This question cannot be separated from that of quantum complexity because any realistic model will inevitably be subject to such inaccuracies. I tried to put all results in their context, asking what the implications for other issues in computer science and physics are. At the end of this review, I make these connections explicit by discussing the possible implications of quantum computation for fundamental physical questions such as the transition from quantum to classical physics.

  4. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In the method, objective functions combining symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces a given model's system of differential equations to an equivalent system. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.

  5. A Prototype of Pilot Knowledge Evaluation by an Intelligent CAI (Computer -Aided Instruction) System Using a Bayesian Diagnostic Model.

    DTIC Science & Technology

    1987-06-01

    to a field of research called Computer-Aided Instruction (CAI). CAI is a powerful methodology for enhancing the overall quality and effectiveness of... provides a very powerful tool for statistical inference, especially when pooling information from different sources is appropriate. Thus, prior... The power of the model lies in its ability to adapt a diagnostic session to the level of knowledge

  6. Computer modelling of technogenic thermal pollution zones in large water bodies

    NASA Astrophysics Data System (ADS)

    Parshakova, Ya N.; Lyubimova, T. P.

    2018-01-01

    In the present work, the thermal pollution zones created by the discharge of heated water from thermal power plants are investigated using the example of the Permskaya Thermal Power Plant (Permskaya TPP or Permskaya GRES), one of the largest thermal power plants in Europe. The study is performed for different technological and hydrometeorological conditions. Since the vertical temperature distribution in such water bodies is highly inhomogeneous, the computations are performed in the framework of a 3D model.

  7. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of millions of computers on the Internet, and use them to run large-scale environmental simulations and models serving the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites for visitors to volunteer their computing resources, contributing to runs of advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.

  8. Operations research investigations of satellite power stations

    NASA Technical Reports Server (NTRS)

    Cole, J. W.; Ballard, J. L.

    1976-01-01

    A systems model reflecting the design concepts of Satellite Power Stations (SPS) was developed. The model is of sufficient scope to include the interrelationships of the following major design parameters: the transportation to and between orbits; assembly of the SPS; and maintenance of the SPS. The systems model is composed of a set of equations that are nonlinear with respect to the system parameters and decision variables. The model determines a figure of merit from which alternative concepts concerning transportation, assembly, and maintenance of satellite power stations are studied. A hybrid optimization model was developed to optimize the system's decision variables. The optimization model consists of a random search procedure and the optimal-steepest descent method. A FORTRAN computer program was developed to enable the user to optimize nonlinear functions using the model. Specifically, the computer program was used to optimize Satellite Power Station system components.

  9. Agent-Based Multicellular Modeling for Predictive Toxicology

    EPA Science Inventory

    Biological modeling is a rapidly growing field that has benefited significantly from recent technological advances, expanding traditional methods with greater computing power, parameter-determination algorithms, and the development of novel computational approaches to modeling bi...

  10. Computing the Power-Density Spectrum for an Engineering Model

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1982-01-01

    A computer program for calculating the power-density spectrum (PDS) from a data base generated by the Advanced Continuous Simulation Language (ACSL) uses an algorithm that employs the fast Fourier transform (FFT) to calculate the PDS of a variable. This is accomplished by first estimating the autocovariance function of the variable and then taking the FFT of the smoothed autocovariance function to obtain the PDS. The fast-Fourier-transform technique conserves computer resources.
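
    The procedure outlined (estimate the autocovariance, smooth it, take its FFT) is the classic Blackman-Tukey-style estimator. The sketch below is a generic NumPy rendering of that idea, not the ACSL program; the test signal and lag window are illustrative.

    ```python
    # Blackman-Tukey-style PDS estimate: autocovariance -> lag window -> FFT.
    # Generic sketch only; the signal and Hann lag window are illustrative.
    import numpy as np

    def power_density_spectrum(x, max_lag):
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = len(x)
        # Biased autocovariance estimate at lags 0..max_lag.
        acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
        acov *= np.hanning(2 * max_lag + 1)[max_lag:]   # lag window = smoothing
        # Even extension of the autocovariance, then FFT gives a (nearly) real PDS.
        ext = np.concatenate([acov, acov[-2:0:-1]])
        return np.real(np.fft.fft(ext))[: max_lag + 1]

    fs = 100.0
    t = np.arange(0, 20, 1 / fs)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)
    pds = power_density_spectrum(x, max_lag=128)        # spectral peak near 5 Hz
    ```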

  11. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  12. Expression Templates for Truncated Power Series

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Shasharina, Svetlana G.

    1997-05-01

    Truncated power series are used extensively in accelerator transport modeling for rapid tracking and analysis of nonlinearity. Such mathematical objects are naturally represented computationally as objects in C++. This is more intuitive and produces more transparent code through operator overloading. However, C++ object use often comes with a computational speed loss due, e.g., to the creation of temporaries. We have developed a subset of truncated power series expression templates (http://monet.uwaterloo.ca/blitz/). Such expression templates use the powerful template processing facility of C++ to combine complicated expressions into series operations that execute more rapidly. We compare computational speeds with existing truncated power series libraries.
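
    The technique itself lives in C++ templates, but the underlying object is easy to show: a truncated power series with overloaded arithmetic. The Python sketch below illustrates the semantics (truncated Cauchy product) only; it does not reproduce the expression-template elimination of temporaries.

    ```python
    # The arithmetic that the C++ expression templates accelerate: truncated
    # power series with overloaded operators. Semantics only, not the template
    # optimization itself.
    class TPS:
        def __init__(self, coeffs, order):
            self.order = order
            self.c = list(coeffs)[: order + 1] + [0.0] * (order + 1 - len(coeffs))

        def __add__(self, other):
            return TPS([a + b for a, b in zip(self.c, other.c)], self.order)

        def __mul__(self, other):
            # Cauchy product, truncated at self.order.
            out = [0.0] * (self.order + 1)
            for i, a in enumerate(self.c):
                for j in range(self.order + 1 - i):
                    out[i + j] += a * other.c[j]
            return TPS(out, self.order)

    # (1 + x) * (1 - x + x^2) = 1 + x^3, so truncation at order 2 leaves [1, 0, 0].
    p = TPS([1.0, 1.0], order=2)
    q = TPS([1.0, -1.0, 1.0], order=2)
    print((p * q).c)   # [1.0, 0.0, 0.0]
    ```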

  13. Modeling of power electronic systems with EMTP

    NASA Technical Reports Server (NTRS)

    Tam, Kwa-Sur; Dravid, Narayan V.

    1989-01-01

    In view of the potential impact of power electronics on power systems, there is a need for a computer modeling/analysis tool to perform simulation studies on power systems with power electronic components, as well as to educate engineering students about such systems. The modeling of the major power electronic components of the NASA Space Station Freedom Electric Power System is described along with the ElectroMagnetic Transients Program (EMTP), and it is demonstrated that EMTP can serve as a very useful tool for teaching, design, analysis, and research in the area of power systems with power electronic components. EMTP modeling of power electronic circuits is described and simulation results are presented.

  14. On the Computational Power of Spiking Neural P Systems with Self-Organization.

    PubMed

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-06-10

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing computable natural numbers. Moreover, with 87 neurons the system can compute any Turing computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun.

  15. On the Computational Power of Spiking Neural P Systems with Self-Organization

    PubMed Central

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-01-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing computable natural numbers. Moreover, with 87 neurons the system can compute any Turing computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun. PMID:27283843

  16. On the Computational Power of Spiking Neural P Systems with Self-Organization

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-06-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing computable natural numbers. Moreover, with 87 neurons the system can compute any Turing computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun.

  17. Comparison of sound power radiation from isolated airfoils and cascades in a turbulent flow.

    PubMed

    Blandeau, Vincent P; Joseph, Phillip F; Jenkins, Gareth; Powles, Christopher J

    2011-06-01

    An analytical model of the sound power radiated from a flat plate airfoil of infinite span in a 2D turbulent flow is presented. The effects of stagger angle on the radiated sound power are included so that the sound power radiated upstream and downstream relative to the fan axis can be predicted. Closed-form asymptotic expressions, valid at low and high frequencies, are provided for the upstream, downstream, and total sound power. A study of the effects of chord length on the total sound power at all reduced frequencies is presented. Excellent agreement for frequencies above a critical frequency is shown between the fast analytical isolated airfoil model presented in this paper and an existing, computationally demanding, cascade model, in which the unsteady loading of the cascade is computed numerically. Reasonable agreement is also observed at low frequencies for low solidity cascade configurations. © 2011 Acoustical Society of America

  18. A Hierarchical Visualization Analysis Model of Power Big Data

    NASA Astrophysics Data System (ADS)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which the levels are designed to target different abstraction modules such as transaction, engine, computation, control, and storage. The traditionally separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  19. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 2: SYSTID user's guide

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The manual for the use of the computer program SYSTID under the Univac operating system is presented. The computer program is used in the simulation and evaluation of the space shuttle orbiter electric power supply. The models described in the handbook are those which were available in the original versions of SYSTID. The subjects discussed are: (1) program description, (2) input language, (3) node typing, (4) problem submission, and (5) basic and power system SYSTID libraries.

  20. Terrestrial implications of mathematical modeling developed for space biomedical research

    NASA Technical Reports Server (NTRS)

    Lujan, Barbara F.; White, Ronald J.; Leonard, Joel I.; Srinivasan, R. Srini

    1988-01-01

    This paper summarizes several related research projects supported by NASA which seek to apply computer models to space medicine and physiology. These efforts span a wide range of activities, including mathematical models used for computer simulations of physiological control systems; power spectral analysis of physiological signals; pattern recognition models for detection of disease processes; and computer-aided diagnosis programs.

  1. Modeling and performance improvement of the constant power regulator systems in variable displacement axial piston pump.

    PubMed

    Park, Sung Hwan; Lee, Ji Min; Kim, Jong Shik

    2013-01-01

    The irregular performance of a mechanical-type constant power regulator is considered. In order to find the cause of an irregular discharge flow in the cut-off pressure region, modeling and numerical simulations are performed to observe the dynamic behavior of the internal parts of the constant power regulator system for a swashplate-type axial piston pump. The commercial numerical simulation software AMESim is applied to model the mechanical-type regulator with the hydraulic pump and simulate its performance. The validity of the simulation model of the constant power regulator system is verified by comparing simulation results with experiments. In order to find the cause of the irregular performance of the mechanical-type constant power regulator system, the behavior of main components such as the spool, sleeve, and counterbalance piston is investigated using computer simulation. A shape modification of the counterbalance piston is proposed to improve the undesirable performance of the mechanical-type constant power regulator. The performance improvement is verified by computer simulation using the AMESim software.

  2. Computer aided drug design

    NASA Astrophysics Data System (ADS)

    Jain, A.

    2017-08-01

    Computer-based methods can help in the discovery of leads and can potentially eliminate the chemical synthesis and screening of many irrelevant compounds, saving time as well as cost. Molecular modeling systems are powerful tools for building, visualizing, analyzing, and storing models of complex molecular structures that can help to interpret structure-activity relationships. The use of various techniques of molecular mechanics and dynamics and software in computer-aided drug design, along with statistical analysis, is a powerful tool for medicinal chemists to synthesize effective therapeutic drugs with minimum side effects.

  3. The inertial power and inertial force of robotic and natural bat wing

    NASA Astrophysics Data System (ADS)

    Yin, Dongfu; Zhang, Zhisheng

    2016-03-01

    Based on the acquired length and angle data of bat skeletons, a four-degree-of-freedom robotic bat wing and an identical computational model with flap, sweep, elbow, and wrist motions were presented. By also considering digit motions, a biomimetic bat skeleton model with seven degrees of freedom was established. The effects of frequency, amplitude, and downstroke ratio, as well as the components of inertial power and force in different directions, were studied. The experimental and computational results indicated that the inertial power and force account for the largest part in the flap direction, and that folding the wing during the upstroke can reduce the inertial power and force.

  4. Models of optical quantum computing

    NASA Astrophysics Data System (ADS)

    Krovi, Hari

    2017-03-01

    I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  5. Inexact hardware for modelling weather & climate

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, Tim

    2014-05-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance and potentially accuracy and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud resolving atmospheric modelling. The impact of both hardware-induced faults and low-precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that both approaches to inexact calculations do not substantially affect the quality of the model simulations, provided they are restricted to act only on smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
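
    A toy version of the precision experiment (not the model's dynamical core): run the same forward-Euler decay integration in float64 and float16 and compare. Step count and step size are arbitrary assumptions.

    ```python
    # Toy illustration of the low-precision trade-off: the identical Euler
    # integration of dy/dt = -y, carried out in float64 and float16.
    import numpy as np

    def euler_decay(dtype, steps=500, dt=0.01):
        y, k = dtype(1.0), dtype(dt)
        for _ in range(steps):
            y = y - k * y                  # every operation rounds to `dtype`
        return float(y)

    y64 = euler_decay(np.float64)          # ~exp(-5), up to scheme error
    y16 = euler_decay(np.float16)
    print(y64, y16, abs(y16 - y64) / y64)  # float16 drifts by a few percent
    ```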

  6. Conflicts of interest improve collective computation of adaptive social structures

    PubMed Central

    Brush, Eleanor R.; Krakauer, David C.; Flack, Jessica C.

    2018-01-01

    In many biological systems, the functional behavior of a group is collectively computed by the system’s individual components. An example is the brain’s ability to make decisions via the activity of billions of neurons. A long-standing puzzle is how the components’ decisions combine to produce beneficial group-level outputs, despite conflicts of interest and imperfect information. We derive a theoretical model of collective computation from mechanistic first principles, using results from previous work on the computation of power structure in a primate model system. Collective computation has two phases: an information accumulation phase, in which (in this study) pairs of individuals gather information about their fighting abilities and make decisions about their dominance relationships, and an information aggregation phase, in which these decisions are combined to produce a collective computation. To model information accumulation, we extend a stochastic decision-making model—the leaky integrator model used to study neural decision-making—to a multiagent game-theoretic framework. We then test alternative algorithms for aggregating information—in this study, decisions about dominance resulting from the stochastic model—and measure the mutual information between the resultant power structure and the “true” fighting abilities. We find that conflicts of interest can improve accuracy to the benefit of all agents. We also find that the computation can be tuned to produce different power structures by changing the cost of waiting for a decision. The successful application of a similar stochastic decision-making model in neural and social contexts suggests general principles of collective computation across substrates and scales. PMID:29376116
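
    The information-accumulation phase extends the leaky integrator used in neural decision-making. A minimal single-agent version of that process is sketched below; all parameter values are illustrative, not taken from the paper.

    ```python
    # Minimal leaky-integrator ("leaky accumulator") decision process of the
    # kind the abstract builds on: noisy evidence accumulates with leak until
    # one of two bounds is hit. Parameters are illustrative assumptions.
    import random

    def decide(drift=0.2, leak=0.1, noise=0.5, bound=1.0, dt=0.01, max_t=50.0):
        x, t = 0.0, 0.0
        while abs(x) < bound and t < max_t:
            # Euler-Maruyama step of dx = (drift - leak*x) dt + noise dW.
            x += (drift - leak * x) * dt + noise * (dt ** 0.5) * random.gauss(0, 1)
            t += dt
        return (1 if x > 0 else 0), t          # choice and decision time

    wins = sum(decide()[0] for _ in range(1000))
    print(f"P(correct) ~ {wins / 1000:.2f}")    # accuracy rises with drift/noise
    ```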

  7. Dynamics of global supply chain and electric power networks: Models, pricing analysis, and computations

    NASA Astrophysics Data System (ADS)

    Matsypura, Dmytro

    In this dissertation, I develop a new theoretical framework for the modeling, pricing analysis, and computation of solutions to electric power supply chains with power generators, suppliers, transmission service providers, and the inclusion of consumer demands. In particular, I advocate the application of finite-dimensional variational inequality theory, projected dynamical systems theory, game theory, network theory, and other tools that have been recently proposed for the modeling and analysis of supply chain networks (cf. Nagurney (2006)) to electric power markets. This dissertation contributes to the extant literature on the modeling, analysis, and solution of supply chain networks, including global supply chains, in general, and electric power supply chains, in particular, in the following ways. It develops a theoretical framework for modeling, pricing analysis, and computation of electric power flows/transactions in electric power systems using the rationale for supply chain analysis. The models developed include both static and dynamic ones. The dissertation also adds a new dimension to the methodology of the theory of projected dynamical systems by proving that, irrespective of the speeds of adjustment, the equilibrium of the system remains the same. Finally, I include alternative fuel suppliers, along with their behavior into the supply chain modeling and analysis framework. This dissertation has strong practical implications. In an era in which technology and globalization, coupled with increasing risk and uncertainty, complicate electricity demand and supply within and between nations, the successful management of electric power systems and pricing become increasingly pressing topics with relevance not only for economic prosperity but also national security. This dissertation addresses such related topics by providing models, pricing tools, and algorithms for decentralized electric power supply chains. This dissertation is based heavily on the following coauthored papers: Nagurney, Cruz, and Matsypura (2003), Nagurney and Matsypura (2004, 2005, 2006), Matsypura and Nagurney (2005), Matsypura, Nagurney, and Liu (2006).

  8. Identified state-space prediction model for aero-optical wavefronts

    NASA Astrophysics Data System (ADS)

    Faghihi, Azin; Tesch, Jonathan; Gibson, Steve

    2013-07-01

    A state-space disturbance model and associated prediction filter for aero-optical wavefronts are described. The model is computed by system identification from a sequence of wavefronts measured in an airborne laboratory. Estimates of the statistics and flow velocity of the wavefront data are shown and can be computed from the matrices in the state-space model without returning to the original data. Numerical results compare velocity values and power spectra computed from the identified state-space model with those computed from the aero-optical data.
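
    As a stand-in for the identification step, the sketch below fits a one-step-ahead linear predictor to a synthetic scalar series by least squares, an AR(p) special case of the state-space models the paper identifies; the "wavefront" signal here is simulated, not the airborne data.

    ```python
    # Least-squares identification of a one-step-ahead AR(p) predictor, a simple
    # special case of state-space identification. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 2000, 8
    t = np.arange(n)
    x = np.sin(0.07 * t) + 0.1 * rng.standard_normal(n)   # toy "wavefront" series

    # Regressor rows are [x[t-1], ..., x[t-p]]; the target is x[t].
    X = np.column_stack([x[p - 1 - k : n - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)

    x_hat = X @ a                                          # one-step predictions
    rmse = np.sqrt(np.mean((x_hat - x[p:]) ** 2))
    print(f"one-step RMSE: {rmse:.3f}")                    # near the 0.1 noise level
    ```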

  9. Biologically inspired collision avoidance system for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Ortiz, Fernando E.; Graham, Brett; Spagnoli, Kyle; Kelmelis, Eric J.

    2009-05-01

    In this project, we collaborate with researchers in the neuroscience department at the University of Delaware to develop a Field Programmable Gate Array (FPGA)-based embedded computer inspired by the brains of small vertebrates (fish). The mechanisms of object detection and avoidance in fish have been extensively studied by our Delaware collaborators. The midbrain optic tectum is a biological multimodal navigation controller capable of processing input from all senses that convey spatial information, including vision, audition, touch, and the lateral line (water current sensing in fish). Unfortunately, computational complexity makes these models too slow for use in real-time applications. These simulations are run offline on state-of-the-art desktop computers, presenting a gap between the application and the target platform: a low-power embedded device. EM Photonics has expertise in developing high-performance computers based on commodity platforms such as graphics cards (GPUs) and FPGAs. FPGAs offer (1) high computational power, low power consumption, and a small footprint (in line with typical autonomous vehicle constraints), and (2) the ability to implement massively parallel computational architectures, which can be leveraged to closely emulate biological systems. Combining UD's brain modeling algorithms and the power of FPGAs, this computer enables autonomous navigation in complex environments and further types of onboard neural processing in future applications.

  10. Energy regeneration model of self-consistent field of electron beams into electric power

    NASA Astrophysics Data System (ADS)

    Kazmin, B. N.; Ryzhov, D. R.; Trifanov, I. V.; Snezhko, A. A.; Savelyeva, M. V.

    2016-04-01

    We consider physico-mathematical models of electric processes in electron beams, the conversion of beam parameters into electric power values, and their transformation into the users' electric power grid (the onboard spacecraft network). We perform computer simulations validating the high energy efficiency of the studied processes for application in electric power technology, both for power production and for electric power plants and propulsion installations onboard spacecraft.

  11. Heterotic computing: exploiting hybrid computational devices.

    PubMed

    Kendon, Viv; Sebald, Angelika; Stepney, Susan

    2015-07-28

    Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  12. Computational modeling of Radioisotope Thermoelectric Generators (RTG) for interplanetary and deep space travel

    NASA Astrophysics Data System (ADS)

    Nejat, Cyrus; Nejat, Narsis; Nejat, Najmeh

    2014-06-01

    This research project is part of Narsis Nejat's Master of Science thesis, carried out at Shiraz University. The goal of this research is to build a computer model that evaluates the thermal power, electrical power, emitted/absorbed dose, and emitted/absorbed dose rate for static Radioisotope Thermoelectric Generators (RTGs). The work includes a comprehensive study of the types of RTG systems and, in particular, of RTG fuels derived from both natural and artificial isotopes; calculation of the permissible dose for the radioisotopes selected; and conceptual design modeling, with a comparison between several NASA-built RTGs and the project's computer model that points out the strengths and weaknesses of using this model for simulation in the nuclear industry. Heat is converted to electricity in RTGs by two major methods: static conversion and dynamic conversion. The model created for this project covers RTGs in which heat is converted to electricity statically. The model gives good approximations when compared with the SNAP-3, SNAP-19, MHW, and GPHS RTGs in terms of electrical power, efficiency, specific power, mission type, and the fuel mass required to accomplish the mission.

  13. Mobile high-performance computing (HPC) for synthetic aperture radar signal processing

    NASA Astrophysics Data System (ADS)

    Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen

    2018-04-01

    The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. Presently, it is very difficult to achieve the required processing power by aggregating emerging heterogeneous many-core processing platforms consisting of CPU, Field Programmable Gate Array, and Graphics Processor cores under power and performance constraints. To address these challenges, we performed a Synthetic Aperture Radar case study for Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). However, these DNN models are typically trained using GPUs with gigabytes of external memory and rely heavily on 32-bit floating-point operations. As a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we propose a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. The proposed compression framework utilizes promising DNN compression techniques, including pruning and weight quantization, while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
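
    The two compression steps named in the abstract, magnitude pruning and weight quantization, can be sketched on a single weight matrix as below. The sparsity target, bit width, and matrix shape are illustrative assumptions, not the paper's settings.

    ```python
    # Sketch of magnitude pruning followed by uniform symmetric int8 quantization
    # applied to one weight matrix. All thresholds and shapes are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.standard_normal((256, 128)).astype(np.float32)

    # 1) Magnitude pruning: zero out the 80% of weights smallest in |value|.
    thresh = np.quantile(np.abs(W), 0.80)
    W_pruned = np.where(np.abs(W) >= thresh, W, 0.0)

    # 2) Uniform symmetric int8 quantization of the surviving weights.
    scale = np.abs(W_pruned).max() / 127.0
    W_q = np.round(W_pruned / scale).astype(np.int8)       # store int8 + one scale
    W_deq = W_q.astype(np.float32) * scale                 # dequantize at inference

    err = np.abs(W_deq - W_pruned).max()
    print(f"sparsity: {(W_q == 0).mean():.0%}, max quantization error: {err:.4f}")
    ```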

  14. Toward an automated parallel computing environment for geosciences

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping

    2007-08-01

    Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.

  15. PROATES, a computer modelling system for power plant: Its description and application to heat rate improvement within PowerGen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, C.H.; Ready, A.B.; Rea, J.

    1995-06-01

    Versions of the computer program PROATES (PROcess Analysis for Thermal Energy Systems) have been used since 1979 to analyse plant performance improvement proposals relating to existing plant and also to evaluate new plant designs. Several plant modifications have been made to improve performance based on the model predictions, and the predicted performance has been realised in practice. The program was born out of a need to model the overall steady state performance of complex plant to enable proposals to change plant component items or operating strategy to be evaluated. To do this with confidence it is necessary to model the multiple thermodynamic interactions between the plant components. The modelling system is modular in concept, allowing the configuration of individual plant components to represent any particular power plant design. A library exists of physics-based modules which have been extensively validated and which provide representations of a wide range of boiler, turbine and CW system components. Changes to model data and construction are achieved via a user-friendly graphical model editing/analysis front-end, with results presented via the computer screen or hard copy. The paper describes briefly the modelling system but concentrates mainly on its application to assess design re-optimisation, firing with different fuels and the re-powering of an existing plant.

  16. Reliable and More Powerful Methods for Power Analysis in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun

    2017-01-01

    The normal-distribution-based likelihood ratio statistic $T_{ml} = nF_{ml}$ is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that $T_{ml}$ follows a central chi-square distribution under $H_0$ and a noncentral chi-square…

  17. The power of an ontology-driven developmental toxicity database for data mining and computational modeling

    EPA Science Inventory

    Modeling of developmental toxicology presents a significant challenge to computational toxicology due to endpoint complexity and lack of data coverage. These challenges largely account for the relatively few modeling successes using the structure–activity relationship (SAR) parad...

  18. Cyber-workstation for computational neuroscience.

    PubMed

    Digiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C; Fortes, Jose; Sanchez, Justin C

    2010-01-01

    A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g., a recursive least-squares regressor) by specifying appropriate connections in a block diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper briefly describes the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo neuroscience experiment. Furthermore, a co-adaptive brain-machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavioral task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.

  19. Cyber-Workstation for Computational Neuroscience

    PubMed Central

    DiGiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C.; Fortes, Jose; Sanchez, Justin C.

    2009-01-01

    A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g., a recursive least-squares regressor) by specifying appropriate connections in a block diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper briefly describes the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo neuroscience experiment. Furthermore, a co-adaptive brain-machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavioral task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface. PMID:20126436

  20. Modeling and Performance Improvement of the Constant Power Regulator Systems in Variable Displacement Axial Piston Pump

    PubMed Central

    Park, Sung Hwan; Lee, Ji Min; Kim, Jong Shik

    2013-01-01

    The irregular performance of a mechanical-type constant power regulator is considered. In order to find the cause of an irregular discharge flow at the cut-off pressure area, modeling and numerical simulations are performed to observe the dynamic behavior of the internal parts of the constant power regulator system for a swashplate-type axial piston pump. The commercial numerical simulation software AMESim is applied to model the mechanical-type regulator together with the hydraulic pump and to simulate its performance. The validity of the simulation model of the constant power regulator system is verified by comparing simulation results with experiments. In order to find the cause of the irregular performance of the mechanical-type constant power regulator system, the behavior of main components such as the spool, sleeve, and counterbalance piston is investigated using computer simulation. A shape modification of the counterbalance piston is proposed to improve the undesirable performance of the mechanical-type constant power regulator. The performance improvement is verified by computer simulation using the AMESim software. PMID:24282389

  1. Estimation of Faults in DC Electrical Power System

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott

    2009-01-01

    This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
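    A minimal sketch of the sparse-estimation idea: solve min_x 0.5*||y - Ax||^2 + lam*||x||_1 by iterative soft-thresholding (ISTA). This is a generic ℓ1 solver illustrating the relaxation described in the record, not the authors' implementation; the circuit model matrix A, measurements y, and lam are stand-ins.

      import numpy as np

      def soft_threshold(v, t):
          """Proximal operator of the l1 norm."""
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def ista(A, y, lam, n_iter=2000):
          """Solve min_x 0.5*||y - A x||^2 + lam*||x||_1 by iterative soft-thresholding."""
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - y)
              x = soft_threshold(x - grad / L, lam / L)
          return x

      # Example: recover a sparse "fault vector" from noisy linear measurements.
      rng = np.random.default_rng(1)
      A = rng.normal(size=(60, 100))             # stand-in for the linearized circuit model
      x_true = np.zeros(100); x_true[[5, 42]] = [1.0, -2.0]   # two faults
      y = A @ x_true + 0.01 * rng.normal(size=60)
      x_hat = ista(A, y, lam=0.5)
      print("estimated fault indices:", np.nonzero(np.abs(x_hat) > 0.1)[0])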

  2. Simulation Models for the Electric Power Requirements in a Guideway Transit System

    DOT National Transportation Integrated Search

    1980-04-01

    This report describes a computer simulation model developed at the Transportation Systems Center to study the electrical power distribution characteristics of Automated Guideway Transit (AGT) systems. The objective of this simulation effort is to pro...

  3. A sediment graph model based on SCS-CN method

    NASA Astrophysics Data System (ADS)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    This paper proposes new conceptual sediment graph models based on coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the power law. These models vary in their complexity, and this paper tests their performance using data of the Nagwan watershed (area = 92.46 km²) in India. The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.
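    A minimal sketch of the coupling named in the record: SCS-CN runoff depth feeding a Nash-type instantaneous unit sediment graph (a gamma-shaped kernel), with a power-law rating. The curve number, the Nash parameters n and k, and the power-law coefficients are assumed example values, not the calibrated Nagwan parameters.

      import math

      def scs_cn_runoff(p_mm, cn):
          """SCS-CN direct runoff depth (mm) for storm rainfall p_mm."""
          s = 25400.0 / cn - 254.0            # potential maximum retention (mm)
          ia = 0.2 * s                        # initial abstraction
          return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

      def nash_iusg(t_hr, n=3.0, k=2.0):
          """Nash instantaneous unit (sediment) graph ordinate at time t (1/hr)."""
          return (t_hr / k) ** (n - 1) * math.exp(-t_hr / k) / (k * math.gamma(n))

      # Sediment graph: runoff depth scaled by a power-law sediment rating, y = a*q**b.
      p, cn, a, b = 80.0, 75.0, 0.5, 1.3      # assumed storm depth and parameters
      q = scs_cn_runoff(p, cn)
      sediment_yield = a * q ** b             # total mobilized sediment (arbitrary units)
      graph = [sediment_yield * nash_iusg(t) for t in range(0, 25)]
      print(f"runoff = {q:.1f} mm, peak sediment rate = {max(graph):.2f} units/hr")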

  4. PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.

    1979-10-01

    The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for, with provision for one to twenty resident batches. The effect of exposure of each of the batches to the same neutron flux is determined.

  5. Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook

    NASA Technical Reports Server (NTRS)

    Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.

    1986-01-01

    The EASY5 macro component models developed for spacecraft power system simulation are described. A brief explanation of how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are organized according to the following functional groups: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.

  6. Manual of phosphoric acid fuel cell power plant cost model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    Cost analysis of a phosphoric acid fuel cell power plant includes two parts: a method for estimating system capital costs, and an economic analysis that determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.
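    The levelized-annual-cost step can be illustrated with a standard capital-recovery computation. This is a textbook formulation, not the NASA model itself; the discount rate, lifetime, and cost figures are assumed example values.

      def capital_recovery_factor(rate, years):
          """Annualizes a present capital cost over `years` at discount `rate`."""
          return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

      def levelized_annual_cost(capital, annual_om, annual_fuel, rate=0.08, years=20):
          """Levelized annual cost = annualized capital + O&M + fuel."""
          return capital * capital_recovery_factor(rate, years) + annual_om + annual_fuel

      # Example: a hypothetical fuel cell plant.
      cost = levelized_annual_cost(capital=5.0e6, annual_om=2.0e5, annual_fuel=4.0e5)
      print(f"levelized annual cost: ${cost:,.0f}/yr")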

  7. Modeling of power transmission and stress grading for corona protection

    NASA Astrophysics Data System (ADS)

    Zohdi, T. I.; Abali, B. E.

    2017-11-01

    Electrical high voltage (HV) machines are prone to corona discharges, which lead to power losses as well as damage to the insulating layer. Many different techniques are applied for corona protection, and computational methods aid in selecting the best design. In this paper we develop a reduced-order 1D model estimating the electric field and temperature distribution of a conductor wrapped with different layers, as is usual for HV machines. Because many assumptions and simplifications underlie this 1D model, we compare its results quantitatively to a direct numerical simulation in 3D. Both models are transient and nonlinear, offering the choice of a quick 1D estimate or a full 3D computation at a correspondingly higher computational cost. Such tools enable understanding, evaluation, and optimization of corona shielding systems for multilayered coils.

  8. Evaluation of the Lattice-Boltzmann Equation Solver PowerFLOW for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Luo, Li-Shi; Singer, Bart A.; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    A careful comparison of the performance of a commercially available Lattice-Boltzmann Equation solver (PowerFLOW) was made with a conventional, block-structured computational fluid-dynamics code (CFL3D) for the flow over a two-dimensional NACA-0012 airfoil. The results suggest that the version of PowerFLOW used in the investigation produced solutions with large errors in the computed flow field; these errors are attributed to inadequate resolution of the boundary layer for reasons related to grid resolution and primitive turbulence modeling. The requirement of square grid cells in the PowerFLOW calculations limited the number of points that could be used to span the boundary layer on the wing and still keep the computation size small enough to fit on the available computers. Although not discussed in detail, disappointing results were also obtained with PowerFLOW for a cavity flow and for the flow around a generic helicopter configuration.

  9. A parallel implementation of an off-lattice individual-based model of multicellular populations

    NASA Astrophysics Data System (ADS)

    Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe

    2015-07-01

    As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
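    A minimal sketch of the spatial-domain-decomposition idea using mpi4py: each process owns one strip of the domain and exchanges boundary ("halo") cell data with its neighbors each step. This is a generic illustration of the communication pattern, not the authors' algorithm; run with, e.g., `mpiexec -n 4 python halo.py`.

      # halo.py - strip decomposition with halo exchange (requires mpi4py).
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Each process owns the cells whose x-position falls in its strip.
      local_cells = [(rank + 0.5, i) for i in range(3)]   # toy stand-in for cell data

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      # Send boundary cells right while receiving a halo from the left (and vice
      # versa), so neighbors can compute forces on cells near the interface.
      halo_from_left = comm.sendrecv(local_cells, dest=right, source=left)
      halo_from_right = comm.sendrecv(local_cells, dest=left, source=right)

      print(f"rank {rank}: {len(local_cells)} cells, halos: "
            f"{0 if halo_from_left is None else len(halo_from_left)}, "
            f"{0 if halo_from_right is None else len(halo_from_right)}")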

  10. Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS), phase 1

    NASA Technical Reports Server (NTRS)

    Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.

    1986-01-01

    The large-signal behaviors of a regulator depend largely on the type of power circuit topology and control. Thus, for maximum flexibility, it is best to develop models for each functional block as independent modules. A regulator can then be configured by collecting the appropriate pre-defined modules for each functional block. In order to complete the component model generation for a comprehensive spacecraft power system, the following modules were developed: solar array switching unit and control, shunt regulators, and battery discharger. The capability of each module is demonstrated using a simplified Direct Energy Transfer (DET) system. Large-signal behaviors of solar array power systems were analyzed. Stability of the solar array system operating points with a nonlinear load is analyzed. The state-plane analysis illustrates trajectories of the system operating point under various conditions. Stability and transient responses of the system operating near the solar array's maximum power point are also analyzed. The solar array system mode of operation is described using the DET spacecraft power system. The DET system is simulated for various operating conditions. Transfer of the software program CAMAPPS (Computer Aided Modeling and Analysis of Power Processing Systems) to NASA/GSFC (Goddard Space Flight Center) was accomplished.

  11. Model implementation for dynamic computation of system cost

    NASA Astrophysics Data System (ADS)

    Levri, J.; Vaccari, D.

    The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into a non-derivative optimization search algorithm to predict parameter combinations that result in reduced objective function values.
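    The ESM conversion the record describes is conventionally written as a weighted sum, ESM = M + V*Veq + P*Peq + C*Ceq + CT*D*CTeq, with volume, power, cooling, and crewtime converted to mass via mission-specific equivalency factors. The sketch below uses assumed factor values for illustration; they are not the factors from the paper's Mars mission model.

      def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                                 crewtime_hr_per_yr, duration_yr,
                                 v_eq=66.7, p_eq=237.0, c_eq=60.0, ct_eq=0.465):
          """ESM = M + V*Veq + P*Peq + C*Ceq + CT*D*CTeq.

          The equivalency factors here are assumed example values (kg/m^3,
          kg/kW, kg/(crew-hr)), not the ones from the cited mission model.
          """
          return (mass_kg
                  + volume_m3 * v_eq
                  + power_kw * p_eq
                  + cooling_kw * c_eq
                  + crewtime_hr_per_yr * duration_yr * ct_eq)

      # Example: a hypothetical water-recovery subsystem on a 2-year mission.
      esm = equivalent_system_mass(mass_kg=1500, volume_m3=10, power_kw=3.0,
                                   cooling_kw=3.0, crewtime_hr_per_yr=100,
                                   duration_yr=2)
      print(f"ESM = {esm:.0f} kg-equivalent")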

  12. Quantum computation with indefinite causal structures

    NASA Astrophysics Data System (ADS)

    Araújo, Mateus; Guérin, Philippe Allard; Baumeler, Ämin

    2017-11-01

    One way to study the physical plausibility of closed timelike curves (CTCs) is to examine their computational power. This has been done for Deutschian CTCs (D-CTCs) and postselection CTCs (P-CTCs), with the result that they allow for the efficient solution of problems in PSPACE and PP, respectively. Since these are extremely powerful complexity classes, which are not expected to be solvable in reality, this can be taken as evidence that these models for CTCs are pathological. This problem is closely related to the nonlinearity of these models, which also allows, for example, cloning quantum states, in the case of D-CTCs, or distinguishing nonorthogonal quantum states, in the case of P-CTCs. In contrast, the process matrix formalism allows one to model indefinite causal structures in a linear way, getting rid of these effects and raising the possibility that its computational power is rather tame. In this paper, we show that process matrices correspond to a linear particular case of P-CTCs, and therefore that their computational power is upper bounded by that of PP. We show, furthermore, a family of processes that can violate causal inequalities but nevertheless can be simulated by a causally ordered quantum circuit with only a constant overhead, showing that indefinite causality is not necessarily hard to simulate.

  13. Modeling nonlinear ultrasound propagation in heterogeneous media with power law absorption using a k-space pseudospectral method.

    PubMed

    Treeby, Bradley E; Jaros, Jiri; Rendell, Alistair P; Cox, B T

    2012-06-01

    The simulation of nonlinear ultrasound propagation through tissue-realistic media has a wide range of practical applications. However, this is a computationally difficult problem due to the large size of the computational domain compared to the acoustic wavelength. Here, the k-space pseudospectral method is used to reduce the number of grid points required per wavelength for accurate simulations. The model is based on coupled first-order acoustic equations valid for nonlinear wave propagation in heterogeneous media with power law absorption. These are derived from the equations of fluid mechanics and include a pressure-density relation that incorporates the effects of nonlinearity, power law absorption, and medium heterogeneities. The additional terms accounting for convective nonlinearity and power law absorption are expressed as spatial gradients, making them efficient to encode numerically. The governing equations are then discretized using a k-space pseudospectral technique in which the spatial gradients are computed using the Fourier-collocation method. This increases the accuracy of the gradient calculation and thus relaxes the requirement for dense computational grids compared to conventional finite difference methods. The accuracy and utility of the developed model is demonstrated via several numerical experiments, including the 3D simulation of the beam pattern from a clinical ultrasound probe.
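    The core numerical idea, computing spatial gradients with the Fourier collocation method, can be sketched in a few lines of NumPy: differentiate by multiplying by i*k in wavenumber space. This shows the generic spectral gradient, not the full k-space corrected scheme of the paper; the grid size and wavefield are toy values.

      import numpy as np

      def spectral_derivative(p, dx):
          """d(p)/dx via the Fourier collocation method: ifft(i*k*fft(p))."""
          n = p.size
          k = 2 * np.pi * np.fft.fftfreq(n, d=dx)     # angular wavenumbers
          return np.real(np.fft.ifft(1j * k * np.fft.fft(p)))

      # Example: the derivative of sin(x) should be cos(x) to spectral accuracy.
      n, L = 64, 2 * np.pi
      x = np.arange(n) * (L / n)
      dp = spectral_derivative(np.sin(x), dx=L / n)
      print("max error vs cos(x):", np.max(np.abs(dp - np.cos(x))))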

  14. A linear, separable two-parameter model for dual energy CT imaging of proton stopping power computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Dong, E-mail: radon.han@gmail.com; Williamson, Jeffrey F.; Siebers, Jeffrey V.

    2016-01-15

    Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl₂ aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. [“Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues,” Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy to tissue composition variations was assessed for both the BVM and VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors’ idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for both the BVM and Yang tPFM models. The BVM estimation accuracy is not dependent on tissue type and proton energy range. The BVM is slightly more vulnerable to CT image intensity uncertainties than the tPFM models. Both the BVM and tPFM prediction accuracies were robust to uncertainties of tissue composition and independent of the choice of reference values. This reported accuracy does not include the impacts of I-value uncertainties and imaging artifacts and may not be achievable on current clinical CT scanners. Conclusions: The proton stopping power estimation accuracy of the proposed linear, separable BVM model is comparable to or better than that of the nonseparable tPFM models proposed by other groups. In contrast to the tPFM, the BVM does not require iteratively solving for effective atomic number and electron density at every voxel; this improves the computational efficiency of DECT imaging when iterative, model-based image reconstruction algorithms are used to minimize noise and systematic imaging artifacts of CT images.
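    The BVM's central step, expressing an unknown material's attenuation as a linear combination of two basis materials from a DECT pair, reduces to a 2x2 linear solve per voxel. The sketch below is a schematic of that step with made-up attenuation numbers; the actual basis data, electron densities, and the subsequent mean-excitation-energy mapping in the paper are not reproduced here.

      import numpy as np

      # Attenuation-like basis values (made-up numbers):
      #   rows = energy (low kVp, high kVp), columns = the two basis materials.
      basis = np.array([[0.220, 0.180],    # polystyrene-like, CaCl2-solution-like at E1
                        [0.190, 0.165]])   # the same pair at E2
      rho_e_basis = np.array([3.24e23, 3.60e23])   # assumed basis electron densities (1/cm^3)

      def bvm_decompose(mu_e1, mu_e2):
          """Solve mu(E) = c1*mu1(E) + c2*mu2(E) for the mixing coefficients (c1, c2)."""
          return np.linalg.solve(basis, np.array([mu_e1, mu_e2]))

      c = bvm_decompose(mu_e1=0.205, mu_e2=0.180)   # measured pair for one voxel
      rho_e = c @ rho_e_basis                       # BVM electron-density estimate
      print("coefficients:", c, " electron density:", rho_e)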

  15. DET/MPS - The GSFC Energy Balance Programs

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1994-01-01

    The Direct Energy Transfer (DET) and MultiMission Spacecraft Modular Power System (MPS) computer programs perform mathematical modeling and simulation to aid in the design and analysis of DET and MPS spacecraft power system performance, in order to determine the energy balance of the subsystem. A DET spacecraft power system feeds the output of the solar photovoltaic array and nickel-cadmium batteries directly to the spacecraft bus. In the MPS system, a Standard Power Regulator Unit (SPRU) is utilized to operate the array at its peak power point. DET and MPS perform a minute-by-minute simulation of power system performance. Results of the simulation focus mainly on the output of the solar array and the characteristics of the batteries. Although both packages are limited in terms of orbital mechanics, they have sufficient capability to calculate data on eclipses and array performance for circular or near-circular orbits. DET and MPS are written in FORTRAN-77 with some VAX FORTRAN-type extensions. Both are available in three versions: GSC-13374, for DEC VAX-series computers running VMS; GSC-13443, for UNIX-based computers; and GSC-13444, for Apple Macintosh computers.

  16. Computational model of retinal photocoagulation and rupture

    NASA Astrophysics Data System (ADS)

    Sramek, Christopher; Paulus, Yannis M.; Nomoto, Hiroyuki; Huie, Phil; Palanker, Daniel

    2009-02-01

    In patterned scanning laser photocoagulation, shorter duration (< 20 ms) pulses help reduce thermal damage beyond the photoreceptor layer, decrease treatment time and minimize pain. However, safe therapeutic window (defined as the ratio of rupture threshold power to that of light coagulation) decreases for shorter exposures. To quantify the extent of thermal damage in the retina, and maximize the therapeutic window, we developed a computational model of retinal photocoagulation and rupture. Model parameters were adjusted to match measured thresholds of vaporization, coagulation, and retinal pigment epithelial (RPE) damage. Computed lesion width agreed with histological measurements in a wide range of pulse durations and power. Application of ring-shaped beam profile was predicted to double the therapeutic window width for exposures in the range of 1 - 10 ms.

  17. Mapping suitability areas for concentrated solar power plants using remote sensing data

    DOE PAGES

    Omitaomu, Olufemi A.; Singh, Nagendra; Bhaduri, Budhendra L.

    2015-05-14

    The political push to increase power generation from renewable sources such as solar energy requires knowing the best places to site new solar power plants with respect to the applicable regulatory, operational, engineering, environmental, and socioeconomic criteria. Therefore, in this paper, we present applications of remote sensing data for mapping suitability areas for concentrated solar power plants. Our approach uses a digital elevation model derived from NASA's Shuttle Radar Topographic Mission (SRTM) at a resolution of 3 arc seconds (approximately 90 m) for estimating global solar radiation for the study area. We then develop a computational model, built on a Geographic Information System (GIS) platform, that divides the study area into a grid of cells and estimates a site suitability value for each cell by computing a list of metrics based on applicable siting requirements using GIS data. The computed metrics include population density, solar energy potential, federal lands, and hazardous facilities. Overall, some 30 GIS data sets are used to compute eight metrics. The site suitability value for each cell is computed as an algebraic sum of all metrics for the cell, with the assumption that all metrics have equal weight. Finally, we color each cell according to its suitability value. Furthermore, we present results for concentrated solar power that drives a steam turbine and for a parabolic mirror connected to a Stirling engine.
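    The cell-by-cell scoring the record describes, an equal-weight algebraic sum of siting metrics over a raster grid, can be sketched directly with NumPy arrays. The metric layers and values below are invented placeholders; the actual model combines some 30 GIS layers into eight metrics.

      import numpy as np

      rng = np.random.default_rng(42)
      shape = (100, 100)                       # toy raster grid over the study area

      # Placeholder metric layers, each scaled to [0, 1] (1 = more suitable).
      solar_potential = rng.uniform(0.3, 1.0, shape)     # from DEM-derived radiation
      low_pop_density = rng.uniform(0.0, 1.0, shape)     # inverse of population density
      not_federal_land = rng.integers(0, 2, shape)       # 0/1 exclusion mask
      far_from_hazards = rng.uniform(0.0, 1.0, shape)

      # Equal-weight algebraic sum of metrics gives a suitability value per cell.
      suitability = (solar_potential + low_pop_density
                     + not_federal_land + far_from_hazards)

      best = np.unravel_index(np.argmax(suitability), shape)
      print("most suitable cell:", best, "score:", suitability[best])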

  18. Comparative analysis of existing models for power-grid synchronization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
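    The common framework the authors describe, power grids as networks of second-order phase oscillators with forcing and damping, can be illustrated with a generic toy integration of the swing-type equation for a 3-node network. The coupling, power, and damping parameters below are illustrative, not taken from the paper's test systems or toolbox.

      import numpy as np

      # Second-order oscillators: theta_i'' = P_i - D*theta_i' + sum_j K_ij*sin(theta_j - theta_i)
      K = np.array([[0, 1, 1],
                    [1, 0, 1],
                    [1, 1, 0]], dtype=float)     # toy coupling (network) matrix
      P = np.array([0.2, 0.1, -0.3])             # net injected power (sums to zero)
      D = 0.5                                    # damping coefficient

      theta = np.zeros(3)
      omega = np.zeros(3)
      dt = 0.01
      for _ in range(20000):                     # simple forward-Euler integration
          coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
          domega = P - D * omega + coupling
          theta, omega = theta + dt * omega, omega + dt * domega

      print("steady frequencies (should synchronize):", omega)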

  19. Computed lateral rate and acceleration power spectral response of conventional and STOL airplanes to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.

    1975-01-01

    Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.

  20. An analytical procedure and automated computer code used to design model nozzles which meet MSFC base pressure similarity parameter criteria. [space shuttle

    NASA Technical Reports Server (NTRS)

    Sulyma, P. R.

    1980-01-01

    Fundamental equations and similarity definition and application are described as well as the computational steps of a computer program developed to design model nozzles for wind tunnel tests conducted to define power-on aerodynamic characteristics of the space shuttle over a range of ascent trajectory conditions. The computer code capabilities, a user's guide for the model nozzle design program, and the output format are examined. A program listing is included.

  1. Simulation of Power Collection Dynamics for Simply Supported Power Rail

    DOT National Transportation Integrated Search

    1972-11-01

    The mathematical model of a sprung mass moving along a simply supported beam is used to analyze the dynamics of a power-collection system. A computer simulation of one-dimensional motion is used to demonstrate the phenomenon of collector-power rail i...

  2. Model studies of laser absorption computed tomography for remote air pollution measurement

    NASA Technical Reports Server (NTRS)

    Wolfe, D. C., Jr.; Byer, R. L.

    1982-01-01

    Model studies of the potential of laser absorption computed tomography are presented which demonstrate the possibility of sensitive remote atmospheric pollutant measurements, over kilometer-sized areas, with two-dimensional resolution, at modest laser source powers. An analysis of this tomographic reconstruction process as a function of measurement SNR, laser power, range, and system geometry shows that the system is able to yield two-dimensional maps of pollutant concentrations at ranges and resolutions superior to those attainable with existing, direct-detection laser radars.

  3. Modeling and analysis of power processing systems: Feasibility investigation and formulation of a methodology

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.

    1974-01-01

    A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.

  4. Wind Farm Flow Modeling using an Input-Output Reduced-Order Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter

    Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost but retain the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
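    Proper orthogonal decomposition, one of the techniques the record names, reduces to an SVD of a snapshot matrix: the leading left singular vectors are the dominant flow modes, and projecting onto them yields reduced-order coordinates. The sketch below is generic POD on synthetic snapshots, not the wind-farm model itself.

      import numpy as np

      def pod_basis(snapshots, r):
          """Return the r leading POD modes of a (state_dim x n_snapshots) matrix."""
          u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          energy = np.cumsum(s**2) / np.sum(s**2)
          print(f"first {r} modes capture {energy[r-1]:.1%} of snapshot energy")
          return u[:, :r]

      # Synthetic "flow" snapshots: two coherent structures plus noise.
      rng = np.random.default_rng(3)
      x = np.linspace(0, 1, 200)
      t = np.linspace(0, 10, 50)
      snaps = (np.outer(np.sin(2*np.pi*x), np.cos(t))
               + 0.3 * np.outer(np.sin(4*np.pi*x), np.sin(2*t))
               + 0.01 * rng.normal(size=(200, 50)))

      phi = pod_basis(snaps, r=2)
      reduced = phi.T @ snaps        # reduced-order coordinates of each snapshot
      print("reduced state shape:", reduced.shape)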

  5. System for computer controlled shifting of an automatic transmission

    DOEpatents

    Patil, Prabhakar B.

    1989-01-01

    In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed-loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics employing the mathematical model is used to study the effects on shift quality of changes in the values of the parameters established from closed-loop control of the clutch hydraulic pressure and the power source torque. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.

  6. Closed loop computer control for an automatic transmission

    DOEpatents

    Patil, Prabhakar B.

    1989-01-01

    In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed-loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics employing the mathematical model is used to study the effects on shift quality of changes in the values of the parameters established from closed-loop control of the clutch hydraulic pressure and the power source torque. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.

  7. Phase change energy storage for solar dynamic power systems

    NASA Technical Reports Server (NTRS)

    Chiaramonte, F. P.; Taylor, J. D.

    1992-01-01

    This paper presents the results of a transient computer simulation that was developed to study phase change energy storage techniques for Space Station Freedom (SSF) solar dynamic (SD) power systems. Such SD systems may be used in future growth SSF configurations. Two solar dynamic options are considered in this paper: Brayton and Rankine. Model elements consist of a single-node receiver and concentrator, and take into account overall heat engine efficiency and power distribution characteristics. The simulation not only computes the energy stored in the receiver phase change material (PCM), but also the amount of PCM required for various combinations of load demands and power system mission constraints. For a solar dynamic power system in low Earth orbit, the amount of stored PCM energy is calculated by balancing the solar energy input and the energy consumed by the loads, corrected by an overall system efficiency. The model assumes an average 75 kW SD power system load profile connected to user loads via dedicated power distribution channels. The model then calculates the stored energy in the receiver and subsequently estimates the quantity of PCM necessary to meet peaking and contingency requirements. The model can also be used to conduct trade studies on the performance of SD power systems using different storage materials.
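    The orbit-level energy balance the abstract describes can be illustrated simply: the PCM must store enough heat during sunlight to carry the load through eclipse, with the load divided by the heat-engine efficiency. The sketch below uses the record's 75 kW load plus assumed round values for eclipse duration, efficiency, and PCM latent heat (a LiF-like salt); none of these are the paper's actual figures.

      def pcm_storage_required(load_kw=75.0, eclipse_min=36.0,
                               system_eff=0.30, pcm_latent_kj_per_kg=1040.0):
          """Thermal energy (and PCM mass) needed to carry the load through eclipse.

          The 30% efficiency, 36 min eclipse, and LiF-like latent heat are
          assumed example values, not taken from the cited simulation.
          """
          eclipse_s = eclipse_min * 60.0
          thermal_energy_kj = load_kw / system_eff * eclipse_s   # heat drawn from PCM
          pcm_mass_kg = thermal_energy_kj / pcm_latent_kj_per_kg
          return thermal_energy_kj, pcm_mass_kg

      e_kj, m_kg = pcm_storage_required()
      print(f"eclipse thermal draw: {e_kj/1e3:.0f} MJ, PCM mass: {m_kg:.0f} kg")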

  8. Phase change energy storage for solar dynamic power systems

    NASA Astrophysics Data System (ADS)

    Chiaramonte, F. P.; Taylor, J. D.

    This paper presents the results of a transient computer simulation that was developed to study phase change energy storage techniques for Space Station Freedom (SSF) solar dynamic (SD) power systems. Such SD systems may be used in future growth SSF configurations. Two solar dynamic options are considered in this paper: Brayton and Rankine. Model elements consist of a single-node receiver and concentrator, and take into account overall heat engine efficiency and power distribution characteristics. The simulation not only computes the energy stored in the receiver phase change material (PCM), but also the amount of PCM required for various combinations of load demands and power system mission constraints. For a solar dynamic power system in low Earth orbit, the amount of stored PCM energy is calculated by balancing the solar energy input and the energy consumed by the loads, corrected by an overall system efficiency. The model assumes an average 75 kW SD power system load profile connected to user loads via dedicated power distribution channels. The model then calculates the stored energy in the receiver and subsequently estimates the quantity of PCM necessary to meet peaking and contingency requirements. The model can also be used to conduct trade studies on the performance of SD power systems using different storage materials.

  9. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions

    NASA Astrophysics Data System (ADS)

    Donahue, William; Newhauser, Wayne D.; Ziegler, James F.

    2016-09-01

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.

  10. Analytical model for ion stopping power and range in the therapeutic energy interval for beams of hydrogen and heavier ions.

    PubMed

    Donahue, William; Newhauser, Wayne D; Ziegler, James F

    2016-09-07

    Many different approaches exist to calculate stopping power and range of protons and heavy charged particles. These methods may be broadly categorized as physically complete theories (widely applicable and complex) or semi-empirical approaches (narrowly applicable and simple). However, little attention has been paid in the literature to approaches that are both widely applicable and simple. We developed simple analytical models of stopping power and range for ions of hydrogen, carbon, iron, and uranium that spanned intervals of ion energy from 351 keV u⁻¹ to 450 MeV u⁻¹ or wider. The analytical models typically reproduced the best-available evaluated stopping powers within 1% and ranges within 0.1 mm. The computational speed of the analytical stopping power model was 28% faster than a full-theoretical approach. The calculation of range using the analytic range model was 945 times faster than a widely-used numerical integration technique. The results of this study revealed that the new, simple analytical models are accurate, fast, and broadly applicable. The new models require just 6 parameters to calculate stopping power and range for a given ion and absorber. The proposed model may be useful as an alternative to traditional approaches, especially in applications that demand fast computation speed, small memory footprint, and simplicity.
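    For context, the "physically complete" end of the spectrum the authors contrast with is the Bethe stopping-power formula. The sketch below evaluates a simplified Bethe formula (no shell or density-effect corrections) for protons in water; the constants are standard, but this is not the authors' 6-parameter model, whose form the abstract does not give.

      import math

      def bethe_stopping_power(kinetic_mev, z_proj=1, z_over_a=0.555, i_ev=75.0):
          """Simplified Bethe mass stopping power (MeV cm^2/g) for a proton in water.

          Omits shell and density-effect corrections; intended only as an
          illustration at therapeutic energies.
          """
          K = 0.307075                  # MeV cm^2 / mol
          me_c2 = 0.511                 # electron rest energy (MeV)
          mp_c2 = 938.272               # proton rest energy (MeV)
          gamma = 1.0 + kinetic_mev / mp_c2
          beta2 = 1.0 - 1.0 / gamma**2
          t_max = 2 * me_c2 * beta2 * gamma**2          # heavy-projectile approximation
          i_mev = i_ev * 1e-6
          log_term = math.log(2 * me_c2 * beta2 * gamma**2 * t_max / i_mev**2)
          return K * z_proj**2 * z_over_a / beta2 * (0.5 * log_term - beta2)

      for e in (10, 100, 250):          # therapeutic proton energies (MeV)
          print(f"{e:4d} MeV: {bethe_stopping_power(e):6.2f} MeV cm^2/g")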

  11. Application of the aeroacoustic analogy to a shrouded, subsonic, radial fan

    NASA Astrophysics Data System (ADS)

    Buccieri, Bryan M.; Richards, Christopher M.

    2016-12-01

    A study was conducted to investigate the predictive capability of computational aeroacoustics with respect to a shrouded, subsonic, radial fan. A three-dimensional, unsteady fluid dynamics simulation was conducted to produce aerodynamic data used as the acoustic source for an aeroacoustics simulation. Two acoustic models were developed: one modeling the forces on the rotating fan blades as a set of rotating dipoles located at the center of mass of each fan blade, and one modeling the forces on the stationary fan shroud as a field of distributed stationary dipoles. Predicted acoustic response was compared to experimental data measured at two operating speeds using three different outlet restrictions. The blade source model predicted overall far-field sound power levels within 5 dB averaged over the six different operating conditions, while the shroud model predicted overall far-field sound power levels within 7 dB averaged over the same conditions. Doubling the density of the computational fluid mesh and using a scale-adaptive simulation turbulence model increased broadband noise accuracy. However, computation time doubled and the accuracy of the overall sound power level prediction improved by only 1 dB.

  12. Modeling Cross-Situational Word–Referent Learning: Prior Questions

    PubMed Central

    Yu, Chen; Smith, Linda B.

    2013-01-01

    Both adults and young children possess powerful statistical computation capabilities—they can infer the referent of a word from highly ambiguous contexts involving many words and many referents by aggregating cross-situational statistical information across contexts. This ability has been explained by models of hypothesis testing and by models of associative learning. This article describes a series of simulation studies and analyses designed to understand the different learning mechanisms posited by the 2 classes of models and their relation to each other. Variants of a hypothesis-testing model and a simple or dumb associative mechanism were examined under different specifications of information selection, computation, and decision. Critically, these 3 components of the models interact in complex ways. The models illustrate a fundamental tradeoff between amount of data input and powerful computations: With the selection of more information, dumb associative models can mimic the powerful learning that is accomplished by hypothesis-testing models with fewer data. However, because of the interactions among the component parts of the models, the associative model can mimic various hypothesis-testing models, producing the same learning patterns but through different internal components. The simulations argue for the importance of a compositional approach to human statistical learning: the experimental decomposition of the processes that contribute to statistical learning in human learners and models with the internal components that can be evaluated independently and together. PMID:22229490

  13. Combining Computational Fluid Dynamics and Agent-Based Modeling: A New Approach to Evacuation Planning

    PubMed Central

    Epstein, Joshua M.; Pankajakshan, Ramesh; Hammond, Ross A.

    2011-01-01

    We introduce a novel hybrid of two fields—Computational Fluid Dynamics (CFD) and Agent-Based Modeling (ABM)—as a powerful new technique for urban evacuation planning. CFD is a predominant technique for modeling airborne transport of contaminants, while ABM is a powerful approach for modeling social dynamics in populations of adaptive individuals. The hybrid CFD-ABM method is capable of simulating how large, spatially-distributed populations might respond to a physically realistic contaminant plume. We demonstrate the overall feasibility of CFD-ABM evacuation design, using the case of a hypothetical aerosol release in Los Angeles to explore potential effectiveness of various policy regimes. We conclude by arguing that this new approach can be powerfully applied to arbitrary population centers, offering an unprecedented preparedness and catastrophic event response tool. PMID:21687788

  14. COMPUTER MODELS/EPANET

    EPA Science Inventory

    Pipe network flow analysis was among the first civil engineering applications programmed for solution on the early commercial mainframe computers in the 1960s. Since that time, advancements in analytical techniques and computing power have enabled us to solve systems with tens o...

  15. Power flow prediction in vibrating systems via model reduction

    NASA Astrophysics Data System (ADS)

    Li, Xianhui

    This dissertation focuses on power flow prediction in vibrating systems. Reduced order models (ROMs) are built based on rational Krylov model reduction, which preserves power flow information in the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via the Message Passing Interface are proposed. The quality of the ROMs is iteratively refined according to an error estimate based on residual norms. Band capacity is proposed to provide an a priori estimate of the sizes of good-quality ROMs. Frequency averaging is recast as ensemble averaging, and a Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from the Harwell-Boeing collections. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
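    The construction the abstract describes, projecting stiffness and mass matrices onto a subspace spanned by forced responses at interpolation frequencies, can be sketched as follows. This is generic rational-interpolation model reduction for an undamped system, with random matrices standing in for a real structure; it is not the dissertation's matrix-free algorithm.

      import numpy as np

      def forced_response_basis(K, M, f, omegas):
          """Span of forced responses x(w) = (K - w^2 M)^(-1) f at chosen frequencies."""
          cols = [np.linalg.solve(K - w**2 * M, f) for w in omegas]
          q, _ = np.linalg.qr(np.column_stack(cols))   # orthonormalize the basis
          return q

      rng = np.random.default_rng(7)
      n = 200
      A = rng.normal(size=(n, n))
      K = A @ A.T + n * np.eye(n)                      # SPD stiffness stand-in
      M = np.eye(n)                                    # mass stand-in
      f = rng.normal(size=n)                           # force distribution

      V = forced_response_basis(K, M, f, omegas=[1.0, 3.0, 5.0])
      K_r, M_r, f_r = V.T @ K @ V, V.T @ M @ V, V.T @ f    # reduced-order model

      # The ROM reproduces the full response (hence input power) at the
      # interpolation frequencies.
      w = 3.0
      x_full = np.linalg.solve(K - w**2 * M, f)
      x_rom = V @ np.linalg.solve(K_r - w**2 * M_r, f_r)
      print("relative error at w=3:",
            np.linalg.norm(x_full - x_rom) / np.linalg.norm(x_full))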

  16. Computational modeling of cardiac hemodynamics: Current status and future outlook

    NASA Astrophysics Data System (ADS)

    Mittal, Rajat; Seo, Jung Hee; Vedula, Vijay; Choi, Young J.; Liu, Hang; Huang, H. Howie; Jain, Saurabh; Younes, Laurent; Abraham, Theodore; George, Richard T.

    2016-01-01

    The proliferation of four-dimensional imaging technologies, increasing computational speeds, improved simulation algorithms, and the widespread availability of powerful computing platforms is enabling simulations of cardiac hemodynamics with unprecedented speed and fidelity. Since cardiovascular disease is intimately linked to cardiovascular hemodynamics, accurate assessment of the patient's hemodynamic state is critical for the diagnosis and treatment of heart disease. Unfortunately, while a variety of invasive and non-invasive approaches for measuring cardiac hemodynamics are in widespread use, they still only provide an incomplete picture of the hemodynamic state of a patient. In this context, computational modeling of cardiac hemodynamics presents as a powerful non-invasive modality that can fill this information gap, and significantly impact the diagnosis as well as the treatment of cardiac disease. This article reviews the current status of this field as well as the emerging trends and challenges in cardiovascular health, computing, modeling and simulation and that are expected to play a key role in its future development. Some recent advances in modeling and simulations of cardiac flow are described by using examples from our own work as well as the research of other groups.

  17. Initial comparison of single cylinder Stirling engine computer model predictions with test results

    NASA Technical Reports Server (NTRS)

    Tew, R. C., Jr.; Thieme, L. G.; Miao, D.

    1979-01-01

    A NASA-developed digital computer code for a Stirling engine, modelling the performance of a single-cylinder rhombic-drive ground performance unit (GPU), is presented and its predictions are compared to test results. The GPU engine incorporates eight regenerator/cooler units, and the engine working space is modelled by thirteen control volumes. The model calculates indicated power and efficiency for a given engine speed, mean pressure, heater and expansion space metal temperatures, and cooler water inlet temperature and flow rate. Comparison of predicted and observed powers implies that the reference pressure drop calculations underestimate the actual pressure drop, possibly due to oil contamination in the regenerator/cooler units, methane contamination in the working gas, or underestimation of mechanical loss. For a working gas of hydrogen, the predicted values of brake power are from 0 to 6% higher than experimental values, and brake efficiency is 6 to 16% higher; for helium, the predicted brake power and efficiency are 2 to 15% higher than the experimental values.

  18. Guest Editorial High Performance Computing (HPC) Applications for a More Resilient and Efficient Power Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang

    The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and the next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. As the myriad smart sensors and meters in the power grid increase by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to change drastically to support the exchange of enormous amounts of data, as smart grid applications will need the capability to collect, assimilate, analyze and process the data to meet real-time grid functions. High performance computing (HPC) holds the promise to enhance these functions, but it is a great resource that has not been fully explored and adopted for the power grid domain.

  19. Magnetic Flux Compression Using Detonation Plasma Armatures and Superconductor Stators: Integrated Propulsion and Power Applications

    NASA Technical Reports Server (NTRS)

    Litchford, Ron; Robertson, Tony; Hawk, Clark; Turner, Matt; Koelfgen, Syri

    1999-01-01

    This presentation discusses the use of magnetic flux compression for spaceflight propulsion and other power applications. The qualities that make this technology suitable for spaceflight propulsion and power are its high power density, its ability to deliver multimegawatt energy bursts and terawatt power bursts, its ability to produce the pulse power needed for low-impedance dense plasma devices (e.g., pulse fusion drivers), and its ability to produce direct thrust. The trade-offs between metal and plasma armatures are discussed: the requirements for high energy output and fast pulse rise time call for a high-speed armature, and a plasma armature enables repetitive firing. The issues concerning the high-temperature superconductor stator are also discussed, and the concept of the radial-mode pulse power generator is described. The proposed research strategy combines computational modeling (i.e., magnetohydrodynamic computations and finite element modeling) with laboratory experiments to create a demonstration device.

  20. Optimization Scheduling Model for Wind-thermal Power System Considering the Dynamic penalty factor

    NASA Astrophysics Data System (ADS)

    PENG, Siyu; LUO, Jianchun; WANG, Yunyu; YANG, Jun; RAN, Hong; PENG, Xiaodong; HUANG, Ming; LIU, Wanyu

    2018-03-01

    In this paper, a new dynamic economic dispatch model for power systems is presented. The objective function of the proposed model introduces a major novelty for dynamic economic dispatch including wind farms: a "dynamic penalty factor" computed by fuzzy logic from both the variable nature of active wind power and the power demand, which adjusts the wind curtailment cost according to the state of the power system. Case studies were carried out on the IEEE 30-bus system. Results show that the proposed optimization model mitigates wind curtailment and total cost effectively, demonstrating its validity and effectiveness.

  1. The Use of High Performance Computing (HPC) to Strengthen the Development of Army Systems

    DTIC Science & Technology

    2011-11-01

    accurately predicting the supersonic Magnus effect about spinning cones, ogive-cylinders, and boat-tailed afterbodies. This work led to the successful... successful computer model of the proposed product or system, one can then build prototypes on the computer and study the effects on the performance of... needed. The NRC report discusses the requirements for effective use of such computing power. One needs "models, algorithms, software, hardware

  2. A computational modeling approach of the jet-like acoustic streaming and heat generation induced by low frequency high power ultrasonic horn reactors.

    PubMed

    Trujillo, Francisco Javier; Knoerzer, Kai

    2011-11-01

    High power ultrasound reactors have gained a lot of interest in the food industry given the effects that can arise from ultrasonic-induced cavitation in liquid foods. However, most of the new food processing developments have been based on empirical approaches. Thus, there is a need for mathematical models which help to understand, optimize, and scale up ultrasonic reactors. In this work, a computational fluid dynamics (CFD) model was developed to predict the acoustic streaming and induced heat generated by an ultrasonic horn reactor. In the model it is assumed that the horn tip is a fluid inlet, where a turbulent jet flow is injected into the vessel. The hydrodynamic momentum rate of the incoming jet is assumed to be equal to the total acoustic momentum rate emitted by the acoustic power source. CFD velocity predictions show excellent agreement with the experimental data for power densities W0/V ≥ 25 kW m−3. This model successfully describes hydrodynamic fields (streaming) generated by low-frequency, high-power ultrasound. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
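
    The momentum balance described above (jet momentum rate equals acoustic momentum rate) lends itself to a short back-of-the-envelope calculation. The sketch below is a rough Python illustration, not the authors' code: the 50 W acoustic power and 13 mm tip diameter are assumed values, and the acoustic momentum rate is approximated as power divided by the speed of sound.

    ```python
    import math

    def inlet_jet_velocity(acoustic_power, tip_diameter, rho=1000.0, c=1482.0):
        """Equivalent inlet jet velocity for a CFD model of an ultrasonic horn.

        Equates the jet momentum rate (rho * A * v^2) with the acoustic
        momentum rate emitted by the source, approximated here as P / c.
        """
        area = math.pi * (tip_diameter / 2.0) ** 2   # horn tip cross-section, m^2
        momentum_rate = acoustic_power / c           # N, radiation-force estimate
        return math.sqrt(momentum_rate / (rho * area))

    # Assumed example: a 50 W horn with a 13 mm tip in water
    print(f"inlet velocity ~ {inlet_jet_velocity(50.0, 0.013):.2f} m/s")
    ```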

  3. Computational effects of inlet representation on powered hypersonic, airbreathing models

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Tatum, Kenneth E.

    1993-01-01

    Computational results are presented to illustrate the powered aftbody effects of representing the scramjet inlet on a generic hypersonic vehicle with a fairing, to divert the external flow, as compared to an operating flow-through scramjet inlet. This study is pertinent to the ground testing of hypersonic, airbreathing models employing scramjet exhaust flow simulation in typical small-scale hypersonic wind tunnels. The comparison of aftbody effects due to inlet representation is well-suited for computational study, since small model size typically precludes the ability to ingest flow into the inlet and perform exhaust simulation at the same time. Two-dimensional analysis indicates that, although flowfield differences exist for the two types of inlet representations, little, if any, difference in surface aftbody characteristics is caused by fairing over the inlet.

  4. Evidence Theory Based Uncertainty Quantification in Radiological Risk due to Accidental Release of Radioactivity from a Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Ingale, S. V.; Datta, D.

    2010-10-01

    The consequence of an accidental release of radioactivity from a nuclear power plant is assessed in terms of exposure or dose to members of the public. Assessment of risk is routed through this dose computation. Dose computation depends on the basic dose assessment model and the exposure pathways, one of which is the ingestion of contaminated food. The aim of the present paper is to compute the uncertainty associated with the risk to members of the public due to the ingestion of contaminated food. Because the governing parameters of the ingestion dose assessment model are imprecise, we apply evidence theory to compute bounds on the risk. The uncertainty is expressed through the belief and plausibility fuzzy measures.
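
    As a concrete illustration of the belief and plausibility measures used here, the following sketch (a minimal Python illustration, not the authors' code; the focal intervals and masses are hypothetical) computes Dempster-Shafer bounds for an interval hypothesis about the risk.

    ```python
    def belief_plausibility(focal_masses, query):
        """Dempster-Shafer belief and plausibility of an interval hypothesis.

        focal_masses : list of ((lo, hi), m) focal intervals with basic
                       probability assignments m summing to 1
        query        : (lo, hi), e.g. 'the risk lies in this range'
        """
        q_lo, q_hi = query
        bel = sum(m for (lo, hi), m in focal_masses
                  if q_lo <= lo and hi <= q_hi)    # focal set contained in query
        pl = sum(m for (lo, hi), m in focal_masses
                 if hi >= q_lo and lo <= q_hi)     # focal set overlaps query
        return bel, pl

    # Hypothetical masses on the (dimensionless) ingestion-pathway risk
    masses = [((1e-6, 5e-6), 0.4), ((3e-6, 8e-6), 0.5), ((8e-6, 2e-5), 0.1)]
    print(belief_plausibility(masses, (1e-6, 8e-6)))   # -> (0.9, 1.0)
    ```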

  5. An EMTP system level model of the PMAD DC test bed

    NASA Technical Reports Server (NTRS)

    Dravid, Narayan V.; Kacpura, Thomas J.; Tam, Kwa-Sur

    1991-01-01

    A power management and distribution direct current (PMAD DC) test bed was set up at the NASA Lewis Research Center to investigate Space Station Freedom Electric Power Systems issues. Efficiency of test bed operation significantly improves with a computer simulation model of the test bed as an adjunct tool of investigation. Such a model is developed using the Electromagnetic Transients Program (EMTP) and is available to the test bed developers and experimenters. The computer model is assembled on a modular basis. Device models of different types can be incorporated into the system model with only a few lines of code. A library of the various model types is created for this purpose. Simulation results and corresponding test bed results are presented to demonstrate model validity.

  6. Computer-Based Experiment for Determining Planck's Constant Using LEDs

    ERIC Educational Resources Information Center

    Zhou, Feng; Cloninger, Todd

    2008-01-01

    Visible light emitting diodes (LEDs) have been widely used as power indicators. However, after the power is switched off, it takes a while for the LED to go off. Many students were fascinated by this simple demonstration. In this paper, by making use of computer-based data acquisition and modeling, we show the voltage across the LED undergoing an…

  7. WMAP7 constraints on oscillations in the primordial power spectrum

    NASA Astrophysics Data System (ADS)

    Meerburg, P. Daniel; Wijers, Ralph A. M. J.; van der Schaar, Jan Pieter

    2012-03-01

    We use the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) data to place constraints on oscillations supplementing an almost scale-invariant primordial power spectrum. Such oscillations are predicted by a variety of models, some of which amount to assuming that there is some non-trivial choice of the vacuum state at the onset of inflation. In this paper, we will explore data-driven constraints on two distinct models of initial state modifications. In both models, the frequency, phase and amplitude are degrees of freedom of the theory for which the theoretical bounds are rather weak: both the amplitude and frequency have allowed values ranging over several orders of magnitude. This requires many computationally expensive evaluations of the model cosmic microwave background (CMB) spectra and their goodness of fit, even in a Markov chain Monte Carlo (MCMC), normally the most efficient fitting method for such a problem. To search more efficiently, we first run a densely-spaced grid, with only three varying parameters: the frequency, the amplitude and the baryon density. We obtain the optimal frequency and run an MCMC at the best-fitting frequency, randomly varying all other relevant parameters. To reduce the computational time of each power spectrum computation, we adjust both comoving momentum integration and spline interpolation (in l) as a function of frequency and amplitude of the primordial power spectrum. Applying this to the WMAP7 data allows us to improve existing constraints on the presence of oscillations. We confirm earlier findings that certain frequencies can improve the fitting over a model without oscillations. For those frequencies we compute the posterior probability, allowing us to put some constraints on the primordial parameter space of both models.

  8. Using NCAR Yellowstone for PhotoVoltaic Power Forecasts with Artificial Neural Networks and an Analog Ensemble

    NASA Astrophysics Data System (ADS)

    Cervone, G.; Clemente-Harding, L.; Alessandrini, S.; Delle Monache, L.

    2016-12-01

    A methodology based on Artificial Neural Networks (ANN) and an Analog Ensemble (AnEn) is presented to generate 72-hour deterministic and probabilistic forecasts of power generated by photovoltaic (PV) power plants, using input from a numerical weather prediction model and computed astronomical variables. ANN and AnEn are used individually and in combination to generate forecasts for three solar power plants located in Italy. The computational scalability of the proposed solution is tested using synthetic data simulating 4,450 PV power stations. The NCAR Yellowstone supercomputer is employed to test the parallel implementation of the proposed solution, ranging from 1 node (32 cores) to 4,450 nodes (141,140 cores). Results show that the combined AnEn + ANN solution yields the best results, and that the proposed solution is well suited for massive-scale computation.
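
    The analog ensemble step can be sketched compactly. The following toy Python implementation is a stand-in, not the operational code: the predictors, distance weighting, and synthetic data are assumptions chosen only to make the example self-contained. It finds the k historical forecasts closest to the current one and returns the matching observed power values as the predictive ensemble.

    ```python
    import numpy as np

    def analog_ensemble(current_fcst, hist_fcsts, hist_obs, k=20, weights=None):
        """Return the k observed PV power values whose past forecasts best
        match the current NWP forecast (a toy analog ensemble).

        current_fcst : (n_vars,) predictors for the target lead time
        hist_fcsts   : (n_hist, n_vars) past forecasts at the same lead time
        hist_obs     : (n_hist,) PV power observed when those forecasts verified
        """
        w = np.ones(hist_fcsts.shape[1]) if weights is None else np.asarray(weights)
        sigma = hist_fcsts.std(axis=0) + 1e-12     # normalize by historical spread
        dist = np.sqrt((((hist_fcsts - current_fcst) / sigma) ** 2 * w).sum(axis=1))
        analogs = np.argsort(dist)[:k]
        return hist_obs[analogs]   # ensemble members; their mean is a point forecast

    rng = np.random.default_rng(0)
    X = rng.random((1000, 3))          # e.g. irradiance, temperature, cloud cover
    y = 5.0 * X[:, 0] + rng.normal(0, 0.1, 1000)   # synthetic plant output (MW)
    ens = analog_ensemble(X[0], X[1:], y[1:], k=10)
    print(ens.mean(), ens.std())
    ```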

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Jih-Sheng

    This paper introduces control-system-design software packages, SIMNON and MATLAB/SIMULINK, for power electronics system simulation. A complete power electronics system typically consists of a rectifier bridge along with its smoothing capacitor, an inverter, and a motor. The system components, whether discrete or continuous, linear or nonlinear, are modeled by mathematical equations. Inverter control methods, such as pulse-width modulation and hysteresis current control, are expressed in either computer algorithms or digital circuits. After describing component models and control methods, computer programs are then developed for complete system simulation. Simulation results are mainly used for studying system performance, such as input and output current harmonics, torque ripples, and speed responses. Key computer programs and simulation results are demonstrated for educational purposes.

  10. The super-Turing computational power of plastic recurrent neural networks.

    PubMed

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is not constrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  11. Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic

    NASA Astrophysics Data System (ADS)

    Narendran, S.; Selvakumar, J.

    2018-04-01

    High-performance computing demands both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is a technology that offers high speed with zero static power dissipation. RQL uses an AC power supply as input rather than DC, and it provides a set of three basic gates. Series of reciprocal transmission lines are placed between the gates to avoid power loss and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. The major drawback of RQL is area: distributing the power supply properly requires splitters, which occupy a large area. Distributed arithmetic uses vector-vector multiplication in which one vector is constant and the other is a signed variable; each word acts as a binary number, and the words are rearranged and mixed to form the distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.

  12. High-frequency AC/DC converter with unity power factor and minimum harmonic distortion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wernekinch, E.R.

    1987-01-01

    The power factor is controlled by adjusting the relative position of the fundamental component of an optimized PWM-type voltage with respect to the supply voltage. Current harmonic distortion is minimized by the use of optimized firing angles for the converter at a frequency where GTOs can be used. This feature makes the approach very attractive at power levels of 100 to 600 kW. To obtain the optimized PWM pattern, a steepest-descent digital computer algorithm is used. Digital computer simulations are performed and a low-power model is constructed and tested to verify the concepts and the behavior of the model. Experimental results show that unity power factor is achieved and that the distortion in the phase currents is 10.4% at 90% of full load. This is less than achievable with sinusoidal PWM, harmonic elimination, hysteresis control, and deadbeat control for the same switching frequency.

  13. A Smoothed Eclipse Model for Solar Electric Propulsion Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Aziz, Jonathan D.; Scheeres, Daniel J.; Parker, Jeffrey S.; Englander, Jacob A.

    2017-01-01

    Solar electric propulsion (SEP) is the dominant design option for employing low-thrust propulsion on a space mission. Spacecraft solar arrays power the SEP system but are subject to blackout periods during solar eclipse conditions. Discontinuity in power available to the spacecraft must be accounted for in trajectory optimization, but gradient-based methods require a differentiable power model. This work presents a power model that smooths the eclipse transition from total eclipse to total sunlight with a logistic function. Example trajectories are computed with differential dynamic programming, a second-order gradient-based method.
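
    A minimal sketch of such a smoothed eclipse factor is shown below, assuming two logistic ramps at shadow entry and exit; the variable names and the sharpness constant are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from scipy.special import expit   # numerically stable logistic function

    def eclipse_factor(t, t_entry, t_exit, sharpness=200.0):
        """Differentiable shadow factor in [0, 1]: ~0 in umbra, ~1 in sunlight.

        Two logistic ramps replace the on/off discontinuity in available power
        so that gradient-based optimizers see smooth derivatives.
        """
        before = expit(sharpness * (t_entry - t))   # ~1 before shadow entry
        after = expit(sharpness * (t - t_exit))     # ~1 after shadow exit
        return np.clip(before + after, 0.0, 1.0)

    t = np.linspace(0.0, 5.0, 11)
    print(eclipse_factor(t, t_entry=2.0, t_exit=3.0))
    # Available SEP power would then scale as P_array * eclipse_factor(...)
    ```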

  14. Complexity Bounds for Quantum Computation

    DTIC Science & Technology

    2007-06-22

    Trustees of Boston University, Boston, MA 02215 - Complexity Bounds for Quantum Computation... This project focused on upper and lower bounds for quantum computability using constant... classical computation models, particularly emphasizing new examples of where quantum circuits are more powerful than their classical counterparts. A second

  15. Turbulent heat transfer performance of single stage turbine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amano, R.S.; Song, B.

    1999-07-01

    To increase the efficiency and the power of modern power plant gas turbines, designers are continually trying to raise the maximum turbine inlet temperature. Here, a numerical study based on the Navier-Stokes equations of three-dimensional turbulent flow in a single-stage turbine stator/rotor passage has been conducted and is reported in this paper. The full Reynolds-stress closure model (RSM) was used for the computations, and the results were compared with computations made using the Launder-Sharma low-Reynolds-number κ-ε model. The computational results obtained using these models were compared in order to investigate the turbulence effect in the near-wall region. The set of governing equations in a generalized curvilinear coordinate system was discretized using the finite volume method with non-staggered grids. The numerical modeling was performed to capture the interaction between the stator and rotor blades.

  16. galario: Gpu Accelerated Library for Analyzing Radio Interferometer Observations

    NASA Astrophysics Data System (ADS)

    Tazzari, Marco; Beaujean, Frederik; Testi, Leonardo

    2017-10-01

    The galario library exploits the computing power of modern graphic cards (GPUs) to accelerate the comparison of model predictions to radio interferometer observations. It speeds up the computation of the synthetic visibilities given a model image (or an axisymmetric brightness profile) and their comparison to the observations.
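
    The core operation that galario accelerates can be mimicked in a few lines of NumPy. The sketch below is a simplified stand-in, not galario's API (galario interpolates bilinearly on the GPU, where this uses nearest-neighbour sampling): it FFTs a model image and samples the transform at the observed (u, v) points.

    ```python
    import numpy as np

    def synthetic_visibilities(image, dxy, u, v):
        """FFT a centered model sky image and sample the transform at (u, v).

        image : (n, n) model brightness, n even
        dxy   : pixel size in radians
        u, v  : baseline coordinates in wavelengths
        """
        n = image.shape[0]
        vis = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(image))) * dxy ** 2
        du = 1.0 / (n * dxy)                          # Fourier-plane cell size
        iu = np.round(u / du).astype(int) + n // 2    # nearest-neighbour sampling
        iv = np.round(v / du).astype(int) + n // 2
        return vis[iv, iu]

    img = np.zeros((256, 256))
    img[128, 128] = 1.0                               # point source at the center
    u, v = np.array([1.0e4]), np.array([2.0e4])       # one baseline, in wavelengths
    print(np.abs(synthetic_visibilities(img, 1.0e-7, u, v)))  # flat for a point source
    ```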

  17. The Use of Computer Simulation Techniques in Educational Planning.

    ERIC Educational Resources Information Center

    Wilson, Charles Z.

    Computer simulations provide powerful models for establishing goals, guidelines, and constraints in educational planning. They are dynamic models that allow planners to examine logical descriptions of organizational behavior over time as well as permitting consideration of the large and complex systems required to provide realistic descriptions of…

  18. The Impact of Iranian Teachers Cultural Values on Computer Technology Acceptance

    ERIC Educational Resources Information Center

    Sadeghi, Karim; Saribagloo, Javad Amani; Aghdam, Samad Hanifepour; Mahmoudi, Hojjat

    2014-01-01

    This study was conducted with the aim of testing the technology acceptance model and the impact of Hofstede cultural values (masculinity/femininity, uncertainty avoidance, individualism/collectivism, and power distance) on computer technology acceptance among teachers at Urmia city (Iran) using the structural equation modeling approach. From among…

  19. Energy Use and Power Levels in New Monitors and Personal Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay

    2002-07-23

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in, and opportunities to reduce, power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that the energy consumed when monitors are off or in active use has become more important in its contribution to the overall unit energy consumption (UEC). Current ENERGY STAR monitor and computer criteria do not specify off or on power, but our results suggest opportunities for saving energy in these modes. Also, significant differences between CRT and LCD technology, and between field-measured and manufacturer-reported power levels, reveal the need for standard methods and metrics for measuring and comparing monitor power consumption.

  20. New 2D diffraction model and its applications to terahertz parallel-plate waveguide power splitters

    PubMed Central

    Zhang, Fan; Song, Kaijun; Fan, Yong

    2017-01-01

    A two-dimensional (2D) diffraction model for the calculation of diffraction fields in 2D space, and its applications to terahertz parallel-plate waveguide power splitters, are proposed in this paper. Compared with the Huygens-Fresnel principle in three-dimensional (3D) space, the proposed model provides an approximate analytical expression for the diffraction field in 2D space: the diffraction field is regarded as a superposition integral in 2D space. The calculated results obtained from the proposed diffraction model agree well with those from the software HFSS, which is based on the finite element method (FEM). Based on the proposed 2D diffraction model, two parallel-plate waveguide power splitters are presented. The splitters consist of a transmitting horn antenna, reflectors, and a receiving antenna array. The reflector is cylindrical parabolic with superimposed surface relief to efficiently couple the transmitted wave into the receiving antenna array, and is applied as a computer-generated hologram to match the transformed field to the receiving antenna aperture field. The power splitters were optimized by a modified real-coded genetic algorithm. The computed results for the splitters agree well with those obtained by HFSS, verifying the novel design method for power splitters and showing the good application prospects of the proposed 2D diffraction model. PMID:28181514

  1. Asymmetric Base-Bleed Effect on Aerospike Plume-Induced Base-Heating Environment

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Droege, Alan; D'Agostino, Mark; Lee, Young-Ching; Williams, Robert

    2004-01-01

    A computational heat transfer design methodology was developed to study the dual-engine linear aerospike plume-induced base-heating environment during a power-pack-out event in ascent flight. It includes a three-dimensional, finite volume, viscous, chemically reacting, pressure-based computational fluid dynamics formulation; a special base-bleed boundary condition; and a three-dimensional, finite volume, spectral-line-based weighted-sum-of-gray-gases absorption computational radiation heat transfer formulation. A separate radiation model was used for diagnostic purposes. The computational methodology was systematically benchmarked. In this study, near-base radiative heat fluxes were computed, and they compared well with those measured during static linear aerospike engine tests. The base-heating environment of 18 trajectory points selected from three power-pack-out scenarios was computed, and the computed asymmetric base-heating physics were analyzed. The power-pack-out condition has the most impact on convective base heating when it happens early in flight; the source of its impact is the asymmetric and reduced base bleed.

  2. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Computational fluid dynamics study on mixing mode and power consumption in anaerobic mono- and co-digestion.

    PubMed

    Zhang, Yuan; Yu, Guangren; Yu, Liang; Siddhu, Muhammad Abdul Hanan; Gao, Mengjiao; Abdeltawab, Ahmed A; Al-Deyab, Salem S; Chen, Xiaochun

    2016-03-01

    Computational fluid dynamics (CFD) was applied to investigate mixing mode and power consumption in anaerobic mono- and co-digestion. Cattle manure (CM) and corn stover (CS) were used as feedstock, and a stirred tank reactor (STR) was used as the digester. Power numbers obtained by the CFD simulation were compared with those from the experimental correlation. Results showed that the standard k-ε model was more appropriate than other turbulence models. A new index, net power production instead of gas production, was proposed to optimize the feedstock ratio for anaerobic co-digestion. Results showed that the flow field and power consumption changed significantly in co-digestion of CM and CS compared with mono-digestion of either CM or CS. For different mixing modes, the optimum feedstock ratio for co-digestion changed with net power production. The best CM/CS ratios for continuous mixing, intermittent mixing I, and intermittent mixing II were 1:1, 1:1, and 1:3, respectively. Copyright © 2016. Published by Elsevier Ltd.

  4. High performance TWT development for the microwave power module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whaley, D.R.; Armstrong, C.M.; Groshart, G.

    1996-12-31

    Northrop Grumman's ongoing development of microwave power modules (MPM) provides microwave power at various power levels, frequencies, and bandwidths for a variety of applications. Present-day requirements for the vacuum power booster traveling wave tubes of the microwave power module are becoming increasingly demanding, necessitating further enhancement of tube performance. The MPM development program at Northrop Grumman is designed specifically to meet this need through the construction and testing of a series of new tubes aimed at verifying computation and reaching high-efficiency design goals. Tubes under test incorporate several different helix designs, as well as varying electron gun and magnetic confinement configurations. Current efforts also include further development of state-of-the-art TWT modeling and computational methods at Northrop Grumman, incorporating new, more accurate models into existing design tools and developing new tools to be used in all aspects of traveling wave tube design. The current status of the Northrop Grumman MPM TWT development program will be presented.

  5. Computational and experimental aftbody flow fields for hypersonic, airbreathing configurations with scramjet exhaust flow simulation

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Tatum, Kenneth E.

    1991-01-01

    Computational results are presented for three issues pertinent to hypersonic, airbreathing vehicles employing scramjet exhaust flow simulation. The first issue consists of a comparison of schlieren photographs obtained on the aftbody of a cruise missile configuration under powered conditions with two-dimensional computational solutions. The second issue presents the powered aftbody effects of modeling the inlet with a fairing to divert the external flow as compared to an operating flow-through inlet on a generic hypersonic vehicle. Finally, a comparison of solutions examining the potential of testing powered configurations in a wind-off, instead of a wind-on, environment indicates that, depending on the extent of the three-dimensional plume, it may be possible to test aftbody-powered hypersonic, airbreathing configurations in a wind-off environment.

  6. Evaluation of concentrated space solar arrays using computer modeling. [for spacecraft propulsion and power supplies

    NASA Technical Reports Server (NTRS)

    Rockey, D. E.

    1979-01-01

    A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.

  7. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    PubMed

    Xiao, WenBo; Nazario, Gina; Wu, HuaMing; Zhang, HuaMing; Cheng, Feng

    2017-01-01

    In this article, we introduce an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data and are also influenced by the number of hidden neurons. Ranked by how strongly external conditions influence the generated power, from smallest to largest, the order is: multi-, mono-, and amorphous crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For multi- and amorphous crystalline cells, three or four hidden-layer units resulted in high correlation coefficients and low MSEs. For the mono-crystalline cell, the best results were achieved with eight hidden-layer units.
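
    The hidden-neuron sweep reported above is easy to reproduce in outline. The sketch below is a minimal scikit-learn stand-in, not the authors' model: the predictors (irradiance and cell temperature) and the synthetic target are assumptions chosen only to make the example self-contained.

    ```python
    import numpy as np
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    # Assumed predictors: irradiance (W/m^2) and cell temperature (degC)
    X = np.column_stack([rng.uniform(100, 1000, 500), rng.uniform(10, 60, 500)])
    # Synthetic stand-in for one cell type's measured output power (W)
    y = 0.15 * X[:, 0] * (1 - 0.004 * (X[:, 1] - 25)) + rng.normal(0, 2, 500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for n_hidden in (2, 3, 4, 8):          # sweep the hidden-layer size
        ann = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(n_hidden,),
                                         max_iter=5000, random_state=0))
        ann.fit(X_tr, y_tr)
        mse = mean_squared_error(y_te, ann.predict(X_te))
        print(f"hidden units = {n_hidden}: test MSE = {mse:.2f}")
    ```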

  8. Computer-aided modeling and prediction of performance of the modified Lundell class of alternators in space station solar dynamic power systems

    NASA Technical Reports Server (NTRS)

    Demerdash, Nabeel A. O.; Wang, Ren-Hong

    1988-01-01

    The main purpose of this project is the development of computer-aided models for studying the effects of various design changes on the parameters and performance characteristics of the modified Lundell class of alternators (MLA) as components of a solar dynamic power system supplying electric energy needs in the forthcoming space station. Key to this modeling effort is the computation of the magnetic field distribution in MLAs. Since the magnetic field is three-dimensional in nature, the first step in the investigation was to apply the finite element method to discretize the volume, using the tetrahedron as the basic 3-D element. Details of the stator 3-D finite element grid are given. A preliminary look at the early stage of a 3-D rotor grid is presented.

  9. Verification of Space Station Secondary Power System Stability Using Design of Experiment

    NASA Technical Reports Server (NTRS)

    Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce

    1998-01-01

    This paper describes analytical methods used in the verification of large DC power systems, with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative resistance characteristics. The ISS power system presents numerous challenges with respect to system stability, such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large, complex, distributed power systems is not practical due to the size and complexity of the system, so computer modeling has been used extensively to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on the performance of the system, about various operating scenarios, and about identification of the scenarios with potential for instability. In this paper we describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given, and examples applying DoE to the analysis and verification of the ISS power system are provided.
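
    As an outline of how DoE exposes parameter effects with a bounded number of runs, the sketch below builds a full two-level factorial over four hypothetical system parameters and estimates each factor's main effect on a stability margin. The stability_margin function is a placeholder for one simulation run, with assumed sensitivities; none of the names come from the paper.

    ```python
    from itertools import product

    import numpy as np

    # Hypothetical screening factors, each varied between low (-1) and high (+1)
    factors = ["source_impedance", "load_power", "cable_length", "bus_capacitance"]
    design = np.array(list(product((-1, 1), repeat=len(factors))))  # full 2^4 plan

    def stability_margin(settings):
        """Placeholder for one simulation run returning a margin in dB."""
        assumed_sensitivities = np.array([-3.0, -1.5, -0.5, 2.0])
        return 10.0 + settings @ assumed_sensitivities

    margins = np.array([stability_margin(row) for row in design])

    # Main effect of each factor: mean response at +1 minus mean response at -1
    for name, column in zip(factors, design.T):
        effect = margins[column == 1].mean() - margins[column == -1].mean()
        print(f"{name:>16}: main effect = {effect:+.1f} dB")
    ```

    A fractional design (e.g., 2^(4-1)) would halve the runs at the cost of confounding some interactions, which is how such screening scales to systems with hundreds of converters.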

  10. Why Won't You Do What I Want? The Informative Failures of Children and Models

    ERIC Educational Resources Information Center

    Chatham, Christopher H.; Yerys, Benjamin E.; Munakata, Yuko

    2012-01-01

    Computational models are powerful tools--too powerful, according to some. We argue that the idea that models can "do anything" is wrong, and we describe how their failures have been informative. We present new work showing surprising diversity in the effects of feedback on children's task-switching, such that some children perseverate despite this…

  11. Applicability of mathematical modeling to problems of environmental physiology

    NASA Technical Reports Server (NTRS)

    White, Ronald J.; Lujan, Barbara F.; Leonard, Joel I.; Srinivasan, R. Srini

    1988-01-01

    The paper traces the evolution of mathematical modeling and systems analysis from terrestrial research to research related to space biomedicine and back again to terrestrial research. Topics covered include: power spectral analysis of physiological signals; pattern recognition models for detection of disease processes; and, computer-aided diagnosis programs used in conjunction with a special on-line biomedical computer library.

  12. Experimental validation of convection-diffusion discretisation scheme employed for computational modelling of biological mass transport

    PubMed Central

    2010-01-01

    Background The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics software employed to analyse biological mass-transport in the vasculature. A principal consideration for computational modelling of blood-side mass-transport is convection-diffusion discretisation scheme selection. Because numerous discretisation schemes are available when developing a mass-transport numerical model, the results obtained should be validated against either benchmark theoretical solutions or experimentally obtained results. Methods An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration due to its well-defined recirculation region within the aneurysmal sac, allowing species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user: the First-Order Upwind, the Power Law, the Second-Order Upwind, and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10⁻¹⁰ m²/s in water, resulting in a Peclet number of 2,560,000, indicating strongly convection-dominated flow. Results The discretisation scheme applied to the solution of the convection-diffusion equation, for blood-side mass-transport within the vasculature, has a significant influence on the resultant species concentration field. The First-Order Upwind and the Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared to the experimental findings. Average errors of 140% and 116% were demonstrated between the experimental results and those obtained from the First-Order Upwind and Power Law schemes, respectively. However, both the Second-Order Upwind and QUICK schemes accurately predict species concentration under high-Peclet-number, convection-dominated flow conditions. Conclusion Convection-diffusion discretisation scheme selection has a strong influence on resultant species concentration fields, as determined by CFD. Furthermore, either the Second-Order or QUICK discretisation schemes should be implemented when numerically modelling convection-dominated mass-transport conditions. Finally, care should be taken not to utilize computationally inexpensive discretisation schemes at the cost of accuracy in resultant species concentration. PMID:20642816
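
    The scale of the problem can be checked with a few lines. In the sketch below, the velocity and length scales are assumptions chosen so that u·L/D reproduces the Peclet number quoted above; the loop then shows why a first-order upwind scheme struggles in this regime, since its leading truncation error behaves like an artificial diffusivity of roughly u·dx/2.

    ```python
    u = 0.04          # m/s, characteristic velocity (assumed)
    L = 0.02          # m, characteristic length of the aneurysmal sac (assumed)
    D = 3.125e-10     # m^2/s, species diffusivity from the study

    print(f"global Peclet number: {u * L / D:,.0f}")   # 2,560,000 as quoted

    for dx in (1e-3, 1e-4, 1e-5):            # candidate mesh spacings, m
        false_diffusivity = u * dx / 2.0     # first-order upwind error term
        print(f"dx = {dx:.0e} m -> false/physical diffusivity ~ "
              f"{false_diffusivity / D:.0e}")
    ```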

  13. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potok, Thomas E; Schuman, Catherine D; Young, Steven R

    Current deep learning models use highly optimized convolutional neural networks (CNN) trained on large graphics processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of a deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high-quality values of intra-layer connections and weights while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware. This represents a new capability that is not feasible with the current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  14. Evaluating the effects of real power losses in optimal power flow based storage integration

    DOE PAGES

    Castillo, Anya; Gayme, Dennice

    2017-03-27

    This study proposes a DC optimal power flow (DCOPF) with losses formulation (the ℓ-DCOPF+S problem) and uses it to investigate the role of real power losses in OPF-based grid-scale storage integration. We derive the ℓ-DCOPF+S problem by augmenting a standard DCOPF with storage (DCOPF+S) problem to include quadratic real power loss approximations. This procedure leads to a multi-period nonconvex quadratically constrained quadratic program, which we prove can be solved to optimality using either a semidefinite or second-order cone relaxation. Our approach has some important benefits over existing models. It is more computationally tractable than ACOPF with storage (ACOPF+S) formulations, and the provably exact convex relaxations guarantee that an optimal solution can be attained for a feasible problem. Adding loss approximations to a DCOPF+S model leads to a more accurate representation of locational marginal prices, which have been shown to be critical to determining optimal storage dispatch and siting in prior ACOPF+S based studies. Case studies demonstrate the improved accuracy of the ℓ-DCOPF+S model over a DCOPF+S model and the computational advantages over an ACOPF+S formulation.

  15. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, D.; Alfonsi, A.; Talbot, P.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of RISMC analysis by decreasing the number of simulation runs; for this improvement we used surrogate models instead of the actual simulation codes. This article focuses on reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but run in much less time (microseconds instead of hours/days).

  16. Computational Modeling of Open-Irrigated Electrodes for Radiofrequency Cardiac Ablation Including Blood Motion-Saline Flow Interaction

    PubMed Central

    González-Suárez, Ana; Berjano, Enrique; Guerra, Jose M.; Gerardo-Giorda, Luca

    2016-01-01

    Radiofrequency catheter ablation (RFCA) is a routine treatment for cardiac arrhythmias. During RFCA, the electrode-tissue interface temperature should be kept below 80°C to avoid thrombus formation. Open-irrigated electrodes facilitate power delivery while keeping temperatures low around the catheter. No computational model of an open-irrigated electrode in endocardial RFCA accounting for both the saline irrigation flow and the blood motion in the cardiac chamber had been proposed before; we present the first computational model including both effects at once. The model has been validated against existing experimental results. Computational results showed that the surface lesion width and blood temperature are affected by both the electrode design and the irrigation flow rate. Smaller surface lesion widths and lower blood temperatures are obtained with a higher irrigation flow rate, while the lesion depth is not affected by changing the irrigation flow rate. Larger lesions are obtained with increasing power and electrode-tissue contact, and when the electrode is placed horizontally. Overall, the computational findings are in close agreement with previous experimental results, providing an excellent tool for future catheter research. PMID:26938638

  17. The Role of Energy Reservoirs in Distributed Computing: Manufacturing, Implementing, and Optimizing Energy Storage in Energy-Autonomous Sensor Nodes

    NASA Astrophysics Data System (ADS)

    Cowell, Martin Andrew

    The world already hosts more internet-connected devices than people, and that ratio is only increasing. These devices seamlessly integrate with people's lives to collect rich data and give immediate feedback about complex systems in business, health care, transportation, and security. Every aspect of global economies is integrating distributed computing into its industrial systems, and these systems benefit from rich datasets. Managing the power demands of these distributed computers will be paramount to ensure the continued operation of these networks, and is elegantly addressed by including local energy harvesting and storage on a per-node basis. By replacing non-rechargeable batteries with energy harvesting, wireless sensor nodes can increase their lifetimes by an order of magnitude. This work investigates the coupling of high power energy storage with energy harvesting technologies to power wireless sensor nodes, with sections covering device manufacturing, system integration, and mathematical modeling. First we consider the energy storage mechanisms of supercapacitors and batteries, and identify favorable characteristics in both reservoir types. We then discuss experimental methods used to manufacture high power supercapacitors in our labs. We go on to detail the integration of our fabricated devices with collaborating labs to create functional sensor node demonstrations. With the practical knowledge gained through in-lab manufacturing and system integration, we build mathematical models to aid in device and system design. First, we model the mechanism of energy storage in porous graphene supercapacitors to aid in component architecture optimization. We then model the operation of entire sensor nodes for the purpose of optimally sizing the energy harvesting and energy reservoir components. In consideration of deploying these sensor nodes in real-world environments, we model the operation of our energy harvesting and power management systems subject to spatially and temporally varying energy availability in order to understand sensor node reliability. Looking to the future, we see an opportunity for further research to implement machine learning algorithms to control the energy resources of distributed computing networks.
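
    The reservoir-sizing question described above reduces to an energy balance. The sketch below is a toy Python illustration, not the dissertation's model: the harvest profile (50 mW peak solar-like input), the constant 5 mW load, and the storage efficiency are all assumed values. It reports the fraction of time the node stays powered for several candidate reservoir capacities.

    ```python
    import numpy as np

    def availability(harvest_w, load_w, capacity_j, dt_s=60.0, eta=0.9):
        """Minute-by-minute energy balance for one sensor node.

        harvest_w  : harvested power samples (W)
        load_w     : node load power samples (W)
        capacity_j : usable reservoir capacity (J)
        eta        : one-way storage efficiency (assumed)
        Returns the fraction of steps during which the node stays powered.
        """
        soc = capacity_j / 2.0          # start half full
        alive = 0
        for p_in, p_out in zip(harvest_w, load_w):
            soc = min(max(soc + (eta * p_in - p_out) * dt_s, 0.0), capacity_j)
            alive += soc > 0.0
        return alive / len(harvest_w)

    minutes = np.arange(7 * 24 * 60)   # one simulated week, minute resolution
    solar = 0.05 * np.clip(np.sin(2 * np.pi * minutes / 1440.0), 0.0, None)
    load = np.full(minutes.shape, 0.005)
    for cap in (50.0, 200.0, 500.0):   # candidate reservoir capacities, J
        print(f"capacity {cap:5.0f} J -> "
              f"availability {availability(solar, load, cap):.1%}")
    ```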

  18. Computational nanomedicine: modeling of nanoparticle-mediated hyperthermal cancer therapy

    PubMed Central

    Kaddi, Chanchala D; Phan, John H; Wang, May D

    2016-01-01

    Nanoparticle-mediated hyperthermia for cancer therapy is a growing area of cancer nanomedicine because of the potential for localized and targeted destruction of cancer cells. Localized hyperthermal effects are dependent on many factors, including nanoparticle size and shape, excitation wavelength and power, and tissue properties. Computational modeling is an important tool for investigating and optimizing these parameters. In this review, we focus on computational modeling of magnetic and gold nanoparticle-mediated hyperthermia, followed by a discussion of new opportunities and challenges. PMID:23914967

  19. Laser Powered Launch Vehicle Performance Analyses

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Liu, Jiwen; Wang, Ten-See (Technical Monitor)

    2001-01-01

    The purpose of this study is to establish the technical foundation for modeling the physics of laser-powered pulse detonation phenomena. Laser-powered propulsion systems involve complex fluid dynamics, thermodynamics, and radiative transfer processes. Successful prediction of the performance of laser-powered launch vehicle concepts depends on sophisticated models that reflect the underlying flow physics, including laser ray tracing and focusing, inverse Bremsstrahlung (IB) effects, finite-rate air chemistry, thermal non-equilibrium, plasma radiation, and detonation wave propagation. The proposed work will extend the baseline numerical model into an efficient design analysis tool. The proposed model is suitable for 3-D analysis using parallel computing methods.

  20. Lunar Pole Illumination and Communications Statistics Computed from GSSR Elevation Data

    NASA Technical Reports Server (NTRS)

    Bryant, Scott

    2010-01-01

    The Goldstone Solar System Radar (GSSR) group at JPL produced a Digital Elevation Model (DEM) of the lunar south pole using data obtained in 2006. This model has 40-meter horizontal resolution and about 5-meter relative vertical accuracy. This paper uses that Digital Elevation Model to compute average solar illumination and Earth visibility near the lunar south pole, quantifying the solar power and Earth communications resources at proposed lunar base locations. The elevation data were converted into local terrain horizon masks, then converted into selenographic latitude and longitude coordinates. The horizon masks were compared to latitude-longitude regions bounding the maximum Sun and Earth motions relative to the Moon. Proposed lunar south pole base sites were examined in detail, with the best site showing multi-year averages of solar power availability of 92% and Direct-To-Earth (DTE) communication availability of about 50%. Results are compared with a theoretical model and with actual Sun and Earth visibility averaged over the years 2009 to 2028. Results for the lunar north pole were computed using the GSSR DEM of the north pole produced in 1997. The paper also explores using a heliostat to reduce the photovoltaic power system mass and complexity.
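
    The horizon-mask comparison described above amounts to a simple lookup. The sketch below is a toy Python version with a fabricated mask and a stand-in solar track, not the GSSR data: the Sun counts as visible whenever its elevation exceeds the terrain horizon at its current azimuth.

    ```python
    import numpy as np

    def illumination_fraction(horizon_el_deg, sun_az_deg, sun_el_deg):
        """Fraction of time the Sun clears the local terrain horizon.

        horizon_el_deg : (360,) horizon elevation mask, one value per degree
                         of azimuth (derived from a DEM)
        sun_az_deg, sun_el_deg : time series of solar azimuth/elevation at
                         the site (assumed precomputed from an ephemeris)
        """
        az_idx = np.mod(np.round(sun_az_deg).astype(int), 360)
        visible = sun_el_deg > horizon_el_deg[az_idx]
        return visible.mean()

    # Toy mask: a 3-degree ridge to the north, flat terrain elsewhere
    mask = np.zeros(360)
    mask[350:] = 3.0
    mask[:10] = 3.0
    az = np.linspace(0.0, 360.0, 10000) % 360      # stand-in solar azimuth sweep
    el = 1.5 + 1.5 * np.sin(np.radians(az))        # near-horizon Sun, as at the poles
    print(f"average illumination: {illumination_fraction(mask, az, el):.1%}")
    ```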

  1. FHSA-SED: Two-Locus Model Detection for Genome-Wide Association Study with Harmony Search Algorithm.

    PubMed

    Tuo, Shouheng; Zhang, Junying; Yuan, Xiguo; Zhang, Yuanyuan; Liu, Zhaowen

    2016-01-01

    Two-locus models are typical significant disease models to be identified in genome-wide association studies (GWAS). Due to the intensive computational burden and the diversity of disease models, existing methods suffer from low detection power, high computation cost, and preference for certain types of disease models. In this study, two scoring functions (the Bayesian-network-based K2-score and the Gini-score) are used to characterize a pair of SNP loci as a candidate model; the two criteria are adopted simultaneously to improve identification power and tackle the preference problem. A harmony search algorithm (HSA) is improved to quickly find the most likely candidate models among all two-locus models, in which a local search algorithm with a two-dimensional tabu table is presented to avoid repeatedly evaluating disease models that have strong marginal effects. Finally, the G-test statistic is used to further test the candidate models. We investigate our method, named FHSA-SED, on 82 simulated datasets and a real AMD dataset, and compare it with two typical methods (MACOED and CSE) that have been developed recently based on swarm intelligence search algorithms. The results of the simulation experiments indicate that our method outperforms the two compared algorithms in terms of detection power, computation time, evaluation times, sensitivity (TPR), specificity (SPC), positive predictive value (PPV), and accuracy (ACC). Our method has identified two SNPs (rs3775652 and rs10511467) that may also be associated with disease in the AMD dataset.
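
    The final G-test step is standard and easy to sketch. The following Python illustration applies the G-test of independence to a contingency table of genotype counts in cases versus controls; the counts shown are hypothetical, not from the AMD dataset.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def g_test(table):
        """G-test of independence on an observed contingency table.

        table : (r, c) observed counts, e.g. two-locus genotype combinations
                (rows) by case/control status (columns)
        """
        obs = np.asarray(table, dtype=float)
        expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()
        mask = obs > 0                       # 0 * ln(0) contributes nothing
        g = 2.0 * np.sum(obs[mask] * np.log(obs[mask] / expected[mask]))
        dof = (obs.shape[0] - 1) * (obs.shape[1] - 1)
        return g, chi2.sf(g, dof)            # statistic and p-value

    # Hypothetical counts for one genotype pair: cases vs controls
    g, p = g_test([[120, 80], [60, 140]])
    print(f"G = {g:.2f}, p = {p:.2e}")
    ```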

  2. FHSA-SED: Two-Locus Model Detection for Genome-Wide Association Study with Harmony Search Algorithm

    PubMed Central

    Tuo, Shouheng; Zhang, Junying; Yuan, Xiguo; Zhang, Yuanyuan; Liu, Zhaowen

    2016-01-01

    Motivation: Two-locus models are typical significant disease models to be identified in genome-wide association studies (GWAS). Due to the intensive computational burden and the diversity of disease models, existing methods suffer from low detection power, high computation cost, and preference for certain types of disease models. Method: In this study, two scoring functions (the Bayesian-network-based K2-score and the Gini-score) are used to characterize a pair of SNP loci as a candidate model; the two criteria are adopted simultaneously to improve identification power and tackle the preference problem. A harmony search algorithm (HSA) is improved to quickly find the most likely candidate models among all two-locus models, in which a local search algorithm with a two-dimensional tabu table is presented to avoid repeatedly evaluating disease models that have strong marginal effects. Finally, the G-test statistic is used to further test the candidate models. Results: We investigate our method, named FHSA-SED, on 82 simulated datasets and a real AMD dataset, and compare it with two typical methods (MACOED and CSE) that have been developed recently based on swarm intelligence search algorithms. The results of the simulation experiments indicate that our method outperforms the two compared algorithms in terms of detection power, computation time, evaluation times, sensitivity (TPR), specificity (SPC), positive predictive value (PPV), and accuracy (ACC). Our method has identified two SNPs (rs3775652 and rs10511467) that may also be associated with disease in the AMD dataset. PMID:27014873

  3. Establishing a Cloud Computing Success Model for Hospitals in Taiwan.

    PubMed

    Lian, Jiunn-Woei

    2017-01-01

    The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services.

  4. Establishing a Cloud Computing Success Model for Hospitals in Taiwan

    PubMed Central

    Lian, Jiunn-Woei

    2017-01-01

    The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services. PMID:28112020

  5. Computerized power supply analysis: State equation generation and terminal models

    NASA Technical Reports Server (NTRS)

    Garrett, S. J.

    1978-01-01

    To aid engineers who design power supply systems, two analysis tools that can be used with the state equation analysis package were developed. These tools include integration routines that start with the description of a power supply in state equation form and yield analytical results. The first tool uses a computer program that works with the SUPER SCEPTRE circuit analysis program and prints the state equations for an electrical network. The state equations developed automatically by the computer program are used to develop an algorithm for reducing the number of state variables required to describe an electrical network. In this way a second tool is obtained, in which the order of the network is reduced and a simpler terminal model results.

  6. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  7. On the usage of ultrasound computational models for decision making under ambiguity

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron

    2018-04-01

    Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantifies the differences using maximum amplitude and power spectral density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.

  8. A Computer Model for Teaching the Dynamic Behavior of AC Contactors

    ERIC Educational Resources Information Center

    Ruiz, J.-R. R.; Espinosa, A. G.; Romeral, L.

    2010-01-01

    Ac-powered contactors are extensively used in industry in applications such as automatic electrical devices, motor starters, and heaters. In this work, a practical session that allows students to model and simulate the dynamic behavior of ac-powered electromechanical contactors is presented. Simulation is carried out using a rigorous parametric…

  9. Animation of finite element models and results

    NASA Technical Reports Server (NTRS)

    Lipman, Robert R.

    1992-01-01

    This is not intended as a complete review of computer hardware and software that can be used for animation of finite element models and results, but is instead a demonstration of the benefits of visualization using selected hardware and software. The role of raw computational power, graphics speed, and the use of videotape are discussed.

  10. Optimization of refractive liquid crystal lenses using an efficient multigrid simulation.

    PubMed

    Milton, Harry; Brimicombe, Paul; Morgan, Philip; Gleeson, Helen; Clamp, John

    2012-05-07

    A multigrid computational model, up to 40 times faster than previous techniques, has been developed to assess the performance of refractive liquid crystal lenses. Using this model, the optimum geometries producing an ideal parabolic voltage distribution were deduced for refractive liquid crystal lenses with diameters from 1 to 9 mm. The ratio of insulation thickness to lens diameter was determined to be 1:2 for small diameter lenses, tending to 1:3 for larger lenses. The model is used to propose a new method of lens operation with lower operating voltages needed to induce specific optical powers. The operating voltages are calculated for the induction of optical powers between +1.00 D and +3.00 D in a 3 mm diameter lens, with the speed of the simulation facilitating the optimization of the refractive index profile. We demonstrate that the relationship between additional applied voltage and optical power is approximately linear for optical powers under +3.00 D. The versatility of the computational simulation has also been demonstrated by modeling of in-plane electrode liquid crystal devices.

  11. Use of Transition Modeling to Enable the Computation of Losses for Variable-Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    To investigate the penalties associated with using a variable speed power turbine (VSPT) in a rotorcraft capable of vertical takeoff and landing, various analysis tools are required. Such analysis tools must be able to model the flow accurately within the operating envelope of the VSPT. For power turbines, this envelope is characterized by low Reynolds numbers and a wide range of incidence angles, positive and negative, due to the variation in shaft speed at relatively fixed corrected flows. The flow in the turbine passage is expected to be transitional and separated at high incidence. The turbulence model of Walters and Leylek was implemented in the NASA Glenn-HT code to enable a more accurate analysis of such flows. Two-dimensional heat transfer predictions of flat plate flow and two-dimensional and three-dimensional heat transfer predictions on a turbine blade were performed and are reported herein. Heat transfer computations were performed because heat transfer is a good marker for transition. The final goal is to be able to compute the aerodynamic losses. Armed with the new transition model, total pressure losses for three-dimensional flow of an Energy Efficient Engine (E3) tip section cascade for a range of incidence angles were computed in anticipation of the experimental data. The results obtained form a loss bucket for the chosen blade.

  12. Grid Integration Research | Wind | NREL

    Science.gov Websites

    A computer-generated simulation of a wind turbine. Wind Power Plant Modeling and Simulation: engineers at NREL apply the laboratory's computer-aided engineering tool, FAST, as well as their wind power plant simulation tool, Wind-Plant...

  13. Development of spectral analysis math models and software program and spectral analyzer, digital converter interface equipment design

    NASA Technical Reports Server (NTRS)

    Hayden, W. L.; Robinson, L. H.

    1972-01-01

    Spectral analyses of angle-modulated communication systems are studied by: (1) performing a literature survey of candidate power spectrum computational techniques, determining the computational requirements, and formulating a mathematical model satisfying these requirements; (2) implementing the model on a UNIVAC 1230 digital computer as the Spectral Analysis Program (SAP); and (3) developing the hardware specifications for a data acquisition system which will acquire an input modulating signal for SAP. The SAP computational technique uses an extended fast Fourier transform and represents a generalized approach for simple and complex modulating signals.
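
    The core computation SAP performs, estimating the power spectrum of an angle-modulated signal via the FFT, can be sketched in a few lines of modern Python (NumPy assumed; the carrier, tone, and modulation index below are invented, and this is not the UNIVAC code):

      import numpy as np

      fs = 8192.0                          # sample rate (Hz), illustrative
      t = np.arange(0, 1.0, 1.0 / fs)
      fc, fm, beta = 1000.0, 50.0, 5.0     # carrier, modulating tone, mod. index
      x = np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

      window = np.hanning(len(x))          # taper to reduce spectral leakage
      X = np.fft.rfft(x * window)
      psd = np.abs(X) ** 2 / (fs * np.sum(window ** 2))
      freqs = np.fft.rfftfreq(len(x), 1.0 / fs)

      # FM theory predicts sidebands at fc +/- k*fm with Bessel-function weights
      print(f"strongest component near {freqs[np.argmax(psd)]:.0f} Hz")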

  14. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models can be used to estimate aggregate electric grid reliability.
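
    The flavor of such an analysis is easy to reproduce. The toy sketch below (Python with the networkx package; the trip probability and propagation rule are invented, not the paper's reliability index) grows a Barabasi-Albert topology and measures the average size of a failure cascade seeded at a random node:

      import random
      import networkx as nx

      def cascade_fraction(g, p_trip=0.1):
          """Fraction of nodes failed after propagating from one random node."""
          failed = {random.choice(list(g))}
          frontier = list(failed)
          while frontier:
              node = frontier.pop()
              for nbr in g.neighbors(node):
                  if nbr not in failed and random.random() < p_trip:
                      failed.add(nbr)
                      frontier.append(nbr)
          return len(failed) / g.number_of_nodes()

      g = nx.barabasi_albert_graph(n=1000, m=2, seed=42)
      sizes = [cascade_fraction(g) for _ in range(200)]
      print(f"mean cascade size: {sum(sizes) / len(sizes):.3f} of all nodes")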

  15. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
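
    The essence of the probabilistic approach, as opposed to a single deterministic run, is sampling uncertain inputs and propagating them through the capability model. A hedged Python/NumPy sketch follows (the toy capability function and all distributions are invented, not the SPACE model):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000

      # invented uncertain inputs
      eff = rng.normal(0.14, 0.005, n)      # solar array efficiency
      degr = rng.uniform(0.95, 1.00, n)     # seasonal degradation factor
      loss = rng.normal(0.90, 0.02, n)      # distribution/storage efficiency

      area, flux = 2500.0, 1367.0           # array area (m^2), solar flux (W/m^2)
      capability_kw = area * flux * eff * degr * loss / 1000.0

      lo, med, hi = np.percentile(capability_kw, [5, 50, 95])
      print(f"median {med:.0f} kW, 90% interval [{lo:.0f}, {hi:.0f}] kW")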

  16. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads.

    PubMed

    Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-05-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.

  17. Optimal Load Shedding and Generation Rescheduling for Overload Suppression in Large Power Systems.

    NASA Astrophysics Data System (ADS)

    Moon, Young-Hyun

    Ever-increasing size, complexity, and operating costs in modern power systems have stimulated intensive study of optimal Load Shedding and Generator Rescheduling (LSGR) strategies for secure and economical system operation. The conventional approach to LSGR has been based on the application of LP (Linear Programming) with an approximately linearized model, and the LP algorithm is currently considered the most powerful tool for solving the LSGR problem. However, the LP algorithms presented in the literature share the following disadvantages: (i) the piecewise linearization involved in the LP algorithms requires the introduction of a number of new inequalities and slack variables, which places a significant burden on the computing facilities, and (ii) the objective functions are not formulated in terms of the state variables of the adopted models, resulting in considerable numerical inefficiency in computing the optimal solution. A new approach is presented, based on the development of a new linearized model and on the application of QP (Quadratic Programming). The changes in line flows resulting from changes to bus injection power are taken into account in the proposed model by the introduction of sensitivity coefficients, which avoids the second disadvantage mentioned above. A precise method to calculate these sensitivity coefficients is given. A comprehensive review of optimization theory is included, in which the QP algorithms developed for LSGR, based on Wolfe's method and Kuhn-Tucker theory, are evaluated in detail. The validity of the proposed model and QP algorithms has been verified and tested on practical power systems, showing a significant reduction of both computation time and memory requirements, as well as lower generation costs for the optimal solution compared with those obtained using LP. Finally, an efficient reactive power compensation algorithm is developed to suppress voltage disturbances due to load shedding, and a new method for multiple contingency simulation is presented.
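
    The QP formulation can be illustrated with a toy problem. The sketch below (Python with NumPy/SciPy; all numbers invented) minimizes a quadratic rescheduling cost subject to linearized line-flow limits expressed through sensitivity coefficients, using SciPy's SLSQP solver as a stand-in for the dissertation's Wolfe/Kuhn-Tucker machinery:

      import numpy as np
      from scipy.optimize import minimize

      S = np.array([[0.4, -0.3, 0.1],       # sensitivity of each line flow
                    [0.2, 0.5, -0.6]])      # to each bus injection change
      flow0 = np.array([1.3, 0.7])          # present line flows (p.u.)
      limit = np.array([1.0, 1.0])          # thermal limits (p.u.)
      w = np.array([1.0, 2.0, 1.5])         # quadratic rescheduling costs

      cons = [{"type": "ineq", "fun": lambda dp: limit - (flow0 + S @ dp)},
              {"type": "ineq", "fun": lambda dp: limit + (flow0 + S @ dp)},
              {"type": "eq",   "fun": lambda dp: np.sum(dp)}]  # injections balance

      res = minimize(lambda dp: np.sum(w * dp ** 2), x0=np.zeros(3),
                     method="SLSQP", constraints=cons)
      print("optimal injection changes (p.u.):", np.round(res.x, 3))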

  18. Approximate Bayesian Computation Using Markov Chain Monte Carlo Simulation: Theory, Concepts, and Applications

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Vrugt, J. A.

    2013-12-01

    The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at increasingly finer spatial and temporal scales. Reconciling these system models with field and remote sensing data is a difficult task, particularly because average measures of model/data similarity inherently lack the power to provide a meaningful comparative evaluation of the consistency in model form and function. The very construction of the likelihood function - as a summary variable of the (usually averaged) properties of the error residuals - dilutes and mixes the available information into an index having little remaining correspondence to specific behaviors of the system (Gupta et al., 2008). The quest for a more powerful method for model evaluation has inspired Vrugt and Sadegh [2013] to introduce "likelihood-free" inference as a vehicle for diagnostic model evaluation. This class of methods is also referred to as Approximate Bayesian Computation (ABC); it relaxes the need for an explicit likelihood function in favor of one or more summary statistics rooted in hydrologic theory that together have a much stronger and more compelling diagnostic power than some aggregated measure of the size of the error residuals. Here, we introduce an efficient ABC sampling method that is orders of magnitude faster in exploring the posterior parameter distribution than the commonly used rejection and Population Monte Carlo (PMC) samplers. Our methodology uses Markov Chain Monte Carlo simulation with DREAM and takes advantage of a simple computational trick to resolve discontinuity problems in the application of set-theoretic summary statistics. We also demonstrate a set of summary statistics that are rather insensitive to errors in the forcing data, which enhances the prospects of detecting model structural deficiencies.
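
    The simplest member of the ABC family, plain rejection sampling against a summary-statistic tolerance, can be sketched as follows (Python/NumPy; the toy recession model, tolerance, and statistics are invented, and the DREAM-based sampler described above is far more efficient):

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate(k, n=365):
          """Toy discharge model: exponential recession plus noisy forcing."""
          q = np.empty(n)
          q[0] = 1.0
          forcing = rng.exponential(0.1, n)
          for t in range(1, n):
              q[t] = (1.0 - k) * q[t - 1] + forcing[t]
          return q

      def summary(q):
          return np.array([q.mean(), q.std()])   # statistics, not residuals

      obs = summary(simulate(0.3))               # synthetic "observations"
      accepted = [k for k in rng.uniform(0.01, 0.99, 2000)
                  if np.linalg.norm(summary(simulate(k)) - obs) < 0.05]
      if accepted:
          print(f"{len(accepted)} draws kept; posterior mean k = {np.mean(accepted):.2f}")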

  19. Modeling Mendel's Laws on Inheritance in Computational Biology and Medical Sciences

    ERIC Educational Resources Information Center

    Singh, Gurmukh; Siddiqui, Khalid; Singh, Mankiran; Singh, Satpal

    2011-01-01

    The current research article is based on a simple and practical way of employing the computational power of the widely available, versatile software MS Excel 2007 to perform interactive computer simulations for undergraduate/graduate students in biology, biochemistry, biophysics, microbiology, and medicine in college and university classroom settings. To…

  20. Thermal and optical performance of encapsulation systems for flat-plate photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Minning, C. P.; Coakley, J. F.; Perrygo, C. M.; Garcia, A., III; Cuddihy, E. F.

    1981-01-01

    The electrical power output from a photovoltaic module is strongly influenced by the thermal and optical characteristics of the module encapsulation system. Described are the methodology and computer model for performing fast and accurate thermal and optical evaluations of different encapsulation systems. The computer model is used to evaluate cell temperature, solar energy transmittance through the encapsulation system, and electric power output for operation in a terrestrial environment. Extensive results are presented for both superstrate-module and substrate-module design schemes which include different types of silicon cell materials, pottants, and antireflection coatings.
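
    A back-of-envelope version of the coupled thermal/optical evaluation can be written with the common NOCT approximation for cell temperature (Python; all coefficients are illustrative, not the report's values):

      def cell_temperature(t_ambient, irradiance, noct=45.0):
          """NOCT model: cell temperature rises linearly with irradiance (W/m^2)."""
          return t_ambient + (noct - 20.0) / 800.0 * irradiance

      def module_power(irradiance, t_ambient, area=1.0, eta_ref=0.14,
                       transmittance=0.95, temp_coeff=0.004):
          """Electrical output (W) with efficiency derated by cell temperature."""
          t_cell = cell_temperature(t_ambient, irradiance)
          eta = eta_ref * (1.0 - temp_coeff * (t_cell - 25.0))
          return transmittance * irradiance * area * eta, t_cell

      p, t = module_power(irradiance=1000.0, t_ambient=25.0)
      print(f"output = {p:.0f} W at cell temperature = {t:.0f} C")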

  1. Design analysis and computer-aided performance evaluation of shuttle orbiter electrical power system. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Studies were conducted to develop appropriate space shuttle electrical power distribution and control (EPDC) subsystem simulation models and to apply the computer simulations to systems analysis of the EPDC. A previously developed software program (SYSTID) was adapted for this purpose. The following objectives were attained: (1) significant enhancement of the SYSTID time domain simulation software, (2) generation of functionally useful shuttle EPDC element models, and (3) illustrative simulation results in the analysis of EPDC performance, under the conditions of fault, current pulse injection due to lightning, and circuit protection sizing and reaction times.

  2. Modeling Creep-Fatigue-Environment Interactions in Steam Turbine Rotor Materials for Advanced Ultra-supercritical Coal Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Chen

    2014-04-01

    The goal of this project is to model creep-fatigue-environment interactions in steam turbine rotor materials (Alloy 282) for advanced ultra-supercritical (A-USC) coal power plants, to develop and demonstrate computational algorithms for alloy property predictions, and to determine and model key mechanisms that contribute to the damage caused by creep-fatigue-environment interactions.

  3. Power system observability and dynamic state estimation for stability monitoring using synchrophasor measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Kai; Qi, Junjian; Kang, Wei

    2016-08-01

    Growing penetration of intermittent resources such as renewable generations increases the risk of instability in a power grid. This paper introduces the concept of observability and its computational algorithms for a power grid monitored by the wide-area measurement system (WAMS) based on synchrophasors, e.g. phasor measurement units (PMUs). The goal is to estimate real-time states of generators, especially for potentially unstable trajectories, the information that is critical for the detection of rotor angle instability of the grid. The paper studies the number and siting of synchrophasors in a power grid so that the state of the system can be accurately estimated in the presence of instability. An unscented Kalman filter (UKF) is adopted as a tool to estimate the dynamic states that are not directly measured by synchrophasors. The theory and its computational algorithms are illustrated in detail by using a 9-bus 3-generator power system model and then tested on a 140-bus 48-generator Northeast Power Coordinating Council power grid model. Case studies on those two systems demonstrate the performance of the proposed approach using a limited number of synchrophasors for dynamic state estimation for stability assessment and its robustness against moderate inaccuracies in model parameters.
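
    The core of the UKF used above is the unscented transform: sigma points of a Gaussian state are pushed through the nonlinear dynamics, and the transformed mean and covariance are rebuilt. A minimal NumPy sketch follows, with a toy swing-equation step standing in for the paper's generator model:

      import numpy as np

      def unscented_transform(mean, cov, f, kappa=1.0):
          """Propagate (mean, cov) through nonlinear f via 2n+1 sigma points."""
          n = len(mean)
          L = np.linalg.cholesky((n + kappa) * cov)
          sigma = np.vstack([mean, mean + L.T, mean - L.T])
          w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
          w[0] = kappa / (n + kappa)
          y = np.array([f(s) for s in sigma])
          y_mean = w @ y
          y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
          return y_mean, y_cov

      def swing_step(x, dt=0.01):
          """Toy one-machine swing dynamics: rotor angle and speed deviation."""
          delta, omega = x
          return np.array([delta + dt * omega,
                           omega + dt * (1.0 - 1.5 * np.sin(delta))])

      m, P = unscented_transform(np.array([0.5, 0.0]),
                                 np.diag([0.01, 0.04]), swing_step)
      print("propagated mean:", m)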

  4. Climate Ocean Modeling on a Beowulf Class System

    NASA Technical Reports Server (NTRS)

    Cheng, B. N.; Chao, Y.; Wang, P.; Bondarenko, M.

    2000-01-01

    With the growing power and shrinking cost of personal computers, the availability of fast Ethernet interconnections, and public domain software packages, it is now possible to combine them to build desktop parallel computers (named Beowulf or PC clusters) at a fraction of what it would cost to buy systems of comparable power from supercomputer companies. This led us to build and assemble our own system, specifically for climate ocean modeling. In this article, we present our experience with such a system, discuss its network performance, and provide some performance comparison data with both the HP SPP2000 and the Cray T3E for an ocean model used in present-day oceanographic research.

  5. Implementation and Testing of Turbulence Models for the F18-HARV Simulation

    NASA Technical Reports Server (NTRS)

    Yeager, Jessie C.

    1998-01-01

    This report presents three methods of implementing the Dryden power spectral density model for atmospheric turbulence. Included are the equations which define the three methods and computer source code written in Advanced Continuous Simulation Language to implement the equations. Time-history plots and sample statistics of simulated turbulence results from executing the code in a test program are also presented. Power spectral densities were computed for sample sequences of turbulence and are plotted for comparison with the Dryden spectra. The three model implementations were installed in a nonlinear six-degree-of-freedom simulation of the High Alpha Research Vehicle airplane. Aircraft simulation responses to turbulence generated with the three implementations are presented as plots.
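
    For the longitudinal gust component, the Dryden spectrum corresponds to a first-order shaping filter driven by white noise, which suggests a very compact implementation. A sketch in Python/NumPy with an Euler-discretized filter (airspeed, scale length, and intensity are illustrative, and this is not the report's ACSL code):

      import numpy as np

      def dryden_longitudinal(n, dt, v=100.0, scale=200.0, sigma=1.5, rng=None):
          """Generate n samples of the longitudinal gust u_g (m/s)."""
          rng = rng or np.random.default_rng()
          a = v / scale                      # filter break frequency (rad/s)
          u = np.zeros(n)
          for k in range(n - 1):
              w = rng.standard_normal()      # unit white noise
              u[k + 1] = u[k] + dt * (-a * u[k]) + sigma * np.sqrt(2.0 * a * dt) * w
          return u

      gust = dryden_longitudinal(n=50_000, dt=0.01)
      print(f"sample std = {gust.std():.2f} m/s (target sigma = 1.5)")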

  6. The emerging role of cloud computing in molecular modelling.

    PubMed

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

    There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. The Designing of CALM (Computer Anxiety and Learning Measure): Validation of a Multidimensional Measure of Anxiety and Cognitions Relating to Adult Learning of Computing Skills Using Structural Equation Modeling.

    ERIC Educational Resources Information Center

    McInerney, Valentina; Marsh, Herbert W.; McInerney, Dennis M.

    This paper discusses the process through which a powerful multidimensional measure of affect and cognition in relation to adult learning of computing skills was derived from its early theoretical stages to its validation using structural equation modeling. The discussion emphasizes the importance of ensuring a strong substantive base from which to…

  8. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Radman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    The computer programs and derivations generated in support of the modeling and design optimization program are presented. Programs for the buck regulator, boost regulator, and buck-boost regulator are described. The computer program for the design optimization calculations is presented. Constraints for the boost and buck-boost converter were derived. Derivations of state-space equations and transfer functions are presented. Computer lists for the converters are presented, and the input parameters justified.

  9. Experimental investigations, modeling, and analyses of high-temperature devices for space applications: Part 1. Final report, June 1996--December 1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tournier, J.; El-Genk, M.S.; Huang, L.

    1999-01-01

    The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code was to compare the experimental measurements with computer simulations, upgrade the model as appropriate, and conduct investigations of various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute of Space and Nuclear Power Studies participated in vacuum testing of PX series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consisted of a sodium pressure loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX series cells which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady state performance of systems of cells.

  10. Experimental investigations, modeling, and analyses of high-temperature devices for space applications: Part 2. Final report, June 1996--December 1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tournier, J.; El-Genk, M.S.; Huang, L.

    1999-01-01

    The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code was to compare the experimental measurements with computer simulations, upgrade the model as appropriate, and conduct investigations of various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute of Space and Nuclear Power Studies participated in vacuum testing of PX series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consisted of a sodium pressure loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX series cells which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady state performance of systems of cells.

  11. Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.

    PubMed

    Pearl, Lisa S; Sprouse, Jon

    2015-06-01

    Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.

  12. Computer Simulation of an Electric Trolley Bus

    DOT National Transportation Integrated Search

    1979-12-01

    This report describes a computer model developed at the Transportation Systems Center (TSC) to simulate power/propulsion characteristics of an urban trolley bus. The work conducted in this area is sponsored by the Urban Mass Transportation Administra...

  13. Performance Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis with Different Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel

    Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility test of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today’s power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
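
    The counter-based scheme itself is simple to sketch: workers atomically increment a shared counter to claim the next contingency case, so faster workers naturally take on more cases. A Python multiprocessing sketch (the sleep stands in for one power-flow solve; case counts are invented):

      import multiprocessing as mp
      import random
      import time

      def worker(counter, lock, n_cases, done):
          handled = 0
          while True:
              with lock:                    # atomic fetch-and-increment
                  case = counter.value
                  counter.value += 1
              if case >= n_cases:
                  break
              time.sleep(random.uniform(0.001, 0.01))   # one contingency "solve"
              handled += 1
          with lock:
              done.value += handled

      if __name__ == "__main__":
          n_cases, n_workers = 200, 4
          counter, done = mp.Value("i", 0), mp.Value("i", 0)
          lock = mp.Lock()
          procs = [mp.Process(target=worker, args=(counter, lock, n_cases, done))
                   for _ in range(n_workers)]
          for p in procs:
              p.start()
          for p in procs:
              p.join()
          print(f"{done.value} contingency cases completed by {n_workers} workers")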

  14. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits, higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. In computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude at the same number of amplifying sections. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
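
    The step-adjustment idea can be illustrated generically with step doubling: take one full step and two half steps, accept the step if they agree, and grow or shrink the step size accordingly (Python; the first-order integrator and gain profile below are invented stand-ins for the paper's higher-order APA scheme):

      import math

      def gain(z):                          # invented z-dependent gain (1/m)
          return 0.5 * math.exp(-z / 10.0)

      def step(p, z, dz):                   # one Euler step of dP/dz = g(z) * P
          return p + dz * gain(z) * p

      def integrate(p0, z_end, dz=0.5, tol=1e-6):
          p, z, accepted = p0, 0.0, 0
          while z < z_end:
              dz = min(dz, z_end - z)
              full = step(p, z, dz)
              half = step(step(p, z, dz / 2), z + dz / 2, dz / 2)
              if abs(half - full) <= tol * abs(half):
                  p, z, accepted = half, z + dz, accepted + 1
                  dz *= 1.5                 # error small: lengthen next step
              else:
                  dz *= 0.5                 # error large: retry with shorter step
          return p, accepted

      p_out, n_steps = integrate(p0=1e-3, z_end=20.0)
      print(f"output power {p_out * 1e3:.2f} mW after {n_steps} accepted steps")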

  15. Variable Generation Power Forecasting as a Big Data Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haupt, Sue Ellen; Kosovic, Branko

    Blending growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day-ahead planning and real-time operations, the power from wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable to a specific time frame, and the application of computational intelligence techniques to successfully blend all of the model and observational information in real time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.

  16. Variable Generation Power Forecasting as a Big Data Problem

    DOE PAGES

    Haupt, Sue Ellen; Kosovic, Branko

    2016-10-10

    Blending growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day-ahead planning and real-time operations, the power from wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable to a specific time frame, and the application of computational intelligence techniques to successfully blend all of the model and observational information in real time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.

  17. Higher-order ice-sheet modelling accelerated by multigrid on graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian; Egholm, David

    2013-04-01

    Higher-order ice flow modelling is a very computer-intensive process, owing primarily to the nonlinear influence of the horizontal stress coupling. When applied to simulating long-term glacial landscape evolution, ice-sheet models must consider very long time series, while both high temporal and spatial resolution is needed to resolve small effects. Higher-order and full-Stokes models have therefore seen very limited usage in this field. However, recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large-scale scientific computations. General purpose GPU (GPGPU) technology is cheap, has a low power consumption, and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists working on ice flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this end we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a non-linear Red-Black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides the inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
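
    The Red-Black Gauss-Seidel smoother named above is what makes the solver GPU-friendly: same-colored cells share no neighbors, so each color can be updated in one fully parallel pass. A NumPy sketch for the 2D Poisson problem, where the vectorized slices play the role of a GPU kernel launch (grid size and sweep count are illustrative):

      import numpy as np

      def red_black_gauss_seidel(u, f, h, sweeps=50):
          """Smooth -laplace(u) = f; boundary values in u's rim stay fixed."""
          for _ in range(sweeps):
              for parity in (0, 1):              # red cells, then black cells
                  for i0 in (1, 2):              # interior rows, stride 2
                      j0 = 1 + (parity + i0 + 1) % 2
                      u[i0:-1:2, j0:-1:2] = 0.25 * (
                          u[i0 - 1:-2:2, j0:-1:2] + u[i0 + 1::2, j0:-1:2] +
                          u[i0:-1:2, j0 - 1:-2:2] + u[i0:-1:2, j0 + 1::2] +
                          h * h * f[i0:-1:2, j0:-1:2])
          return u

      n = 65
      h = 1.0 / (n - 1)
      u = red_black_gauss_seidel(np.zeros((n, n)), np.ones((n, n)), h)
      print(f"value at grid center after smoothing: {u[n // 2, n // 2]:.4f}")

    In an FAS multigrid setup like the one described above, only a few such sweeps are used per grid level; the coarse grids then remove the smooth error components that Gauss-Seidel alone reduces slowly.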

  18. Nonlinear power flow feedback control for improved stability and performance of airfoil sections

    DOEpatents

    Wilson, David G.; Robinett, III, Rush D.

    2013-09-03

    A computer-implemented method of determining the pitch stability of an airfoil system, comprising using a computer to numerically integrate a differential equation of motion that includes terms describing PID controller action. In one model, the differential equation characterizes the time-dependent response of the airfoil's pitch angle, α. The computer model calculates limit cycles of the model, which represent the stability boundaries of the airfoil system. Once the stability boundary is known, feedback control can be implemented, by using, for example, a PID controller to control a feedback actuator. The method allows the PID controller gain constants, K_I, K_p, and K_d, to be optimized. This permits operation closer to the stability boundaries, while preventing the physical apparatus from unintentionally crossing the stability boundaries. Operating closer to the stability boundaries permits greater power efficiencies to be extracted from the airfoil system.
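
    A hedged sketch of that procedure follows (Python; the nonlinear pitch dynamics and the gains are invented and do not reproduce the patented model): integrate the pitch equation of motion with the PID terms included, and observe whether the response decays or settles into a limit cycle.

      import numpy as np

      def simulate(kp, ki, kd, alpha0=0.2, dt=1e-3, t_end=20.0):
          """Integrate toy pitch dynamics under PID feedback; return alpha(t)."""
          alpha, alpha_dot, integral = alpha0, 0.0, 0.0
          history = []
          for _ in range(int(t_end / dt)):
              error = -alpha                     # regulate pitch angle to zero
              integral += error * dt
              moment = kp * error + ki * integral - kd * alpha_dot
              # invented nonlinearity: stiffness softens with amplitude
              alpha_ddot = -4.0 * alpha + 0.8 * alpha**3 - 0.1 * alpha_dot + moment
              alpha_dot += alpha_ddot * dt
              alpha += alpha_dot * dt
              history.append(alpha)
          return np.array(history)

      response = simulate(kp=2.0, ki=0.5, kd=1.0)
      print(f"final |alpha| = {abs(response[-1]):.2e} (decay indicates stability)")

    Sweeping the gains and recording which combinations still decay traces out the stability boundary that the method exploits.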

  19. Unified Performance and Power Modeling of Scientific Workloads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shuaiwen; Barker, Kevin J.; Kerbyson, Darren J.

    2013-11-17

    It is expected that scientific applications executing on future large-scale HPC systems must be optimized not only in terms of performance, but also in terms of power consumption. As power and energy become increasingly constrained resources, researchers and developers must have access to tools that will allow for accurate prediction of both performance and power consumption. Reasoning about performance and power consumption in concert will be critical for achieving maximum utilization of limited resources on future HPC systems. To this end, we present a unified performance and power model for the Nek-Bone mini-application developed as part of the DOE's CESAR Exascale Co-Design Center. Our models consider the impact of computation, point-to-point communication, and collective communication.

  20. Heat transfer, thermal stress analysis and the dynamic behaviour of high power RF structures. [MARC and SUPERFISH codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKeown, J.; Labrie, J.P.

    1983-08-01

    A general purpose finite element computer code called MARC is used to calculate the temperature distribution and dimensional changes in linear accelerator rf structures. Both steady state and transient behaviour are examined with the computer model. Combining results from MARC with the cavity evaluation computer code SUPERFISH, the static and dynamic behaviour of a structure under power is investigated. Structure cooling is studied to minimize loss in shunt impedance and frequency shifts during high power operation. Results are compared with an experimental test carried out on a cw 805 MHz on-axis coupled structure at an energy gradient of 1.8 MeV/m. The model has also been used to compare the performance of on-axis and coaxial structures and has guided the mechanical design of structures suitable for average gradients in excess of 2.0 MeV/m at 2.45 GHz.

  1. Energy and time determine scaling in biological and computer designs

    PubMed Central

    Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie

    2016-01-01

    Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy–time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431524

  2. Energy and time determine scaling in biological and computer designs.

    PubMed

    Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie

    2016-08-19

    Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy-time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue 'The major synthetic evolutionary transitions'. © 2016 The Author(s).

  3. Formulation of advanced consumables management models: Executive summary. [modeling spacecraft environmental control, life support, and electric power supply systems

    NASA Technical Reports Server (NTRS)

    Daly, J. K.; Torian, J. G.

    1979-01-01

    An overview of studies conducted to establish the requirements for advanced subsystem analytical tools is presented. Modifications are defined for updating current computer programs used to analyze environmental control, life support, and electric power supply systems so that consumables for future advanced spacecraft may be managed.

  4. Network model and short circuit program for the Kennedy Space Center electric power distribution system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Assumptions made and techniques used in modeling the power network to the 480 volt level are discussed. Basic computational techniques used in the short circuit program are described along with a flow diagram of the program and operational procedures. Procedures for incorporating network changes are included in this user's manual.

  5. Development of a Modular, Provider Customized Airway Trainer

    DTIC Science & Technology

    2015-11-25

    Appendix A: Instructions for Airway Model with sensors and computer (Raspberry Pi); Appendix B: Instructions for... Raspberry Pi instructions: 1. Connect the multicolor sensor cable and two blue sensor cables (blue sensor cable orientation does not matter). 2. Plug in power to the screen and Raspberry Pi (two separate...

  6. Study of electrode slice forming of bicycle dynamo hub power connector

    NASA Astrophysics Data System (ADS)

    Chen, Dyi-Cheng; Jao, Chih-Hsuan

    2013-12-01

    Taiwan's bicycle industry has earned an international reputation as a bicycle kingdom, and as global warming raises worldwide interest in green energy, the development of the electrode slice of the hub dynamo and its power output connector brings new hope to the bicycle industry. In this study, patents related to the power output connector were surveyed, and the collected documents served as the basis for a design that delivers power output through a simple connector with the fewest structural components. The design objectives for the power output connector were lowest cost, strongest structure, and highest output efficiency. The computer-aided drawing software SolidWorks was used to establish 3D models of the power output connector parts; the overall assembly considered part types, assembly concepts, weather resistance, water resistance, corrosion resistance, vibration resistance, and power flow stability. The 3D models were then imported into computer-aided finite element analysis software to simulate the expected manufacturing process of the power output connector parts. A series of simulation analyses, in which the variables were first-stage and second-stage forming, were run to examine the effective stress, effective strain, press speed, and die radial load distribution when forming the electrode slice of a bicycle dynamo hub.

  7. Raster-Based Approach to Solar Pressure Modeling

    NASA Technical Reports Server (NTRS)

    Wright, Theodore W. II

    2013-01-01

    An algorithm has been developed to take advantage of the graphics processing hardware in modern computers to efficiently compute high-fidelity solar pressure forces and torques on spacecraft, taking into account the possibility of self-shading due to the articulation of spacecraft components such as solar arrays. The process is easily extended to compute other results that depend on three-dimensional attitude analysis, such as solar array power generation or free molecular flow drag. The impact of photons upon a spacecraft introduces small forces and moments. The magnitude and direction of the forces depend on the material properties of the spacecraft components being illuminated. Which parts of the components are lit depends on the orientation of the craft with respect to the Sun, as well as the gimbal angles for any significant moving external parts (solar arrays, typically). Some components may shield others from the Sun. The purpose of this innovation is to enable high-fidelity computation of solar pressure and power generation effects on illuminated portions of spacecraft, taking self-shading from spacecraft attitude and movable components into account. The key idea in this innovation is to compute results dependent upon complicated geometry by using an image to break the problem into thousands or millions of sub-problems with simple geometry; the results from the simpler problems are then combined to give high-fidelity results for the full geometry. This is done by constructing a 3D model of a spacecraft with the OpenGL graphics library and running that model on a modern computer's 3D-accelerated video processor. This quickly and accurately generates a view of the model (as shown on a computer screen) that takes rotation and articulation of spacecraft components into account. When this view is interpreted as the spacecraft as seen by the Sun, only the portions of the craft visible in the view are illuminated. The view as shown on the computer screen is composed of up to millions of pixels. Each of those pixels is associated with a small illuminated area of the spacecraft. For each pixel, it is possible to compute its position, angle (surface normal) from the view direction, and the spacecraft material (and therefore, optical coefficients) associated with that area. With this information, the area associated with each pixel can be modeled as a simple flat plate for calculating solar pressure. The vector sum of these individual flat-plate models is a high-fidelity approximation of the solar pressure forces and torques on the whole vehicle. In addition to using the optical coefficients associated with each spacecraft material to calculate solar pressure, a power generation coefficient is added for computing solar array power generation from the sum of the illuminated areas. Similarly, other area-based calculations, such as free molecular flow drag, are also enabled. Because the model rendering is separated from the other calculations, it is relatively easy to add a new model to explore a new vehicle or mission configuration. Adding a new model is performed by adding OpenGL code, but a future version might read a mesh file exported from a computer-aided design (CAD) system to enable very rapid turnaround for new designs.
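
    The per-pixel reduction at the heart of the approach can be sketched directly, assuming the renderer has already produced a surface normal and material coefficients for every lit pixel. The flat-plate radiation-pressure formula below is the standard specular/diffuse model; the pixel data and coefficients are invented (Python/NumPy):

      import numpy as np

      P_SUN = 4.56e-6                       # solar pressure at 1 AU, N/m^2
      s = np.array([0.0, 0.0, 1.0])         # unit vector toward the Sun

      def pixel_forces(normals, spec, diff, pixel_area):
          """Flat-plate force per pixel; rows of `normals` are unit normals."""
          cos_t = normals @ s
          lit = cos_t > 0.0                 # back-facing pixels contribute nothing
          n, c = normals[lit], cos_t[lit][:, None]
          rs, rd = spec[lit][:, None], diff[lit][:, None]
          return -P_SUN * pixel_area * c * (
              (1.0 - rs) * s + 2.0 * (rs * c + rd / 3.0) * n)

      # three illustrative pixels: two solar-cell areas, one MLI-covered patch
      normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.1, 0.995], [0.2, 0.0, 0.98]])
      normals /= np.linalg.norm(normals, axis=1, keepdims=True)
      spec = np.array([0.1, 0.1, 0.6])      # specular reflectivity per material
      diff = np.array([0.1, 0.1, 0.2])      # diffuse reflectivity per material
      total = pixel_forces(normals, spec, diff, pixel_area=1e-4).sum(axis=0)
      print("net solar-pressure force (N):", total)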

  8. Modeling Large Scale Circuits Using Massively Parallel Descrete-Event Simulation

    DTIC Science & Technology

    2013-06-01

    As supercomputer systems grow to exascale levels of performance, the smallest elements of a single processor can greatly affect the entire computer system (e.g., its power consumption)... Warp Speed 10.0. 2.0 INTRODUCTION: As supercomputer systems approach exascale, the core count will exceed 1024 and the number of transistors used in...

  9. Impact of computational structure-based methods on drug discovery.

    PubMed

    Reynolds, Charles H

    2014-01-01

    Structure-based drug design has become an indispensable tool in drug discovery. The emergence of structure-based design is due to gains in structural biology that have provided exponential growth in the number of protein crystal structures, new computational algorithms and approaches for modeling protein-ligand interactions, and the tremendous growth of raw computer power in the last 30 years. Computer modeling and simulation have made major contributions to the discovery of many groundbreaking drugs in recent years. Examples are presented that highlight the evolution of computational structure-based design methodology, and the impact of that methodology on drug discovery.

  10. DISE Summary Report (1992)

    DTIC Science & Technology

    1994-03-01

    Network Time Protocol (NTP) Specification and Implementation, RFC-1119, Network Working Group, September... OSI Remote Operations Service, RFC-... ISIS implements a powerful model of distributed computation... approximately 94% of the total computation time. Timing results are...

  11. Quantum computation with coherent spin states and the close Hadamard problem

    NASA Astrophysics Data System (ADS)

    Adcock, Mark R. A.; Høyer, Peter; Sanders, Barry C.

    2016-04-01

    We study a model of quantum computation based on the continuously parameterized yet finite-dimensional Hilbert space of a spin system. We explore the computational powers of this model by analyzing a pilot problem we refer to as the close Hadamard problem. We prove that the close Hadamard problem can be solved in the spin system model with arbitrarily small error probability in a constant number of oracle queries. We conclude that this model of quantum computation is suitable for solving certain types of problems. The model is effective for problems where symmetries between the structure of the information associated with the problem and the structure of the unitary operators employed in the quantum algorithm can be exploited.

  12. The electromagnetic modeling of thin apertures using the finite-difference time-domain technique

    NASA Technical Reports Server (NTRS)

    Demarest, Kenneth R.

    1987-01-01

    A technique which computes transient electromagnetic responses of narrow apertures in complex conducting scatterers was implemented as an extension of previously developed Finite-Difference Time-Domain (FDTD) computer codes. Although these apertures are narrow with respect to the wavelengths contained within the power spectrum of excitation, this technique does not require significantly more computer resources to attain the increased resolution at the apertures. In the report, an analytical technique which utilizes Babinet's principle to model the apertures is developed, and an FDTD computer code which utilizes this technique is described.
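
    The underlying FDTD update pattern that the aperture technique extends is compact enough to show in one dimension (Python/NumPy; free space, normalized units, and a soft Gaussian source, none of which reproduce the report's 3D code or its Babinet treatment of the slots):

      import numpy as np

      n_cells, n_steps = 400, 900
      ez = np.zeros(n_cells)          # electric field at integer grid points
      hy = np.zeros(n_cells - 1)      # magnetic field at half-grid points
      courant = 0.5                   # dt*c/dx; must be <= 1 for stability

      for step in range(n_steps):
          hy += courant * (ez[1:] - ez[:-1])            # H from the curl of E
          ez[1:-1] += courant * (hy[1:] - hy[:-1])      # E from the curl of H
          ez[50] += np.exp(-((step - 60) / 20.0) ** 2)  # soft Gaussian source

      print(f"peak |Ez| after {n_steps} steps: {np.abs(ez).max():.3f}")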

  13. Comparing Virtual and Physical Robotics Environments for Supporting Complex Systems and Computational Thinking

    ERIC Educational Resources Information Center

    Berland, Matthew; Wilensky, Uri

    2015-01-01

    Both complex systems methods (such as agent-based modeling) and computational methods (such as programming) provide powerful ways for students to understand new phenomena. To understand how to effectively teach complex systems and computational content to younger students, we conducted a study in four urban middle school classrooms comparing…

  14. Adapting Instruction to Individual Learner Differences: A Research Paradigm for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Mills, Steven C.; Ragan, Tillman J.

    This paper examines a research paradigm that is particularly suited to experimentation-related computer-based instruction and integrated learning systems. The main assumption of the model is that one of the most powerful capabilities of computer-based instruction, and specifically of integrated learning systems, is the capacity to adapt…

  15. Manipulatives and the Computer: A Powerful Partnership for Learners of All Ages.

    ERIC Educational Resources Information Center

    Perl, Teri

    1990-01-01

    Discussed is the concept of mirroring in which computer programs are used to enhance the use of mathematics manipulatives. The strengths and weaknesses of this approach are presented. The uses of the computer in modeling and as a manipulative are also described. Several software packages are suggested. (CW)

  16. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  17. Virtual welding equipment for simulation of GMAW processes with integration of power source regulation

    NASA Astrophysics Data System (ADS)

    Reisgen, Uwe; Schleser, Markus; Mokrov, Oleg; Zabirov, Alexander

    2011-06-01

    A two-dimensional transient numerical analysis and computational module for simulation of the electrical and thermal characteristics during electrode melting and metal transfer in Gas-Metal-Arc-Welding (GMAW) processes is presented. The solution of the non-linear transient heat-transfer equation is carried out using a control-volume finite-difference technique. The computational module also includes the controlling and regulation algorithms of industrial welding power sources. The simulation results are the current and voltage waveforms, the mean voltage drops at different parts of the circuit, the total electric power, the cathode, anode and arc powers, and the arc length. We describe application of the model to the normal process (constant voltage) and to pulsed processes with U/I and I/I modulation modes. Comparisons with experimental current and voltage waveforms show that the model predicts current, voltage and electric power with high accuracy. The model is used in the simulation package SimWeld for calculation of the heat flux into the work-piece and the weld-seam formation. From the calculated heat flux and weld pool sizes, an equivalent volumetric heat source according to the Goldak model can be generated. The method was implemented and investigated with the simulation software SimWeld developed by the ISF at RWTH Aachen University.

  18. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922

  19. Two-locus disease models with two marker loci: The power of affected-sib-pair tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knapp, M.; Seuchter, S.A.; Bauer, M.P.

    1994-11-01

    Recently, Schork et al. found that two-trait-locus, two-marker-locus (parametric) linkage analysis can provide substantially more linkage information than can standard one-trait-locus, one-marker-locus methods. However, because of the increased burden of computation, Schork et al. do not expect that their approach will be applied in an initial genome scan. Further, the specification of a suitable two-locus segregation model can be crucial. Affected-sib-pair tests are computationally simple and do not require an explicit specification of the disease model. In the past, however, these tests mainly have been applied to data with a single marker locus. Here, we consider sib-pair tests that make it possible to analyze simultaneously two marker loci. The power of these tests is investigated for different (epistatic and heterogeneous) two-trait-locus models, each trait locus being linked to one of the marker loci. We compare these tests both with the test that is optimal for a certain model and with the strategy that analyzes each marker locus separately. The results indicate that a straightforward extension of the well-known mean test for two marker loci can be much more powerful than single-marker-locus analysis and that its power is only slightly inferior to the power of the optimal test. 21 refs., 5 figs., 2 tabs.
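
    For context, the classic single-marker mean test that the record extends compares the average IBD sharing of affected sib pairs against its null value of 1/2. A minimal sketch follows (the two-marker extension is not reproduced; the helper name is ours):

        import numpy as np
        from scipy.stats import norm

        def mean_test(ibd_proportions):
            """Classic affected-sib-pair mean test (sketch).

            ibd_proportions: estimated IBD-sharing proportion per affected sib
            pair; under the null (no linkage) the expectation is 0.5 with
            variance 1/8 per pair.
            """
            ibd = np.asarray(ibd_proportions, dtype=float)
            n = ibd.size
            z = (ibd.mean() - 0.5) / np.sqrt(1.0 / (8.0 * n))
            return z, norm.sf(z)   # one-sided p-value for excess sharing

        # Example: 100 pairs with a slight excess of allele sharing
        rng = np.random.default_rng(0)
        z, p = mean_test(rng.binomial(2, 0.55, 100) / 2)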

  20. Measurement and analysis of applied power, forces and material response in friction stir welding of aluminum alloy 6061

    NASA Astrophysics Data System (ADS)

    Avila, Ricardo E.

    The process of Friction Stir Welding (FSW) 6061 aluminum alloy is investigated, with focus on the forces and power being applied in the process and the material response. The main objective is to relate measurements of the forces and power applied in the process with mechanical properties of the material during the dynamic process, based on mathematical modeling and aided by computer simulations, using the LS-DYNA software for finite element modeling. Results of measurements of applied forces and power are presented. The result obtained for applied power is used in the construction of a mechanical variational model of FSW, in which minimization of a functional for the applied torque is sought, leading to an expression for shear stress in the material. The computer simulations are performed by application of the Smoothed Particle Hydrodynamics (SPH) method, in which no structured finite element mesh is used to construct a spatial discretization of the model. The current implementation of SPH in LS-DYNA allows a structural solution using a plastic kinematic material model. This work produces information useful to improve understanding of the material flow in the process, and thus adds to current knowledge about the behavior of materials under processes of severe plastic deformation, particularly those processes in which deformation occurs mainly by application of shear stress, aided by thermoplastic strain localization and dynamic recrystallization.

  1. A CFD-informed quasi-steady model of flapping wing aerodynamics.

    PubMed

    Nakata, Toshiyuki; Liu, Hao; Bomphrey, Richard J

    2015-11-01

    Aerodynamic performance and agility during flapping flight are determined by the combination of wing shape and kinematics. The degree of morphological and kinematic optimisation is unknown and depends upon a large parameter space. Aimed at providing an accurate and computationally inexpensive modelling tool for flapping-wing aerodynamics, we propose a novel CFD (computational fluid dynamics)-informed quasi-steady model (CIQSM), which assumes that the aerodynamic forces on a flapping wing can be decomposed into the quasi-steady forces and parameterised based on CFD results. Using least-squares fitting, we determine a set of proportional coefficients for the quasi-steady model relating wing kinematics to instantaneous aerodynamic force and torque; we calculate power with the product of quasi-steady torques and angular velocity. With the quasi-steady model fully and independently parameterised on the basis of high-fidelity CFD modelling, it is capable of predicting flapping-wing aerodynamic forces and power more accurately than the conventional blade element model (BEM) does. The improvement can be attributed to, for instance, taking into account the effects of the induced downwash and the wing tip vortex on the force generation and power consumption. Our model is validated by comparing the aerodynamics of a CFD model and the present quasi-steady model using the example case of a hovering hawkmoth. It demonstrates that the CIQSM outperforms the conventional BEM while remaining computationally cheap, and hence can be an effective tool for revealing the mechanisms of optimization and control of kinematics and morphology in flapping-wing flight for both bio-flyers and unmanned air systems.
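
    A hedged sketch of the fitting step described above -- regressing CFD force/torque series onto quasi-steady kinematic terms by least squares, then computing power as torque times angular velocity -- with entirely illustrative kinematics and basis terms:

        import numpy as np

        # Illustrative kinematics for one normalized wingbeat; the basis terms
        # and coefficients are stand-ins, not the paper's actual model.
        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 1.0, 400)
        omega = 2 * np.pi * np.cos(2 * np.pi * t)    # flapping angular velocity
        alpha = 0.5 * np.sin(2 * np.pi * t)          # angle of attack (rad)

        # Quasi-steady basis: translational, rotational, added-mass-like terms
        X = np.column_stack([omega**2 * np.sin(alpha),
                             omega * np.gradient(alpha, t),
                             np.gradient(omega, t)])
        torque_cfd = X @ np.array([1.2, 0.4, 0.1]) + 0.01 * rng.standard_normal(t.size)

        coef, *_ = np.linalg.lstsq(X, torque_cfd, rcond=None)  # proportional coefficients
        power = (X @ coef) * omega                             # P = torque * omega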

  2. A CFD-informed quasi-steady model of flapping wing aerodynamics

    PubMed Central

    Nakata, Toshiyuki; Liu, Hao; Bomphrey, Richard J.

    2016-01-01

    Aerodynamic performance and agility during flapping flight are determined by the combination of wing shape and kinematics. The degree of morphological and kinematic optimisation is unknown and depends upon a large parameter space. Aimed at providing an accurate and computationally inexpensive modelling tool for flapping-wing aerodynamics, we propose a novel CFD (computational fluid dynamics)-informed quasi-steady model (CIQSM), which assumes that the aerodynamic forces on a flapping wing can be decomposed into the quasi-steady forces and parameterised based on CFD results. Using least-squares fitting, we determine a set of proportional coefficients for the quasi-steady model relating wing kinematics to instantaneous aerodynamic force and torque; we calculate power with the product of quasi-steady torques and angular velocity. With the quasi-steady model fully and independently parameterised on the basis of high-fidelity CFD modelling, it is capable of predicting flapping-wing aerodynamic forces and power more accurately than the conventional blade element model (BEM) does. The improvement can be attributed to, for instance, taking into account the effects of the induced downwash and the wing tip vortex on the force generation and power consumption. Our model is validated by comparing the aerodynamics of a CFD model and the present quasi-steady model using the example case of a hovering hawkmoth. It demonstrates that the CIQSM outperforms the conventional BEM while remaining computationally cheap, and hence can be an effective tool for revealing the mechanisms of optimization and control of kinematics and morphology in flapping-wing flight for both bio-flyers and unmanned air systems. PMID:27346891

  3. The LHCb software and computing upgrade for Run 3: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    Bozzi, C.; Roiser, S.; LHCb Collaboration

    2017-10-01

    The LHCb detector will be upgraded for the LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications on the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description and the conditions database, is needed to fully exploit the computing power of multi-, many-core architectures, and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for the data analysis workflows. Fast simulation options will make it possible to obtain a reasonable parameterization of the detector response in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, test and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.

  4. Dynamic Average-Value Modeling of Doubly-Fed Induction Generator Wind Energy Conversion Systems

    NASA Astrophysics Data System (ADS)

    Shahab, Azin

    In a Doubly-fed Induction Generator (DFIG) wind energy conversion system, the rotor of a wound rotor induction generator is connected to the grid via a partial-scale ac/ac power electronic converter which controls the rotor frequency and speed. In this research, detailed models of the DFIG wind energy conversion system with Sinusoidal Pulse-Width Modulation (SPWM) and Optimal Pulse-Width Modulation (OPWM) schemes for the power electronic converter are developed in PSCAD/EMTDC. As computer simulation using the detailed models tends to be computationally extensive, time consuming, and sometimes impractical in terms of speed, two modified approaches (switching-function modeling and average-value modeling) are proposed to reduce the simulation execution time. The results demonstrate that the two proposed approaches reduce the simulation execution time while the simulation results remain close to those obtained using the detailed model.

  5. Comparison of Analytical Predictions and Experimental Results for a Dual Brayton Power System (Discussion on Test Hardware and Computer Model for a Dual Brayton System)

    NASA Technical Reports Server (NTRS)

    Johnson, Paul K.

    2007-01-01

    NASA Glenn Research Center (GRC) contracted Barber-Nichols, Arvada, CO, to construct a dual Brayton power conversion system for use as a hardware proof of concept and to validate results from a computational code known as the Closed Cycle System Simulation (CCSS). Initial checkout tests were performed at Barber-Nichols to ready the system for delivery to GRC. This presentation describes the system hardware components and lists the types of checkout tests performed, along with a couple of issues encountered while conducting the tests. A description of the CCSS model is also presented. The checkout tests did not focus on generating data; therefore, no test data or model analyses are presented.

  6. Simulating spin models on GPU

    NASA Astrophysics Data System (ADS)

    Weigel, Martin

    2011-09-01

    Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
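
    As a hedged illustration, a single-spin-flip Metropolis sweep for the 2-D Ising model is sketched below; a GPU implementation would exploit the inherent parallelism by updating the two checkerboard sublattices concurrently. Lattice size and temperature are illustrative:

        import numpy as np

        # Minimal Metropolis sweep for the 2-D Ising model (CPU sketch).
        rng = np.random.default_rng(0)
        L, beta = 64, 0.44
        spins = rng.choice([-1, 1], size=(L, L))

        def sweep(s):
            for _ in range(s.size):
                i, j = rng.integers(L), rng.integers(L)
                nn = (s[(i+1) % L, j] + s[(i-1) % L, j]
                      + s[i, (j+1) % L] + s[i, (j-1) % L])
                dE = 2.0 * s[i, j] * nn          # energy cost of flipping spin (i, j)
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    s[i, j] *= -1                # accept the flip

        sweep(spins)
        print("magnetization per spin:", spins.mean())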

  7. Improve SSME power balance model

    NASA Technical Reports Server (NTRS)

    Karr, Gerald R.

    1992-01-01

    Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.

  8. Development of a thermal storage module using modified anhydrous sodium hydroxide

    NASA Technical Reports Server (NTRS)

    Rice, R. E.; Rowny, P. E.

    1980-01-01

    The laboratory scale testing of a modified anhydrous NaOH latent heat storage concept for small solar thermal power systems such as total energy systems utilizing organic Rankine systems is discussed. A diagnostic test on the thermal energy storage module and an investigation of alternative heat transfer fluids and heat exchange concepts are specifically addressed. A previously developed computer simulation model is modified to predict the performance of the module in a solar total energy system environment. In addition, the computer model is expanded to investigate parametrically the incorporation of a second heat exchange inside the module which will vaporize and superheat the Rankine cycle power fluid.

  9. Light extraction in planar light-emitting diode with nonuniform current injection: model and simulation.

    PubMed

    Khmyrova, Irina; Watanabe, Norikazu; Kholopova, Julia; Kovalchuk, Anatoly; Shapoval, Sergei

    2014-07-20

    We develop an analytical and numerical model for simulating light extraction through the planar output interface of light-emitting diodes (LEDs) with nonuniform current injection. Spatial nonuniformity of the injected current is a peculiar feature of LEDs in which the top metal electrode is patterned as a mesh in order to enhance the output power of light extracted through the top surface. The basic features of the model are a bi-plane computation domain with related numerical-grid (NG) cell areas in the two planes, representation of the light-generating layer by an ensemble of point light sources, numerical "collection" of light photons from the area limited by an acceptance circle, and adjustment of NG-cell areas in the computation procedure by an angle-tuned aperture function. The developed model and procedure are used to simulate spatial distributions of the output optical power, as well as the total output power, at different mesh pitches. The proposed model and simulation strategy can be very efficient in evaluating the output optical performance of LEDs with periodical or symmetrical electrode configurations.
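
    One ingredient of such planar-interface models -- the acceptance circle set by total internal reflection -- admits a quick numeric check. A hedged sketch (refractive indices illustrative, not the paper's parameters):

        import numpy as np

        # A point source under a planar interface only emits into the escape
        # cone set by total internal reflection; the collected fraction over a
        # hemisphere is (1 - cos(theta_c)) / 2.
        n_semi, n_air = 3.5, 1.0
        theta_c = np.arcsin(n_air / n_semi)        # critical angle
        frac = 0.5 * (1.0 - np.cos(theta_c))
        print(f"escape-cone fraction per source: {frac:.3%}")   # ~2% for n = 3.5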

  10. Equivalent Relaxations of Optimal Power Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, S; Low, SH; Teeraratkul, T

    2015-03-01

    Several convex relaxations of the optimal power flow (OPF) problem have recently been developed using both bus injection models and branch flow models. In this paper, we prove relations among three convex relaxations: a semidefinite relaxation that computes a full matrix, a chordal relaxation based on a chordal extension of the network graph, and a second-order cone relaxation that computes the smallest partial matrix. We prove a bijection between the feasible sets of the OPF in the bus injection model and the branch flow model, establishing the equivalence of these two models and their second-order cone relaxations. Our results imply that, for radial networks, all these relaxations are equivalent and one should always solve the second-order cone relaxation. For mesh networks, the semidefinite relaxation and the chordal relaxation are equally tight and both are strictly tighter than the second-order cone relaxation. Therefore, for mesh networks, one should either solve the chordal relaxation or the SOCP relaxation, trading off tightness and the required computational effort. Simulations are used to illustrate these results.
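
    A hedged sketch of the second-order cone relaxation in the branch flow model, reduced to a single line feeding one load (all per-unit values illustrative; cvxpy is used for the conic program):

        import cvxpy as cp

        # Two-bus radial example: slack bus 1 feeds a load at bus 2.
        r, x = 0.01, 0.02            # line impedance (p.u.)
        pL, qL = 0.8, 0.3            # load at bus 2 (p.u.)

        P, Q = cp.Variable(), cp.Variable()       # sending-end line flows
        ell = cp.Variable(nonneg=True)            # squared current magnitude
        v1, v2 = 1.0, cp.Variable(nonneg=True)    # squared voltage magnitudes

        constraints = [
            P - r * ell == pL,                    # real power balance at bus 2
            Q - x * ell == qL,                    # reactive power balance at bus 2
            v2 == v1 - 2 * (r * P + x * Q) + (r**2 + x**2) * ell,
            # SOC relaxation of ell * v1 == P^2 + Q^2:
            cp.quad_over_lin(cp.vstack([P, Q]), v1) <= ell,
            0.9**2 <= v2, v2 <= 1.1**2,
        ]
        prob = cp.Problem(cp.Minimize(P), constraints)  # minimize slack-bus import
        prob.solve()
        print(P.value, ell.value)

    For a radial network like this one, the record's result says the relaxation is exact, so the cone constraint holds with equality at the optimum.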

  11. Lunar Pole Illumination and Communications Maps Computed from GSSR Elevation Data

    NASA Technical Reports Server (NTRS)

    Bryant, Scott

    2009-01-01

    A Digital Elevation Model of the lunar south pole was produced using Goldstone Solar System RADAR (GSSR) data obtained in 2006. This model has 40-meter horizontal resolution and about 5-meter relative vertical accuracy. The Digital Elevation Model was used to compute average solar illumination and Earth visibility within 100 kilometers of the lunar south pole. The elevation data were converted into local terrain horizon masks, then converted into lunar-centric latitude and longitude coordinates. The horizon masks were compared to latitude-longitude regions bounding the maximum Sun and Earth motions relative to the Moon. Estimates of Earth visibility were computed by integrating the area of the region bounding the Earth's motion that was below the horizon mask. Solar illumination and other metrics were computed similarly. Proposed lunar south pole base sites were examined in detail, with the best site showing yearly solar power availability of 92 percent and Direct-To-Earth (DTE) communication availability of about 50 percent. A similar analysis of the lunar south pole used an older GSSR Digital Elevation Model with 600-meter horizontal resolution. The paper also explores using a heliostat to reduce the photovoltaic power system mass and complexity.
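
    A hedged sketch of the horizon-mask step described above: for one site in a DEM, take the maximum terrain elevation angle in each azimuth bin; a body is then visible whenever its elevation exceeds the mask at its azimuth. The grid and site below are synthetic stand-ins, not the GSSR data:

        import numpy as np

        rng = np.random.default_rng(0)
        dem = rng.normal(0.0, 5.0, size=(512, 512))   # elevations (m), 40 m spacing
        dx = 40.0
        i0 = j0 = 256                                  # site cell

        ii, jj = np.indices(dem.shape)
        dy, dxm = (ii - i0) * dx, (jj - j0) * dx
        dist = np.hypot(dy, dxm)
        dist[i0, j0] = np.inf                          # exclude the site itself
        elev = np.degrees(np.arctan2(dem - dem[i0, j0], dist))
        elev[i0, j0] = -np.inf
        azim = np.degrees(np.arctan2(dxm, dy)) % 360.0

        mask = np.full(360, -90.0)                     # horizon mask, 1-degree bins
        bins = azim.astype(int) % 360
        np.maximum.at(mask, bins.ravel(), elev.ravel())
        print("horizon at azimuth 0:", mask[0], "deg")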

  12. Cluster-based adaptive power control protocol using Hidden Markov Model for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Vinutha, C. B.; Nalini, N.; Nagaraja, M.

    2017-06-01

    This paper presents strategies for an efficient and dynamic transmission power control technique that reduces packet drop, and hence the energy consumption of power-hungry sensor nodes, under the highly non-linear channel conditions of Wireless Sensor Networks. We also design a cluster-based network structure to prolong network lifetime and improve scalability. Specifically, we consider a weight-based clustering approach wherein the most suitable node is chosen as Cluster Head (CH) based on a weight computed from the factors of distance, remaining battery power, and received signal strength (RSS). Transmission power control schemes that adapt to dynamic channel conditions are then implemented using a Hidden Markov Model (HMM), whose probability transition matrix is formulated from the observed RSS measurements. Typically, the CH estimates the initial transmission power of its cluster members (CMs) from RSS using the HMM and broadcasts this value to its CMs for initialising their power levels. Further, if the CH finds variations in the link quality and RSS of the CMs, it re-computes and optimises the transmission power levels of the nodes using the HMM to avoid packet loss due to noise interference. Our simulation results demonstrate that the technique efficiently controls the power levels of sensing nodes, saving a significant quantity of energy in networks of different sizes.
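
    A hedged sketch of one ingredient of such a scheme -- estimating a channel-state transition matrix from quantized RSS observations and using the one-step prediction to pick a transmit power -- with illustrative state and power mappings:

        import numpy as np

        rng = np.random.default_rng(0)
        rss = rng.normal(-70, 6, size=1000)              # observed RSS (dBm)
        states = np.digitize(rss, bins=[-75, -65])       # 0=bad, 1=ok, 2=good

        K = 3
        T = np.zeros((K, K))
        for a, b in zip(states[:-1], states[1:]):        # count state transitions
            T[a, b] += 1
        T = T / np.maximum(T.sum(axis=1, keepdims=True), 1)   # row-stochastic

        power_dbm = np.array([10.0, 4.0, 0.0])           # per-state TX power
        expected_next = T[states[-1]]                    # one-step prediction
        print("TX power:", expected_next @ power_dbm, "dBm")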

  13. Maximizing photovoltaic power generation of a space-dart configured satellite

    NASA Astrophysics Data System (ADS)

    Lee, Dae Young; Cutler, James W.; Mancewicz, Joe; Ridley, Aaron J.

    2015-06-01

    Many small satellites are power constrained due to their minimal solar panel area and the eclipse environment of low-Earth orbit. As with larger satellites, these small satellites, including CubeSats, use deployable power arrays to increase power production. This presents a design opportunity to develop various objective functions related to energy management and methods for optimizing these functions over a satellite design. A novel power generation model was created, and a simulation system was developed to evaluate various objective functions describing energy management for complex satellite designs. The model uses a spacecraft-body-fixed spherical coordinate system to analyze the complex geometry of a satellite's self-induced shadowing with computation provided by the Open Graphics Library. As an example design problem, a CubeSat configured as a space-dart with four deployable panels is optimized. Due to the fast computation speed of the solution, an exhaustive search over the design space is used to find the solar panel deployment angles which maximize total power generation. Simulation results are presented for a variety of orbit scenarios. The method is extendable to a variety of complex satellite geometries and power generation systems.
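
    A hedged sketch of the exhaustive-search idea, reduced to a single shared deployment angle and a cosine-incidence power model without self-shadowing (the paper's OpenGL-based shadowing analysis is not reproduced; all geometry and constants are illustrative):

        import numpy as np

        # Sun direction sampled over one illustrative orbit
        sun_dirs = np.array([[np.cos(a), 0.0, np.sin(a)]
                             for a in np.linspace(0, 2 * np.pi, 360)])

        def orbit_avg_power(theta, area=0.03, flux=1366.0, eff=0.28):
            # Four panels hinged symmetrically off the +Z body axis
            normals = np.array([[np.sin(theta) * np.cos(k * np.pi / 2),
                                 np.sin(theta) * np.sin(k * np.pi / 2),
                                 np.cos(theta)] for k in range(4)])
            cosi = np.clip(sun_dirs @ normals.T, 0.0, None)  # back-lit panels -> 0
            return eff * flux * area * cosi.sum(axis=1).mean()

        angles = np.radians(np.arange(0, 91))            # exhaustive 1-degree grid
        best = angles[np.argmax([orbit_avg_power(t) for t in angles])]
        print("best deployment angle (deg):", np.degrees(best))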

  14. Modular, Semantics-Based Composition of Biosimulation Models

    ERIC Educational Resources Information Center

    Neal, Maxwell Lewis

    2010-01-01

    Biosimulation models are valuable, versatile tools used for hypothesis generation and testing, codification of biological theory, education, and patient-specific modeling. Driven by recent advances in computational power and the accumulation of systems-level experimental data, modelers today are creating models with an unprecedented level of…

  15. Approach to Computer Implementation of Mathematical Model of 3-Phase Induction Motor

    NASA Astrophysics Data System (ADS)

    Pustovetov, M. Yu

    2018-03-01

    This article discusses the development of a computer model of an induction motor based on the mathematical model in a three-phase stator reference frame. The approach combines two methods during preparation of the computer model: visual circuit programming (in the form of electrical schematics) and logical programming (in the form of block diagrams). This enables easy integration of the induction motor model into more complex models of electrical complexes and systems. The developed computer model gives the user access to the beginning and end of the winding of each of the three phases of the stator and rotor. This property is particularly important when considering asymmetric modes of operation or when the motor is powered by special semiconductor converter circuitry.

  16. Approximating lens power.

    PubMed

    Kaye, Stephen B

    2009-04-01

    To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal and oblique meridians, that is not limited to the paraxial and sag-height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition for the average of a function. Average univariate power in the principal meridian (including spherical aberration) can be computed from the average of a function over the angle of incidence, as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians can be determined similarly using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, for correlating with biological treatment variables, and for developing analyses which require a scalar equivalent representation of refractive power.
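
    A worked numeric check of the averaging idea, using the standard sphero-cylinder meridional power formula (not the paper's full derivation): the power along a meridian at angle theta from the cylinder axis is P(theta) = S + C*sin(theta)^2, and its average over all meridians recovers the spherical equivalent S + C/2:

        import numpy as np

        S, C = -2.00, -1.50                       # sphere and cylinder (diopters)
        theta = np.linspace(0.0, np.pi, 10001)    # meridian angles
        P = S + C * np.sin(theta) ** 2
        print(P.mean())                           # ~ -2.75 (numeric average)
        print(S + C / 2)                          # -2.75 (spherical equivalent)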

  17. Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources

    NASA Astrophysics Data System (ADS)

    Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi

    2017-01-01

    Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, correlation among different random power injections (such as wind power) makes PPF difficult to calculate. Monte Carlo simulation (MCS) and analytical methods are the two commonly used approaches to solving PPF. MCS has high accuracy but is very time consuming. Analytical methods such as the cumulants method (CM) have high computational efficiency, but calculating the cumulants is not convenient when the wind power output does not follow any typical distribution, especially when correlated wind sources are considered. In this paper, an Improved Monte Carlo simulation method (IMCS) is proposed. A joint empirical distribution is applied to model the output of different wind sources. This method combines the advantages of both MCS and analytical methods: it not only has high computational efficiency, but also provides solutions with sufficient accuracy, making it well suited to on-line analysis.
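
    A hedged sketch of the joint-empirical-distribution idea: resample historical wind-farm outputs jointly (row-wise), preserving cross-farm correlation, and drive a Monte Carlo power-flow loop with the samples. The data and the solve_pf stand-in below are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic "historical" joint output of two correlated farms (MW)
        hist = rng.multivariate_normal([50, 30], [[100, 60], [60, 64]], size=2000)
        hist = np.clip(hist, 0, None)

        def solve_pf(wind_mw):
            # Stand-in for a deterministic power-flow solver returning a flow
            return 0.9 * wind_mw.sum() - 1.0

        samples = hist[rng.integers(0, len(hist), size=5000)]  # joint resampling
        flows = np.array([solve_pf(w) for w in samples])
        print("mean flow:", flows.mean(), "95% quantile:", np.quantile(flows, 0.95))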

  18. AA9int: SNP Interaction Pattern Search Using Non-Hierarchical Additive Model Set.

    PubMed

    Lin, Hui-Yi; Huang, Po-Yu; Chen, Dung-Tsa; Tung, Heng-Yuan; Sellers, Thomas A; Pow-Sang, Julio; Eeles, Rosalind; Easton, Doug; Kote-Jarai, Zsofia; Amin Al Olama, Ali; Benlloch, Sara; Muir, Kenneth; Giles, Graham G; Wiklund, Fredrik; Gronberg, Henrik; Haiman, Christopher A; Schleutker, Johanna; Nordestgaard, Børge G; Travis, Ruth C; Hamdy, Freddie; Neal, David E; Pashayan, Nora; Khaw, Kay-Tee; Stanford, Janet L; Blot, William J; Thibodeau, Stephen N; Maier, Christiane; Kibel, Adam S; Cybulski, Cezary; Cannon-Albright, Lisa; Brenner, Hermann; Kaneva, Radka; Batra, Jyotsna; Teixeira, Manuel R; Pandha, Hardev; Lu, Yong-Jie; Park, Jong Y

    2018-06-07

    The use of single nucleotide polymorphism (SNP) interactions to predict complex diseases has received increasing attention during the past decade, but the related statistical methods are still immature. We previously proposed the SNP Interaction Pattern Identifier (SIPI) approach to evaluate 45 SNP interaction patterns. SIPI is statistically powerful but carries a large computational burden. For large-scale studies, a powerful and computationally efficient method is necessary. The objective of this study is to develop an evidence-based mini-version of SIPI for use as a screening tool or on its own, and to evaluate the impact of inheritance mode and model structure on detecting SNP-SNP interactions. We tested two candidate approaches: the 'Five-Full' and 'AA9int' methods. The Five-Full approach is composed of the five full interaction models considering three inheritance modes (additive, dominant and recessive). The AA9int approach is composed of nine interaction models obtained by considering non-hierarchical model structures and the additive mode. Our simulation results show that AA9int has statistical power similar to SIPI and is superior to the Five-Full approach, and that the impact of the non-hierarchical model structure is greater than that of the inheritance mode in detecting SNP-SNP interactions. In summary, AA9int is recommended as a powerful tool to be used either alone or as the screening stage of a two-stage approach (AA9int+SIPI) for detecting SNP-SNP interactions in large-scale studies. The 'AA9int' and 'parAA9int' functions (standard and parallel computing versions) are included in the SIPI R package, which is freely available at https://linhuiyi.github.io/LinHY_Software/. hlin1@lsuhsc.edu. Supplementary data are available at Bioinformatics online.

  19. Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines

    NASA Astrophysics Data System (ADS)

    Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.

    2016-12-01

    Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model, while satisfying all the necessities of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.

  20. Methods and benefits of experimental seismic evaluation of nuclear power plants. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-07-01

    This study reviews experimental techniques, instrumentation requirements, safety considerations, and benefits of performing vibration tests on nuclear power plant containments and internal components. The emphasis is on testing to improve seismic structural models. Techniques for identification of resonant frequencies, damping, and mode shapes are discussed. The benefits of testing with regard to increased damping and more accurate computer models are outlined. A test plan, schedule and budget are presented for a typical PWR nuclear power plant.

  1. Photovoltaic conversion of laser power to electrical power

    NASA Technical Reports Server (NTRS)

    Walker, G. H.; Heinbockel, J. H.

    1986-01-01

    Photovoltaic laser to electric converters are attractive for use with a space-based laser power station. This paper presents the results of modeling studies for a silicon vertical junction converter used with a Nd laser. A computer code was developed for the model and this code was used to conduct a parametric study for a Si vertical junction converter consisting of one p-n junction irradiated with a Nd laser. These calculations predict an efficiency over 50 percent for an optimized converter.

  2. Proceedings of Conference on Variable-Resolution Modeling, Washington, DC, 5-6 May 1992

    DTIC Science & Technology

    1992-05-01

    of powerful new computer architectures for supporting object-oriented computing. Objects, as self-contained data-code packages with orderly...another entity structure. For example, (copy-entstr e:sys-tem 'new-system) creates an entity structure named e:new-system that has the same structure...Parry, S.H. (1984): A Self-contained Hierarchical Model Construct. In: Systems Analysis and Modeling in Defense (R.K. Huber, Ed.), New York

  3. Computer-Aided Engineering for Electric-Drive Vehicle Batteries (CAEBAT) |

    Science.gov Websites

    "...Battery Cell under Quasi-Static Indentation Tests," J. Power Sources, submitted. J. Marcicki et al., "...-Ion Cell under Mechanical Abuse," J. Power Sources, 290, p. 102-113 (2015), http://dx.doi.org... "...Model Order Reduction," J. Power Sources, 273(1), p. 1226-1236 (2015), http://dx.doi.org/10.1016...

  4. Simulation Accelerator

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Under a NASA SBIR (Small Business Innovative Research) contract, (NAS5-30905), EAI Simulation Associates, Inc., developed a new digital simulation computer, Starlight(tm). With an architecture based on the analog model of computation, Starlight(tm) outperforms all other computers on a wide range of continuous system simulation. This system is used in a variety of applications, including aerospace, automotive, electric power and chemical reactors.

  5. Biomolecular dynamics by computer analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eilbeck, J.C.; Lomdahl, P.S.; Scott, A.C.

    1984-01-01

    As numerical tools (computers and display equipment) become more powerful and the atomic structures of important biological molecules become known, the importance of detailed computation of nonequilibrium biomolecular dynamics increases. In this manuscript we report results from a well developed study of the hydrogen bonded polypeptide crystal acetanilide, a model protein. Directions for future research are suggested. 9 references, 6 figures.

  6. Hot Chips and Hot Interconnects for High End Computing Systems

    NASA Technical Reports Server (NTRS)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and IBM SP 4 systems; 3. The Intel Itanium and Xeon, used in SGI Altix systems and clusters, respectively; 4. The IBM System-on-a-Chip used in the IBM BlueGene/L; 5. The HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. The SPARC64 V processor, used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor used in the NEC SX-6/7; 8. The Power 4+ processor, used in the Hitachi SR11000; 9. An NEC proprietary processor used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by the interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming-model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high-performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  7. Spectral variation of high power microwave pulse propagating in a self-generated plasma

    NASA Technical Reports Server (NTRS)

    Ren, A.; Kuo, S. P.; Kossey, Paul

    1995-01-01

    A systematic study to understand the spectral variation of a high power microwave pulse propagating in a self-generated plasma is carried out. It includes the theoretical formulation, experimental demonstration, computer simulations, and computer experiments. The pulse-propagation experiment is conducted in a vacuum chamber filled with dry air (approximately 0.2 torr); the chamber is made of a 2 ft. cube of Plexiglas. A rectangular microwave pulse (1 microsec pulse width and 3.27 GHz carrier frequency) is fed into the cube through an S-band microwave horn placed at one side of the chamber. A second S-band horn placed at the opposite side of the chamber is used to receive the transmitted pulse. The spectra of the incident pulse and transmitted pulse are then compared. When the power of the incident pulse is only slightly (less than 15%) above the breakdown threshold power of the background air, the peak of the spectrum of the transmitted pulse is upshifted from the 3.27 GHz carrier frequency of the incident pulse. However, when the power of the incident pulse exceeds the breakdown threshold power of the background air by 30%, a different phenomenon appears: the spectrum of the transmitted pulse begins to have two peaks, one upshifted and the other downshifted from the single peak location of the incident pulse. The amount of frequency downshift is comparable to that of the upshift. A theoretical model describing the experiment of pulse propagation in a self-generated plasma is developed. There is excellent agreement between the experimental results and computer simulations based on this theoretical model, which is also used to carry out computer experiments identifying the role of plasma-introduced wave loss in the frequency-downshift phenomenon.

  8. Evaluation of SAR in a human body model due to wireless power transmission in the 10 MHz band.

    PubMed

    Laakso, Ilkka; Tsuchida, Shogo; Hirata, Akimasa; Kamimura, Yoshitsugu

    2012-08-07

    This study discusses a computational method for calculating the specific absorption rate (SAR) due to a wireless power transmission system in the 10 MHz frequency band. A two-step quasi-static method, comprising the method of moments and the scalar-potential finite-difference method, is proposed. The applicability of the quasi-static approximation for localized exposure in this frequency band is discussed by comparing the SAR in a lossy dielectric cylinder computed with a full-wave electromagnetic analysis and with the quasi-static approximation. From the computational results, the input impedance of the resonant coils was affected by the presence of the cylinder. On the other hand, the magnetic field distributions computed in free space and with the cylinder and an impedance-matching circuit taken into account were in good agreement; the maximum difference in the amplitude of the magnetic field was 4.8%. For a cylinder-coil distance of 10 mm, the difference between the peak 10 g averaged SAR in the cylinder computed with the full-wave electromagnetic method and with our quasi-static method was 7.8%. These results suggest that the quasi-static approach is applicable for the dosimetry of wireless power transmission in the 10 MHz band. With our two-step quasi-static method, the SAR in an anatomically based model was computed for different exposure scenarios. From those computations, the allowable input power satisfying the limit of a peak 10 g averaged SAR of 2.0 W kg(-1) was 830 W in the worst-case exposure scenario, with a coil positioned at a distance of 30 mm from the chest.

  9. POWERLIB: SAS/IML Software for Computing Power in Multivariate Linear Models

    PubMed Central

    Johnson, Jacqueline L.; Muller, Keith E.; Slaughter, James C.; Gurka, Matthew J.; Gribbin, Matthew J.; Simpson, Sean L.

    2014-01-01

    The POWERLIB SAS/IML software provides convenient power calculations for a wide range of multivariate linear models with Gaussian errors. The software includes the Box, Geisser-Greenhouse, Huynh-Feldt, and uncorrected tests in the “univariate” approach to repeated measures (UNIREP), the Hotelling Lawley Trace, Pillai-Bartlett Trace, and Wilks Lambda tests in “multivariate” approach (MULTIREP), as well as a limited but useful range of mixed models. The familiar univariate linear model with Gaussian errors is an important special case. For estimated covariance, the software provides confidence limits for the resulting estimated power. All power and confidence limits values can be output to a SAS dataset, which can be used to easily produce plots and tables for manuscripts. PMID:25400516
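
    As a hedged illustration of the kind of calculation such software performs (not POWERLIB itself, which is SAS/IML), the power of an F test in the univariate linear model follows directly from the noncentral F distribution:

        from scipy.stats import f, ncf

        def linear_model_power(ncp, df_hyp, df_err, alpha=0.05):
            """Power of an F test in the univariate linear model (sketch).

            ncp    : noncentrality parameter implied by effect size and N
            df_hyp : numerator (hypothesis) degrees of freedom
            df_err : denominator (error) degrees of freedom
            """
            f_crit = f.ppf(1.0 - alpha, df_hyp, df_err)
            return ncf.sf(f_crit, df_hyp, df_err, ncp)

        # Example: 2 hypothesis df, 27 error df, noncentrality 10
        print(linear_model_power(10.0, 2, 27))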

  10. Cognitive Aspects of Power in a Two-Level Game

    NASA Astrophysics Data System (ADS)

    Juvina, Ion; Lebiere, Christian; Martin, Jolie; Gonzalez, Cleotilde

    The Intergroup Prisoner's Dilemma with Intragroup Power Dynamics (IPD^2) is a new game paradigm for studying human behavior in conflict situations. IPD^2 adds the concept of intragroup power to an intergroup version of the standard Iterated Prisoner's Dilemma game. We conducted an exploratory laboratory study in which individual human participants played the game against computer strategies of various complexities. We also developed a cognitive model of human decision making in this game. The model was run in place of the human participant under the same conditions as in the laboratory study. Results from the human study and the model simulations are presented and discussed, emphasizing the value of including intragroup power in game theoretic models of conflict.

  11. The Real-World Connection.

    ERIC Educational Resources Information Center

    Estes, Charles R.

    1994-01-01

    Discusses theoretical versus applied science and the use of the scientific method for analysis of social issues. Topics addressed include the use of simulation and modeling; the growth in computer power, including nanotechnology; distributed computing; self-evolving programs; spiritual matters; human engineering, i.e., molding individuals;…

  12. Power throttling of collections of computing elements

    DOEpatents

    Bellofatto, Ralph E [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Crumley, Paul G [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Gooding, Thomas M [Rochester, MN]; Haring, Rudolf A [Cortlandt Manor, NY]; Megerian, Mark G [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D [Mantorville, MN]; Swetz, Richard A [Mahopac, NY]; Takken, Todd [Brewster, NY]

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
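
    A hedged sketch of the claimed control idea: a local controller polls per-node power sensors and throttles nodes that exceed a budget. read_power and set_frequency are hypothetical stand-ins for platform interfaces, not the patent's apparatus:

        # Illustrative one-interval control step; all names are hypothetical.
        BUDGET_W = 250.0

        def read_power(node):            # hypothetical sensor query (watts)
            return 240.0 + 20.0 * (node % 2)

        def set_frequency(node, scale):  # hypothetical DVFS knob, 0 < scale <= 1
            print(f"node {node}: frequency scaled to {scale:.2f}")

        def control_step(nodes):
            for node in nodes:
                p = read_power(node)
                if p > BUDGET_W:
                    set_frequency(node, max(0.5, BUDGET_W / p))  # proportional throttle
                else:
                    set_frequency(node, 1.0)

        control_step(range(4))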

  13. Computers for real time flight simulation: A market survey

    NASA Technical Reports Server (NTRS)

    Bekey, G. A.; Karplus, W. J.

    1977-01-01

    An extensive computer market survey was made to determine those available systems suitable for current and future flight simulation studies at Ames Research Center. The primary requirement is for the computation of relatively high frequency content (5 Hz) math models representing powered lift flight vehicles. The Rotor Systems Research Aircraft (RSRA) was used as a benchmark vehicle for computation comparison studies. The general nature of helicopter simulations and a description of the benchmark model are presented, and some of the sources of simulation difficulties are examined. A description of various applicable computer architectures is presented, along with detailed discussions of leading candidate systems and comparisons between them.

  14. Computer Code for Interpreting 13C NMR Relaxation Measurements with Specific Models of Molecular Motion: The Rigid Isotropic and Symmetric Top Rotor Models and the Flexible Symmetric Top Rotor Model

    DTIC Science & Technology

    2017-01-01

    Carbon-13 nuclear magnetic resonance (13C NMR) spectroscopy is a powerful technique for... Nuclear magnetic resonance (NMR) spectroscopy is a tremendously powerful technique for... application of NMR spectroscopy concerns the property of molecular motion, which is related to many physical, and even biological, functions of molecules in

  15. Computation in generalised probabilistic theories

    NASA Astrophysics Data System (ADS)

    Lee, Ciarán M.; Barrett, Jonathan

    2015-08-01

    From the general difficulty of simulating quantum systems using classical systems, and in particular the existence of an efficient quantum algorithm for factoring, it is likely that quantum computation is intrinsically more powerful than classical computation. At present, the best upper bound known for the power of quantum computation is that BQP ⊆ AWPP, where AWPP is a classical complexity class (known to be included in PP, hence PSPACE). This work investigates limits on computational power that are imposed by simple physical, or information theoretic, principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusions still hold? We show that given only an assumption of tomographic locality (roughly, that multipartite states and transformations can be characterized by local measurements), efficient computations are contained in AWPP. This inclusion still holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class, PostBQP, is equal to PP. Given only the assumption of tomographic locality, the inclusion in PP still holds for post-selected computation in general theories. Hence in a world with post-selection, quantum theory is optimal for computation in the space of all operational theories. We then consider whether one can obtain relativized complexity results for general theories. It is not obvious how to define a sensible notion of a computational oracle in the general framework that reduces to the standard notion in the quantum case. Nevertheless, it is possible to define computation relative to a 'classical oracle'. Then, we show there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption does not include NP.

  16. Scaffolding Learning by Modelling: The Effects of Partially Worked-out Models

    ERIC Educational Resources Information Center

    Mulder, Yvonne G.; Bollen, Lars; de Jong, Ton; Lazonder, Ard W.

    2016-01-01

    Creating executable computer models is a potentially powerful approach to science learning. Learning by modelling is also challenging because students can easily get overwhelmed by the inherent complexities of the task. This study investigated whether offering partially worked-out models can facilitate students' modelling practices and promote…

  17. Data centers as dispatchable loads to harness stranded power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kibaek; Yang, Fan; Zavala, Victor M.

    Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.

  18. Data centers as dispatchable loads to harness stranded power

    DOE PAGES

    Kim, Kibaek; Yang, Fan; Zavala, Victor M.; ...

    2016-07-20

    Here, we analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study, we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.

  19. Village power options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lilienthal, P.

    1997-12-01

    This paper describes three different computer codes written to model village power applications. The reasons driving the development of these codes include: limited field data exist; diverse applications can be modeled; models allow cost and performance comparisons; and simulations generate insights into cost structures. The models discussed are: Hybrid2, a public code which provides detailed engineering simulations to analyze the performance of a particular configuration; HOMER, the hybrid optimization model for electric renewables, which provides economic screening for sensitivity analyses; and VIPOR, the village power model, a network optimization model for comparing mini-grids to individual systems. Examples of the output of these codes are presented for specific applications.

  20. Numerical renormalization group method for entanglement negativity at finite temperature

    NASA Astrophysics Data System (ADS)

    Shim, Jeongmin; Sim, H.-S.; Lee, Seung-Sup B.

    2018-04-01

    We develop a numerical method to compute the negativity, an entanglement measure for mixed states, between the impurity and the bath in quantum impurity systems at finite temperature. We construct a thermal density matrix by using the numerical renormalization group (NRG), and evaluate the negativity by implementing the NRG approximation that reduces computational cost exponentially. We apply the method to the single-impurity Kondo model and the single-impurity Anderson model. In the Kondo model, the negativity exhibits a power-law scaling at temperature much lower than the Kondo temperature and a sudden death at high temperature. In the Anderson model, the charge fluctuation of the impurity contributes to the negativity even at zero temperature when the on-site Coulomb repulsion of the impurity is finite, while at low temperature the negativity between the impurity spin and the bath exhibits the same power-law scaling behavior as in the Kondo model.
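
    The entanglement measure itself is straightforward to evaluate for small systems. A hedged sketch computing the negativity of a two-qubit state from the partial transpose, N(rho) = (||rho^{T_A}||_1 - 1)/2, illustrating the measure rather than the NRG construction:

        import numpy as np

        def negativity(rho, dA=2, dB=2):
            # Partial transpose over subsystem A, then trace norm via eigenvalues
            r = rho.reshape(dA, dB, dA, dB).transpose(2, 1, 0, 3).reshape(dA*dB, dA*dB)
            eigs = np.linalg.eigvalsh(r)
            return (np.abs(eigs).sum() - 1.0) / 2.0

        # Werner state: p * |Bell><Bell| + (1-p) * I/4
        bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
        p = 0.8
        rho = p * np.outer(bell, bell) + (1 - p) * np.eye(4) / 4
        print(negativity(rho))      # > 0 implies entanglement (here ~0.35)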

  1. EUV/soft x-ray spectra for low B neutron stars

    NASA Technical Reports Server (NTRS)

    Romani, Roger W.; Rajagopal, Mohan; Rogers, Forrest J.; Iglesias, Carlos A.

    1995-01-01

    Recent ROSAT and EUVE detections of spin-powered neutron stars suggest that many emit 'thermal' radiation, peaking in the EUV/soft X-ray band. These data constrain the neutron stars' thermal history, but interpretation requires comparison with model atmosphere computations, since emergent spectra depend strongly on the surface composition and magnetic field. As recent opacity computations show substantial change to absorption cross sections at neutron star photospheric conditions, we report here on new model atmosphere computations employing such data. The results are compared with magnetic atmosphere models and applied to PSR J0437-4715, a low field neutron star.

  2. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, Andrey; Dall'Anese, Emiliano

    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
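
    A rough illustration of the fixed-point interpretation, reduced to a single-phase toy feeder with invented impedance data (nothing here reproduces the paper's multiphase models): the linear model corresponds to one step of the fixed-point map evaluated at the no-load voltage profile.

    ```python
    import numpy as np

    # Toy 3-bus single-phase equivalent; all values illustrative (p.u.).
    Z = np.array([[0.02+0.04j, 0.01+0.02j, 0.01+0.02j],
                  [0.01+0.02j, 0.03+0.06j, 0.01+0.02j],
                  [0.01+0.02j, 0.01+0.02j, 0.04+0.08j]])  # bus impedance matrix
    w = np.ones(3) * (1.0 + 0j)    # no-load (zero-injection) voltage profile
    s = np.array([-0.05-0.02j, -0.08-0.03j, 0.03+0.0j])   # net injections

    # Fixed-point iteration V_{k+1} = w + Z conj(s / V_k); the linear model
    # amounts to a single step of this map taken from the no-load voltage.
    V = w.copy()
    for _ in range(20):
        V = w + Z @ np.conj(s / V)

    V_lin = w + Z @ np.conj(s / w)   # linearized (first fixed-point step)
    print("fixed point:  ", np.abs(V))
    print("linear model: ", np.abs(V_lin))
    ```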

  3. Experiences Integrating Transmission and Distribution Simulations for DERs with the Integrated Grid Modeling System (IGMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias

    2016-08-11

    This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.

  4. Simulations of NOx Emissions from Low Emissions Discrete Jet Injector Combustor Tests

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Breisacher, Kevin

    2014-01-01

    An experimental and computational study was conducted to evaluate the performance and emissions characteristics of a candidate Lean Direct Injection (LDI) combustor configuration with a mix of simplex and airblast injectors. The National Combustion Code (NCC) was used to predict the experimentally measured EINOx emissions for test conditions representing low-power, medium-power, and high-power engine cycle conditions. Of the six cases modeled with the NCC using a reduced-kinetics finite-rate mechanism and Lagrangian spray modeling, reasonable predictions of combustor exit temperature and EINOx were obtained at two high-power cycle conditions.

  5. Benefit-cost methodology study with example application of the use of wind generators

    NASA Technical Reports Server (NTRS)

    Zimmer, R. P.; Justus, C. G.; Mason, R. M.; Robinette, S. L.; Sassone, P. G.; Schaffer, W. A.

    1975-01-01

    An example application for cost-benefit methodology is presented for the use of wind generators. The approach adopted for the example application consisted of the following activities: (1) surveying of the available wind data and wind power system information, (2) developing models which quantitatively described wind distributions, wind power systems, and cost-benefit differences between conventional systems and wind power systems, and (3) applying the cost-benefit methodology to compare a conventional electrical energy generation system with systems which included wind power generators. Wind speed distribution data were obtained from sites throughout the contiguous United States and were used to compute plant factor contours shown on an annual and seasonal basis. Plant factor values (ratio of average output power to rated power) are found to be as high as 0.6 (on an annual average basis) in portions of the central U. S. and in sections of the New England coastal area. Two types of wind power systems were selected for the application of the cost-benefit methodology. A cost-benefit model was designed and implemented on a computer to establish a practical tool for studying the relative costs and benefits of wind power systems under a variety of conditions and to efficiently and effectively perform associated sensitivity analyses.
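
    The plant factor defined above is straightforward to estimate by Monte Carlo once a wind speed distribution and a turbine power curve are assumed. The sketch below uses a Rayleigh wind climate and an idealized cubic power curve, both hypothetical stand-ins for the report's site data.

    ```python
    import numpy as np

    # Hypothetical Rayleigh (Weibull k=2) wind climate; illustrative only.
    rng = np.random.default_rng(0)
    v_mean = 7.0                                      # m/s annual mean wind speed
    v = rng.rayleigh(v_mean * np.sqrt(2 / np.pi), size=100_000)

    v_cut_in, v_rated, v_cut_out = 3.0, 12.0, 25.0    # m/s

    def power(v, p_rated=1.0):
        # Cubic rise between cut-in and rated, flat to cut-out, zero elsewhere.
        return np.where((v >= v_cut_in) & (v < v_rated),
                        p_rated * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3),
                        np.where((v >= v_rated) & (v <= v_cut_out), p_rated, 0.0))

    plant_factor = power(v).mean()   # ratio of average output to rated power
    print(f"plant factor: {plant_factor:.2f}")
    ```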

  6. Computational neuropharmacology: dynamical approaches in drug discovery.

    PubMed

    Aradi, Ildiko; Erdi, Péter

    2006-05-01

    Computational approaches that adopt dynamical models are widely accepted in basic and clinical neuroscience research as indispensable tools with which to understand normal and pathological neuronal mechanisms. Although computer-aided techniques have been used in pharmaceutical research (e.g. in structure- and ligand-based drug design), the power of dynamical models has not yet been exploited in drug discovery. We suggest that dynamical system theory and computational neuroscience--integrated with well-established, conventional molecular and electrophysiological methods--offer a broad perspective in drug discovery and in the search for novel targets and strategies for the treatment of neurological and psychiatric diseases.

  7. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    NASA Astrophysics Data System (ADS)

    Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem

    2017-11-01

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
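
    The surrogate-plus-optimizer pattern can be sketched compactly. The code below substitutes synthetic response functions for CE-QUAL-W2 and a crude random search for the genetic algorithm; every number and functional form is invented for illustration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Stand-ins for the high-fidelity model: power and DO as functions of a
    # 24-hour release schedule x (both entirely synthetic).
    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, size=(500, 24))                  # training schedules
    power_hi = X.sum(axis=1) * (1 - 0.2 * X.var(axis=1))   # fake power response
    do_hi = 8.0 - 3.0 * X.mean(axis=1)                     # fake DO response

    ann_power = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, power_hi)
    ann_do = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, do_hi)

    # Crude random search over the surrogates (a GA would replace this step).
    cand = rng.uniform(0, 1, size=(5000, 24))
    do_pred = ann_do.predict(cand)
    pow_pred = ann_power.predict(cand)
    feasible = do_pred >= 5.0                  # DO constraint, mg/L
    best = cand[feasible][np.argmax(pow_pred[feasible])]
    print("best surrogate power subject to DO >= 5 mg/L:",
          pow_pred[feasible].max())
    ```

    The point of the surrogate is that each evaluation costs microseconds instead of a full hydrodynamic simulation, so thousands of candidate schedules can be screened per optimizer generation.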

  8. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE PAGES

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.; ...

    2017-10-24

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  9. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  10. Capability of GPGPU for Faster Thermal Analysis Used in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Takaki, Ryoji; Akita, Takeshi; Shima, Eiji

    A thermal mathematical model plays an important role in on-orbit operations as well as in spacecraft thermal design. The thermal mathematical model has some uncertain thermal characteristic parameters, such as thermal contact resistances between components and effective emittances of multilayer insulation (MLI) blankets, which limit the efficiency and accuracy of the model. A particle filter, one of the sequential data assimilation methods, has been applied to construct spacecraft thermal mathematical models. This method conducts many ensemble computations, which require large computational power. Recently, General Purpose computing on Graphics Processing Units (GPGPU) has attracted attention in high-performance computing. Therefore, GPGPU is applied to increase the computational speed of the thermal analysis used in the particle filter. This paper presents the resulting speed-ups as well as the method of applying GPGPU.

  11. Deterrence at the Operational Level of War

    DTIC Science & Technology

    2011-01-01

    edited, with Barry Blechman, Making Defense Reform Work (Brassey's, 1990) and has also authored numerous books and articles. Deterrence at the...computational power. In contrast, many models of rational inference view the mind as if it were a supernatural being possessing demonic powers of reason

  12. Computational Nanoelectronics and Nanotechnology at NASA ARC

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Kutler, Paul (Technical Monitor)

    1998-01-01

    Both physical and economic considerations indicate that the scaling era of CMOS will run out of steam around the year 2010. However, physical laws also indicate that it is possible to compute at a rate of a billion times present speeds with the expenditure of only one Watt of electrical power. NASA has long-term needs where ultra-small semiconductor devices are needed for critical applications: high performance, low power, compact computers for intelligent autonomous vehicles and Petaflop computing technology are some key examples. To advance the design, development, and production of future generation micro- and nano-devices, IT Modeling and Simulation Group has been started at NASA Ames with a goal to develop an integrated simulation environment that addresses problems related to nanoelectronics and molecular nanotechnology. Overview of nanoelectronics and nanotechnology research activities being carried out at Ames Research Center will be presented. We will also present the vision and the research objectives of the IT Modeling and Simulation Group including the applications of nanoelectronic based devices relevant to NASA missions.

  13. Computational Nanoelectronics and Nanotechnology at NASA ARC

    NASA Technical Reports Server (NTRS)

    Saini, Subhash

    1998-01-01

    Both physical and economic considerations indicate that the scaling era of CMOS will run out of steam around the year 2010. However, physical laws also indicate that it is possible to compute at a rate of a billion times present speeds with the expenditure of only one Watt of electrical power. NASA has long-term needs where ultra-small semiconductor devices are needed for critical applications: high performance, low power, compact computers for intelligent autonomous vehicles and Petaflop computing technology are some key examples. To advance the design, development, and production of future generation micro- and nano-devices, the IT Modeling and Simulation Group has been started at NASA Ames with a goal to develop an integrated simulation environment that addresses problems related to nanoelectronics and molecular nanotechnology. An overview of nanoelectronics and nanotechnology research activities being carried out at Ames Research Center will be presented. We will also present the vision and the research objectives of the IT Modeling and Simulation Group including the applications of nanoelectronic based devices relevant to NASA missions.

  14. Algorithmic-Reducibility = Renormalization-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') Replacing CRUTCHES!!!: Gauss Modular/Clock-Arithmetic Congruences = Signal X Noise PRODUCTS..

    NASA Astrophysics Data System (ADS)

    Siegel, J.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997) algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!

  15. Visual Analytics for Power Grid Contingency Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu

    2014-01-20

    Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.
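
    At its core, contingency analysis enumerates outages and re-solves the network. A minimal N-1 screening sketch on a 4-bus DC power flow (toy data, nowhere near the 100-million-scenario pipeline described above) looks like this:

    ```python
    import numpy as np

    # Toy 4-bus DC power flow for N-1 line-outage screening (illustrative).
    lines = [(0, 1, 0.1), (1, 2, 0.2), (2, 3, 0.1), (0, 3, 0.2), (1, 3, 0.25)]
    P = np.array([1.0, -0.4, -0.3, -0.3])   # net injections; bus 0 is slack
    limit = 0.9                              # per-line flow limit (p.u.)

    def dc_flows(active):
        # Build the susceptance matrix from the in-service lines.
        B = np.zeros((4, 4))
        for (i, j, x) in active:
            b = 1.0 / x
            B[i, i] += b; B[j, j] += b
            B[i, j] -= b; B[j, i] -= b
        theta = np.zeros(4)
        theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])   # slack angle = 0
        return [(i, j, (theta[i] - theta[j]) / x) for (i, j, x) in active]

    for out in range(len(lines)):
        active = [l for k, l in enumerate(lines) if k != out]
        overloads = [(i, j, f) for (i, j, f) in dc_flows(active) if abs(f) > limit]
        print(f"outage of line {lines[out][:2]}: overloads {overloads}")
    ```

    Real contingency screening repeats this for every credible outage combination on AC models, which is what drives the scale discussed in the record above.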

  16. Computational Investigation of a Boundary-Layer Ingesting Propulsion System for the Common Research Model

    NASA Technical Reports Server (NTRS)

    Blumenthal, Brennan T.; Elmiligui, Alaa; Geiselhart, Karl A.; Campbell, Richard L.; Maughmer, Mark D.; Schmitz, Sven

    2016-01-01

    The present paper examines potential propulsive and aerodynamic benefits of integrating a Boundary-Layer Ingestion (BLI) propulsion system into a typical commercial aircraft using the Common Research Model (CRM) geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment is used to generate engine conditions for CFD analysis. Improvements to the BLI geometry are made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model are shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.4% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from Boundary-Layer Ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.

  17. Computational Investigation of a Boundary-Layer Ingestion Propulsion System for the Common Research Model

    NASA Technical Reports Server (NTRS)

    Blumenthal, Brennan

    2016-01-01

    This thesis will examine potential propulsive and aerodynamic benefits of integrating a boundary-layer ingestion (BLI) propulsion system with a typical commercial aircraft using the Common Research Model geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment will be used to generate engine conditions for CFD analysis. Improvements to the BLI geometry will be made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model are shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.3% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from boundary-layer ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.

  18. Development of hybrid computer plasma models for different pressure regimes

    NASA Astrophysics Data System (ADS)

    Hromadka, Jakub; Ibehej, Tomas; Hrach, Rudolf

    2016-09-01

    With the increased performance of contemporary computers over recent decades, numerical simulations have become a very powerful tool in plasma physics research. Plasma is generally an ensemble of mutually interacting particles that is out of thermodynamic equilibrium, and for this reason fluid plasma models give results of only limited accuracy. On the other hand, the much more precise particle models are often limited to 2D problems because of their huge demands on computer resources. Our contribution is devoted to hybrid modelling techniques that combine the advantages of both approaches, particularly to their so-called iterative version. The study focuses on the mutual relations between fluid and particle models, demonstrated on calculations of the sheath structure of a low-temperature argon plasma near a cylindrical Langmuir probe at medium and higher pressures. Results of a simple iterative hybrid plasma computer model are also given. The authors acknowledge the support of the Grant Agency of Charles University in Prague (project 220215).

  19. Analysis of in situ electric field and specific absorption rate in human models for wireless power transfer system with induction coupling.

    PubMed

    Sunohara, Tetsu; Hirata, Akimasa; Laakso, Ilkka; Onishi, Teruo

    2014-07-21

    This study investigates the specific absorption rate (SAR) and the in situ electric field in anatomically based human models for the magnetic field from an inductive wireless power transfer system developed on the basis of the specifications of the Wireless Power Consortium. The transfer system consists of two induction coils covered by magnetic sheets. Both the waiting and charging conditions are considered. The transfer frequency considered in this study is 140 kHz, which is within the range where the magneto-quasi-static approximation is valid. The SAR and in situ electric field in the chest and arm of the models are calculated by numerically solving the scalar potential finite difference equation. The electromagnetic modelling of the coils in the wireless power transfer system is verified by comparing the computed and measured magnetic field distributions. The results indicate that the peak value of the SAR averaged over 10 g of tissue and that of the in situ electric field are 72 nW kg(-1) and 91 mV m(-1) for a transmitted power of 1 W. Consequently, the maximum allowable transmitted powers satisfying the exposure limits of the SAR (2 W kg(-1)) and the in situ electric field (18.9 V m(-1)) are found to be 28 MW and 43 kW, respectively. The computational results show that the in situ electric field in the chest is the most restrictive factor when compliance of the wireless power transfer system with international guidelines is evaluated.
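
    The reported maximum allowable powers follow from simple scaling of the 1 W reference results: SAR scales linearly with transmitted power, while the induced field scales with its square root. The arithmetic, reproduced as a check on the abstract's numbers:

    ```python
    # Scaling from the reported 1 W reference values (taken from the
    # abstract above); only the scaling laws themselves are assumed here.
    sar_ref = 72e-9        # W/kg at 1 W transmitted
    e_ref = 91e-3          # V/m at 1 W transmitted
    sar_limit = 2.0        # W/kg (10 g averaged)
    e_limit = 18.9         # V/m

    p_max_sar = sar_limit / sar_ref          # SAR ~ P        -> ~2.8e7 W
    p_max_e = (e_limit / e_ref) ** 2         # E ~ sqrt(P)    -> ~4.3e4 W
    print(f"max power from SAR limit:     {p_max_sar / 1e6:.0f} MW")
    print(f"max power from E-field limit: {p_max_e / 1e3:.0f} kW")
    ```

    Both reproduce the 28 MW and 43 kW figures, confirming that the in situ electric field, not the SAR, is the binding constraint.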

  20. Proactive monitoring of an onshore wind farm through lidar measurements, SCADA data and a data-driven RANS solver

    NASA Astrophysics Data System (ADS)

    Iungo, Giacomo Valerio; Camarri, Simone; Ciri, Umberto; El-Asha, Said; Leonardi, Stefano; Rotea, Mario A.; Santhanagopalan, Vignesh; Viola, Francesco; Zhan, Lu

    2016-11-01

    Site conditions, such as topography and local climate, as well as wind farm layout strongly affect the performance of a wind power plant. Therefore, predictions of wake interactions and their effects on power production still remain a great challenge in wind energy. For this study, an onshore wind turbine array was monitored through lidar measurements, SCADA and met-tower data. Power losses due to wake interactions were estimated to be approximately 4% and 2% of the total power production under stable and convective conditions, respectively. This dataset was then leveraged for the calibration of a data-driven RANS (DDRANS) solver, which is a compelling tool for prediction of wind turbine wakes and power production. DDRANS is characterized by a computational cost as low as that of engineering wake models, with adequate accuracy achieved through data-driven tuning of the turbulence closure model. DDRANS is based on a parabolic formulation and on axisymmetry and boundary layer approximations, which allow achieving low computational costs. The turbulence closure model consists of a mixing-length model, which is optimally calibrated with the experimental dataset. Assessment of DDRANS is then performed through lidar and SCADA data for different atmospheric conditions. This material is based upon work supported by the National Science Foundation under the I/UCRC WindSTAR, NSF Award IIP 1362033.

  1. Computer simulation: A modern day crystal ball?

    NASA Technical Reports Server (NTRS)

    Sham, Michael; Siprelle, Andrew

    1994-01-01

    It has long been the desire of managers to be able to look into the future and predict the outcome of decisions. With the advent of computer simulation and the tremendous capability provided by personal computers, that desire can now be realized. This paper presents an overview of computer simulation and modeling, and discusses the capabilities of Extend. Extend is an icon-driven, Macintosh-based software tool that brings the power of simulation to the average computer user. An example of an Extend-based model is presented in the form of the Space Transportation System (STS) Processing Model. The STS Processing Model produces eight shuttle launches per year, yet it takes only about ten minutes to run. In addition, statistical data such as facility utilization, wait times, and processing bottlenecks are produced. The addition or deletion of resources, such as orbiters or facilities, can be easily modeled and their impact analyzed. Through the use of computer simulation, it is possible to look into the future to see the impact of today's decisions.

  2. A Computational Model for Predicting Gas Breakdown

    NASA Astrophysics Data System (ADS)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regard to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
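
    A single-electron model of the kind mentioned reduces to integrating the equation of motion in a prescribed time-varying field. A minimal RK4 sketch follows (1-D, collisionless, illustrative parameters; not the Mathematica code described above):

    ```python
    import numpy as np

    # Single-electron trajectory in a sinusoidal E field, integrated with RK4.
    q, m = -1.602e-19, 9.109e-31      # electron charge (C) and mass (kg)
    E0, f = 1.0e4, 1.0e8              # field amplitude (V/m), frequency (Hz)

    def accel(t):
        return (q / m) * E0 * np.sin(2 * np.pi * f * t)

    def rk4_step(t, x, v, dt):
        k1x, k1v = v, accel(t)
        k2x, k2v = v + 0.5 * dt * k1v, accel(t + 0.5 * dt)
        k3x, k3v = v + 0.5 * dt * k2v, accel(t + 0.5 * dt)
        k4x, k4v = v + dt * k3v, accel(t + dt)
        return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
                v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

    t, x, v, dt = 0.0, 0.0, 0.0, 1e-11
    peak_ev = 0.0
    for _ in range(20000):            # 20 field periods, 1000 steps each
        x, v = rk4_step(t, x, v, dt)
        t += dt
        peak_ev = max(peak_ev, 0.5 * m * v**2 / 1.602e-19)
    # Comparing the peak energy against the gas ionization energy
    # (~15.8 eV for argon) gives a crude indicator of possible breakdown.
    print(f"peak electron energy: {peak_ev:.1f} eV")
    ```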

  3. Interior Noise Predictions in the Preliminary Design of the Large Civil Tiltrotor (LCTR2)

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.; Cabell, Randolph H.; Boyd, David D.

    2013-01-01

    A prediction scheme was established to compute sound pressure levels in the interior of a simplified cabin model of the second generation Large Civil Tiltrotor (LCTR2) during cruise conditions, while being excited by turbulent boundary layer flow over the fuselage, or by tiltrotor blade loading and thickness noise. Finite element models of the cabin structure, interior acoustic space, and acoustically absorbent (poro-elastic) materials in the fuselage were generated and combined into a coupled structural-acoustic model. Fluctuating power spectral densities were computed according to the Efimtsov turbulent boundary layer excitation model. Noise associated with the tiltrotor blades was predicted in the time domain as fluctuating surface pressures and converted to power spectral densities at the fuselage skin finite element nodes. A hybrid finite element (FE) approach was used to compute the low frequency acoustic cabin response over the frequency range 6-141 Hz with a 1 Hz bandwidth, and the Statistical Energy Analysis (SEA) approach was used to predict the interior noise for the 125-8000 Hz one-third octave bands.

  4. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    NASA Astrophysics Data System (ADS)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!

  5. Theoretical assessment of the maximum obtainable power in wireless power transfer constrained by human body exposure limits in a typical room scenario.

    PubMed

    Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai

    2014-07-07

    In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.

  6. Theoretical assessment of the maximum obtainable power in wireless power transfer constrained by human body exposure limits in a typical room scenario

    NASA Astrophysics Data System (ADS)

    Chen, Xi Lin; De Santis, Valerio; Esai Umenei, Aghuinyue

    2014-07-01

    In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.

  7. Predicting Cloud Computing Technology Adoption by Organizations: An Empirical Integration of Technology Acceptance Model and Theory of Planned Behavior

    ERIC Educational Resources Information Center

    Ekufu, ThankGod K.

    2012-01-01

    Organizations are finding it difficult in today's economy to implement the vast information technology infrastructure required to effectively conduct their business operations. Despite the fact that some of these organizations are leveraging the computational power and the cost-saving benefits of computing on the Internet cloud, others…

  8. Massive Exploration of Perturbed Conditions of the Blood Coagulation Cascade through GPU Parallelization

    PubMed Central

    Cazzaniga, Paolo; Nobile, Marco S.; Besozzi, Daniela; Bellini, Matteo; Mauri, Giancarlo

    2014-01-01

    The introduction of general-purpose Graphics Processing Units (GPUs) is boosting scientific applications in Bioinformatics, Systems Biology, and Computational Biology. In these fields, the use of high-performance computing solutions is motivated by the need to perform large numbers of in silico analyses to study the behavior of biological systems in different conditions, which require computing power that usually exceeds the capability of standard desktop computers. In this work we present coagSODA, a CUDA-powered computational tool that was purposely developed for the analysis of a large mechanistic model of the blood coagulation cascade (BCC), defined according to both mass-action kinetics and Hill functions. coagSODA allows the execution of parallel simulations of the dynamics of the BCC by automatically deriving the system of ordinary differential equations and then exploiting the numerical integration algorithm LSODA. We present the biological results achieved with a massive exploration of perturbed conditions of the BCC, carried out with one-dimensional and bi-dimensional parameter sweep analysis, and show that GPU-accelerated parallel simulations of this model can increase the computational performances up to a 181× speedup compared to the corresponding sequential simulations. PMID:25025072
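
    The parameter-sweep workflow is easy to picture in miniature. The sketch below sweeps one rate constant of a toy mass-action system with SciPy's LSODA integrator; coagSODA's actual model and its CUDA parallelization are far larger, and this stand-in runs sequentially.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # A two-species mass-action toy (A -> B at rate k) standing in for the
    # BCC; the loop over k mimics a one-dimensional parameter sweep.
    def rhs(t, y, k):
        a, b = y
        return [-k * a, k * a]

    for k in np.logspace(-2, 1, 7):
        sol = solve_ivp(rhs, (0, 10), [1.0, 0.0], args=(k,), method="LSODA")
        print(f"k={k:8.3f}  A(10)={sol.y[0, -1]:.4f}")
    ```

    The GPU version's advantage comes from running thousands of such integrations concurrently, one per parameter combination, rather than from speeding up any single integration.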

  9. Design and characterisation of a phased antenna array for intact breast hyperthermia.

    PubMed

    Curto, Sergio; Garcia-Miquel, Aleix; Suh, Minyoung; Vidal, Neus; Lopez-Villegas, Jose M; Prakash, Punit

    2018-05-01

    Currently available hyperthermia technology is not well suited to treating cancer malignancies in the intact breast. This study investigates a microwave applicator incorporating multiple patch antennas, with the goal of facilitating controllable power deposition profiles for treating lesions at diverse locations within the intact breast. A 3D-computational model was implemented to assess power deposition profiles with 915 MHz applicators incorporating a hemispheric groundplane and configurations of 2, 4, 8, 12, 16 and 20 antennas. Hemispheric breast models of 90 mm and 150 mm diameter were considered, where cuboid target volumes of 10 mm edge length (1 cm^3) and 30 mm edge length (27 cm^3) were positioned at the centre of the breast, and also located 15 mm from the chest wall. The average power absorption (αPA) ratio expressed as the ratio of the PA in the target volume and in the full breast was evaluated. A 4-antenna proof-of-concept array was fabricated and experimentally evaluated. Computational models identified an optimal inter-antenna spacing of 22.5° along the applicator circumference. Applicators with 8 and 12 antennas excited with constant phase presented the highest αPA at centrally located and deep-seated targets, respectively. Experimental measurements with a 4-antenna proof-of-concept array illustrated the potential for electrically steering power deposition profiles by adjusting the relative phase of the signal at antenna inputs. Computational models and experimental results suggest that the proposed applicator may have potential for delivering conformal thermal therapy in the intact breast.

  10. Efficient Bayesian mixed model analysis increases association power in large cohorts

    PubMed Central

    Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L

    2014-01-01

    Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN^2) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
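
    The key cost saving is solving with the genetic relationship matrix rather than forming or inverting it. A toy version of that idea (not BOLT-LMM's algorithm, and without its Bayesian mixture prior) uses conjugate gradients with an implicit matrix-vector product costing O(MN) per iteration:

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    # Sketch: solve (h2*K + (1-h2)*I) x = y with CG, where K = G G^T / M
    # is never formed explicitly; all data are synthetic.
    rng = np.random.default_rng(0)
    N, M, h2 = 1000, 5000, 0.25
    G = rng.standard_normal((N, M))          # standardized genotypes (toy)
    y = rng.standard_normal(N)               # phenotype (toy)

    def matvec(v):
        # K v costs O(MN) via two matrix-vector products with G.
        return h2 * (G @ (G.T @ v)) / M + (1 - h2) * v

    V = LinearOperator((N, N), matvec=matvec)
    x, info = cg(V, y)                       # V^{-1} y without O(N^2) storage
    assert info == 0
    # Crude association statistic for SNP m: (g_m^T V^{-1} y)^2 / (g_m^T g_m)
    chi2 = (G.T @ x) ** 2 / np.einsum("nm,nm->m", G, G)
    print("top toy association statistic:", chi2.max())
    ```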

  11. Power-constrained supercomputing

    NASA Astrophysics Data System (ADS)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. 
When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
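
    The LP formulation described above can be miniaturized. The sketch below (invented timings and wattages, standing in for the dissertation's schedules over DVFS states and thread counts) chooses fractional configuration weights per code section to minimize total time under an average power cap:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy LP schedule: pick one (DVFS state, thread count) configuration per
    # code section, relaxed to fractional choices as in an LP relaxation.
    time = np.array([[1.0, 0.7, 0.55],    # section 0: time per config (s)
                     [2.0, 1.5, 1.2]])    # section 1
    power = np.array([[40., 60., 80.],    # watts per config
                      [35., 55., 75.]])
    cap = 60.0                             # average power bound (W)

    S, C = time.shape
    c = time.ravel()                       # minimize total time
    # One configuration per section (weights sum to 1):
    A_eq = np.zeros((S, S * C)); b_eq = np.ones(S)
    for s in range(S):
        A_eq[s, s*C:(s+1)*C] = 1.0
    # Average power cap: sum w*p*t <= cap * sum w*t, i.e. sum w*t*(p-cap) <= 0
    A_ub = ((power - cap) * time).ravel()[None, :]
    b_ub = np.array([0.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * (S * C))
    print("fractional schedule:\n", res.x.reshape(S, C).round(2))
    ```

    The fractional relaxation gives the theoretical bound; rounding to integral choices (or the ILP variant mentioned above) gives implementable schedules.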

  12. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200 times faster solution.
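
    The contrast with imperative code can be illustrated with a few lines of computer algebra. The sketch below uses SymPy as a stand-in for a Modelica-style tool, with a hypothetical one-zone energy balance: a relation is declared, and symbolic manipulation decides what to solve for and generates numeric code.

    ```python
    import sympy as sp

    # Equation-based flavor: declare a relation, not an assignment order.
    T_room, T_out, Q_hvac, UA = sp.symbols("T_room T_out Q_hvac UA",
                                           positive=True)

    # Steady-state energy balance declared as a relation:
    balance = sp.Eq(UA * (T_room - T_out), Q_hvac)

    # Symbolic manipulation picks the unknown at "compile" time and
    # generates efficient numeric code for it:
    q_expr = sp.solve(balance, Q_hvac)[0]
    f = sp.lambdify((T_room, T_out, UA), q_expr)
    print(f(21.0, 5.0, 120.0))   # required HVAC power (W) for this toy zone
    ```

    The same declared relation could equally be solved for T_room given Q_hvac, which is exactly the kind of re-purposing that imperative simulation code cannot do without rewriting.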

  13. Equation-based languages – A new paradigm for building energy modeling, simulation and optimization

    DOE PAGES

    Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.

    2016-04-01

    Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2,200 times faster solution.

  14. Simulating the Water Use of Thermoelectric Power Plants in the United States: Model Development and Verification

    NASA Astrophysics Data System (ADS)

    Betrie, G.; Yan, E.; Clark, C.

    2016-12-01

    Thermoelectric power plants use more freshwater than any sector except agriculture. However, there is a scarcity of information characterizing the freshwater use of these plants in the United States, which can be attributed to the lack of models and data required to conduct analysis and gain insights. Competition for freshwater among sectors will increase in the future as the amount of available freshwater becomes limited due to climate change and population growth. A model that makes use of less data is urgently needed to conduct analysis and identify adaptation strategies. The objectives of this study are to develop a model and simulate the water use of thermoelectric power plants in the United States. The developed model has heat-balance, climate, cooling-system, and optimization modules. It computes the amount of heat rejected to the environment, estimates the quantity of heat exchanged through latent and sensible heat to the environment, and computes the amount of water required per unit generation of electricity. To verify the model, we simulated a total of 876 fossil-fired, nuclear, and gas-turbine power plants with different cooling systems (CS) using 2010-2014 data obtained from the Energy Information Administration. The CS include once-through with cooling ponds, once-through without cooling ponds, recirculating with induced draft, and recirculating with natural draft. The results show that the model reproduced the observed water use per unit generation of electricity for most of the power plants. It is also noticed that the model slightly overestimates the water use during the summer period when the input water temperatures are higher. We are investigating the possible reasons for the overestimation and will address it in future work. The model could be used individually or coupled to regional models to analyze various adaptation strategies and improve the water use efficiency of thermoelectric power plants.
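
    The heat-balance core of such a model fits in a few lines. The sketch below (illustrative efficiency and latent-fraction values, not the paper's calibrated parameters) estimates evaporative water use for a recirculating cooling system from a simple energy balance:

    ```python
    # Toy heat balance for a recirculating (evaporative) cooling system.
    # All parameter values are illustrative, not from the paper's dataset.
    eta = 0.38              # net thermal efficiency of the plant
    latent_fraction = 0.85  # share of rejected heat removed by evaporation
    h_fg = 2.26e6           # J/kg, latent heat of vaporization of water

    p_elec = 1.0e6          # J of electricity (1 MJ) as the accounting unit
    q_rejected = p_elec * (1.0 / eta - 1.0)        # heat to the environment
    m_evap = latent_fraction * q_rejected / h_fg   # kg water evaporated

    # Express as liters per MWh (1 MWh = 3.6e9 J; 1 kg of water ~ 1 L):
    print(f"{m_evap * 3.6e9 / p_elec:.0f} L/MWh evaporated")
    ```

    With these illustrative numbers the balance gives roughly 2,200 L/MWh, which is the right order of magnitude for evaporative cooling and shows why cooling-system type dominates a plant's water intensity.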

  15. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  16. Multiple-Swarm Ensembles: Improving the Predictive Power and Robustness of Predictive Models and Its Use in Computational Biology.

    PubMed

    Alves, Pedro; Liu, Shuang; Wang, Daifeng; Gerstein, Mark

    2018-01-01

    Machine learning is an integral part of computational biology, and has already shown its use in various applications, such as prognostic tests. In the last few years in the non-biological machine learning community, ensembling techniques have shown their power in data mining competitions such as the Netflix challenge; however, such methods have not found wide use in computational biology. In this work, we endeavor to show how ensembling techniques can be applied to practical problems, including problems in the field of bioinformatics, and how they often outperform other machine learning techniques in both predictive power and robustness. Furthermore, we develop a methodology of ensembling, Multi-Swarm Ensemble (MSWE) by using multiple particle swarm optimizations and demonstrate its ability to further enhance the performance of ensembles.

  17. A computationally efficient technique to model depth, orientation and alignment via ray tracing in acoustic power transfer systems

    NASA Astrophysics Data System (ADS)

    Christensen, David B.; Basaeri, Hamid; Roundy, Shad

    2017-12-01

    In acoustic power transfer systems, a receiver is displaced from a transmitter by an axial depth, a lateral offset (alignment), and a rotation angle (orientation). In systems where the receiver’s position is not fixed, such as a receiver implanted in biological tissue, slight variations in depth, orientation, or alignment can cause significant variations in the received voltage and power. To address this concern, this paper presents a computationally efficient technique to model the effects of depth, orientation, and alignment via ray tracing (DOART) on received voltage and power in acoustic power transfer systems. DOART combines transducer circuit equivalent models, a modified version of Huygens principle, and ray tracing to simulate pressure wave propagation and reflection between a transmitter and a receiver in a homogeneous medium. A reflected grid method is introduced to calculate propagation distances, reflection coefficients, and initial vectors between a point on the transmitter and a point on the receiver for an arbitrary number of reflections. DOART convergence and simulation time per data point is discussed as a function of the number of reflections and elements chosen. Finally, experimental data is compared to DOART simulation data in terms of magnitude and shape of the received voltage signal.
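
    The reflected grid method resembles classical image-source constructions. As a simplified stand-in (1-D channel, scalar reflection coefficient, invented geometry; not DOART's full formulation), the sketch below sums path contributions over successive reflections:

    ```python
    import numpy as np

    # Image-source sketch: path lengths and amplitudes between a transmitter
    # at z = 0 and a receiver at z = z_rx inside a channel of depth d,
    # after r reflections off the two parallel boundaries.
    d, z_rx = 0.05, 0.03           # m (illustrative geometry)
    R = 0.6                        # pressure reflection coefficient per bounce
    k = 2 * np.pi * 1e6 / 1500.0   # wavenumber at 1 MHz, water-like medium

    def image_path(r):
        # r reflections alternate bottom/top; ceil(r/2) full depths traversed.
        return 2 * np.ceil(r / 2) * d + (-1) ** r * z_rx

    p = sum(R**r * np.exp(1j * k * image_path(r)) / image_path(r)
            for r in range(8))
    print("received pressure (arbitrary units):", abs(p))
    ```

    Each extra reflection adds an attenuated, phase-shifted copy of the signal; truncating the sum once contributions become negligible is what keeps this style of method computationally cheap.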

  18. Rich client data exploration and research prototyping for NOAA

    NASA Astrophysics Data System (ADS)

    Grossberg, Michael; Gladkova, Irina; Guch, Ingrid; Alabi, Paul; Shahriar, Fazlul; Bonev, George; Aizenman, Hannah

    2009-08-01

    Data from satellites and model simulations is increasing exponentially as observations and model computing power improve rapidly. Not only is technology producing more data, but it often comes from sources all over the world. Researchers and scientists who must collaborate are also located globally. This work presents a software design and technologies which will make it possible for groups of researchers to explore large data sets visually together without the need to download these data sets locally. The design will also make it possible to exploit high performance computing remotely and transparently to analyze and explore large data sets. Computer power, high quality sensing, and data storage capacity have improved at a rate that outstrips our ability to develop software applications that exploit these resources. It is impractical for NOAA scientists to download all of the satellite and model data that may be relevant to a given problem and the computing environments available to a given researcher range from supercomputers to only a web browser. The size and volume of satellite and model data are increasing exponentially. There are at least 50 multisensor satellite platforms collecting Earth science data. On the ground and in the sea there are sensor networks, as well as networks of ground based radar stations, producing a rich real-time stream of data. This new wealth of data would have limited use were it not for the arrival of large-scale high-performance computation provided by parallel computers, clusters, grids, and clouds. With these computational resources and vast archives available, it is now possible to analyze subtle relationships which are global, multi-modal and cut across many data sources. Researchers, educators, and even the general public, need tools to access, discover, and use vast data center archives and high performance computing through a simple yet flexible interface.

  19. Trash Diverter Orientation Angle Optimization at Run-Off River Type Hydro-power Plant using CFD

    NASA Astrophysics Data System (ADS)

    Munisamy, Kannan M.; Kamal, Ahmad; Shuaib, Norshah Hafeez; Yusoff, Mohd. Zamri; Hasini, Hasril; Rashid, Azri Zainol; Thangaraju, Savithry K.; Hamid, Hazha

    2010-06-01

    Tenom Pangi Hydro Power Station in Tenom, Sabah suffers from poor river quality with a large amount of suspended trash. This problem necessitates a trash diverter to divert the trash away from the intake region. Previously, a trash diverter (called Trash Diverter I) was installed at the site but survived only a short period of time, following an impact with a huge log carried by a heavy flood. In the current project, a second trash diverter structure (called Trash Diverter II) is designed with improved features compared to Trash Diverter I. A Computational Fluid Dynamics (CFD) analysis is performed to evaluate the interaction of the river flow with the trash diverter from the fluid flow point of view; CFD is a numerical approach to solving the fluid flow field for different inlet conditions. In this work, the river geometry is modeled using the commercial CFD code FLUENT®. The computational model consists of the Reynolds-Averaged Navier-Stokes (RANS) equations coupled with other related models using the properties of the fluids under investigation. The model is validated against site measurements taken at Tenom Pangi Hydro Power Station. Different operating conditions of river flow rate and weir opening are also considered. The optimum angle determined in this simulation will be used as input for 3D simulation and structural analysis.

  20. Jennifer van Rij | NREL

    Science.gov Websites

    Jennifer.Vanrij@nrel.gov | 303-384-7180. Jennifer's expertise is in developing computational modeling methods to simulate hydrodynamic, structural-dynamic, and power-elastic interactions. Her other diverse work experience includes developing numerical modeling methods for

  1. Nick Kincaid | NREL

    Science.gov Websites

    from Colorado School of Mines. His research interests include optical modeling, computational fluid dynamics, and heat transfer. His work involves optical performance modeling of concentrating solar power (CSP) systems; his experience includes developing thermal and optical models of CSP components at Norwich Solar Technologies

  2. A Test of Thick-Target Nonuniform Ionization as an Explanation for Breaks in Solar Flare Hard X-Ray Spectra

    NASA Technical Reports Server (NTRS)

    Holman, Gordon; Dennis, Brian R.; Tolbert, Anne K.; Schwartz, Richard

    2010-01-01

    Solar nonthermal hard X-ray (HXR) flare spectra often cannot be fitted by a single power law, but rather require a downward break in the photon spectrum. A possible explanation for this spectral break is nonuniform ionization in the emission region. We have developed a computer code to calculate the photon spectrum from electrons with a power-law distribution injected into a thick target in which the ionization decreases linearly from 100% to zero. We use the bremsstrahlung cross-section from Haug (1997), which closely approximates the full relativistic Bethe-Heitler cross-section, and compare photon spectra computed from this model with those obtained by Kontar, Brown and McArthur (2002), who used a step-function ionization model and the Kramers approximation to the cross-section. We find that for HXR spectra from a target with nonuniform ionization, the difference (Delta-gamma) between the power-law indexes above and below the break has an upper limit between approximately 0.2 and 0.7 that depends on the power-law index delta of the injected electron distribution. A broken power-law spectrum with a higher value of Delta-gamma cannot result from nonuniform ionization alone. The model is applied to spectra obtained around the peak times of 20 flares observed by the Ramaty High Energy Solar Spectroscopic Imager (RHESSI) from 2002 to 2004 to determine whether thick-target nonuniform ionization can explain the measured spectral breaks. A Monte Carlo method is used to determine the uncertainties of the best-fit parameters, especially Delta-gamma. We find that 15 of the 20 flare spectra require a downward spectral break and that at least 6 of these could not be explained by nonuniform ionization alone because they had values of Delta-gamma with less than a 2.5% probability of being consistent with the computed upper limits from the model. The remaining 9 flare spectra, based on this criterion, are consistent with the nonuniform ionization model.
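
    As a concrete illustration of the quantity Delta-gamma (our own sketch, not the authors' fitting code), the snippet below evaluates a continuous broken power-law photon spectrum and reads off the index difference that nonuniform ionization would have to explain.

    ```python
    import numpy as np

    def broken_power_law(E, A, E_break, gamma_lo, gamma_hi):
        """Photon flux with index gamma_lo below E_break and gamma_hi above,
        continuous at the break (illustrative; not the RHESSI fit code)."""
        E = np.asarray(E, dtype=float)
        low = A * E ** (-gamma_lo)
        high = A * E_break ** (gamma_hi - gamma_lo) * E ** (-gamma_hi)
        return np.where(E < E_break, low, high)

    gamma_lo, gamma_hi = 2.5, 3.4
    delta_gamma = gamma_hi - gamma_lo   # 0.9
    # Per the paper, nonuniform ionization alone caps Delta-gamma at roughly
    # 0.2-0.7 (depending on the injected electron index delta), so a fitted
    # value of 0.9 would point to an additional break mechanism.
    print(delta_gamma)
    ```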

  3. Computer experimental analysis of the CHP performance of a 100 kWe SOFC Field Unit by a factorial design

    NASA Astrophysics Data System (ADS)

    Calì, M.; Santarelli, M. G. L.; Leone, P.

    Gas Turbine Technologies (GTT) and Politecnico di Torino, both located in Torino (Italy), have been involved in the design and installation of a SOFC laboratory in order to analyse the operation, in cogenerative configuration, of the CHP 100 kWe SOFC Field Unit, built by Siemens-Westinghouse Power Corporation (SWPC), which is at present (May 2005) starting its operation and which will supply electric and thermal power to the GTT factory. In order to take better advantage of the analysis of the on-site operation, and especially to correctly design the scheduled experimental tests on the system, we developed a mathematical model and ran a simulated experimental campaign, applying a rigorous statistical approach to the analysis of the results. The aim of this work is the computer experimental analysis, through a statistical methodology (2^k factorial experiments), of the CHP 100 performance. First, the mathematical model was calibrated with the results acquired during the first CHP100 demonstration at EDB/ELSAM in Westerwoort. Afterwards, the simulated tests were performed in the form of a computer experimental session, and the measurement uncertainties were simulated with perturbations imposed on the model's independent variables. The statistical methodology used for the computer experimental analysis is the factorial design (Yates' technique): using the ANOVA technique, the effect of the main independent variables (air utilization factor U_ox, fuel utilization factor U_F, internal fuel and air preheating, and anodic recycling flow rate) is investigated in a rigorous manner. The analysis accounts for the effects of the parameters on stack electric power, thermal recovered power, single cell voltage, cell operative temperature, consumed fuel flow, and steam-to-carbon ratio. Each main effect and interaction effect of the parameters is shown, with particular attention to the generated electric power and stack heat recovered.
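
    For readers unfamiliar with 2^k factorial designs, the sketch below (a generic illustration, not the authors' SOFC model) estimates main and interaction effects from a two-level design matrix using the standard contrast formula, effect = (contrast . y) / 2^(k-1).

    ```python
    import numpy as np
    from itertools import product, combinations

    def factorial_effects(y, k):
        """Main and interaction effects of a full 2^k factorial experiment.
        y must be ordered to match the standard-order design matrix."""
        runs = np.array(list(product([-1.0, 1.0], repeat=k)))   # 2^k x k
        names, effects = [], []
        for order in range(1, k + 1):
            for cols in combinations(range(k), order):
                contrast = runs[:, cols].prod(axis=1)
                names.append("*".join(f"x{c + 1}" for c in cols))
                effects.append(contrast @ np.asarray(y) / 2 ** (k - 1))
        return dict(zip(names, effects))

    # Hypothetical responses, e.g. stack electric power (kW) for a 2^2
    # design in (air utilization U_ox, fuel utilization U_F):
    print(factorial_effects([98.1, 95.7, 103.2, 99.9], k=2))
    ```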

  4. Delivering better power: the role of simulation in reducing the environmental impact of aircraft engines.

    PubMed

    Menzies, Kevin

    2014-08-13

    The growth in simulation capability over the past 20 years has led to remarkable changes in the design process for gas turbines. The availability of relatively cheap computational power coupled to improvements in numerical methods and physical modelling in simulation codes have enabled the development of aircraft propulsion systems that are more powerful and yet more efficient than ever before. However, the design challenges are correspondingly greater, especially to reduce environmental impact. The simulation requirements to achieve a reduced environmental impact are described along with the implications of continued growth in available computational power. It is concluded that achieving the environmental goals will demand large-scale multi-disciplinary simulations requiring significantly increased computational power, to enable optimization of the airframe and propulsion system over the entire operational envelope. However even with massive parallelization, the limits imposed by communications latency will constrain the time required to achieve a solution, and therefore the position of such large-scale calculations in the industrial design process.

  5. Reducing software mass through behavior control. [of planetary roving robots

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1992-01-01

    Attention is given to the tradeoff between communication and computation for a planetary rover (both subsystems are very power-intensive, and either can be the major driver of the rover's power subsystem, and therefore of the rover's minimum mass and size). Software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced, are discussed. Novel approaches to autonomous control, called behavior control, employ an entirely different approach, and for many tasks will yield a similar or superior level of autonomy to traditional control techniques, while greatly reducing the computational demand. Traditional systems have several expensive processes that operate serially, while behavior techniques employ robot capabilities that run in parallel. Traditional systems make extensive world models, while behavior control systems use minimal world models or none at all.

  6. Allan deviation computations of a linear frequency synthesizer system using frequency domain techniques

    NASA Technical Reports Server (NTRS)

    Wu, Andy

    1995-01-01

    Allan Deviation computations of linear frequency synthesizer systems have been reported previously using real-time simulations. Even though this takes less time than the actual measurement, it is still very time consuming to compute the Allan Deviation for long sample times with the desired confidence level. Moreover, noises such as flicker phase noise and flicker frequency noise cannot be simulated precisely. The use of frequency domain techniques can overcome these drawbacks. In this paper, the system error model of a fictitious linear frequency synthesizer is developed and its performance using a Cesium (Cs) atomic frequency standard (AFS) as a reference is evaluated using frequency domain techniques. For a linear timing system, the power spectral density at the system output can be computed from known system transfer functions and known power spectral densities of the input noise sources. The resulting power spectral density can then be used to compute the Allan Variance at the system output. Sensitivities of the Allan Variance at the system output to each of its independent input noises are obtained; they are valuable for design trade-offs and troubleshooting.
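
    The PSD-to-Allan-variance step described above has a standard closed form, sigma_y^2(tau) = 2 * Int S_y(f) sin^4(pi f tau) / (pi f tau)^2 df. The sketch below (our illustration, with a made-up white-frequency-noise level, not the paper's synthesizer model) evaluates it numerically and checks it against the known analytic result.

    ```python
    import numpy as np

    def allan_variance_from_psd(S_y, tau, n=400001):
        """Allan variance at averaging time tau from a one-sided PSD S_y(f)
        of fractional frequency, via the standard transfer-function integral
        sigma_y^2(tau) = 2 * Int S_y(f) sin^4(pi f tau) / (pi f tau)^2 df."""
        f = np.linspace(1e-9, 50.0 / tau, n)   # kernel decays ~1/f^2 past 1/tau
        kernel = np.sin(np.pi * f * tau) ** 4 / (np.pi * f * tau) ** 2
        return 2.0 * np.trapz(S_y(f) * kernel, f)

    # Sanity check with white frequency noise, S_y(f) = h0, whose closed
    # form is sigma_y^2(tau) = h0 / (2 tau).
    h0, tau = 1e-22, 1.0
    print(allan_variance_from_psd(lambda f: h0 * np.ones_like(f), tau),
          h0 / (2 * tau))
    ```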

  7. Design of an Adaptive Power Regulation Mechanism and a Nozzle for a Hydroelectric Power Plant Turbine Test Rig

    NASA Astrophysics Data System (ADS)

    Mert, Burak; Aytac, Zeynep; Tascioglu, Yigit; Celebioglu, Kutay; Aradag, Selin; ETU Hydro Research Center Team

    2014-11-01

    This study deals with the design of a power regulation mechanism for a Hydroelectric Power Plant (HEPP) model turbine test system which is designed to test Francis-type hydroturbines up to 2 MW power with varying head and flow (discharge) values. Unlike the tailor-made regulation mechanisms of full-sized, operational HEPPs, the design for the test system must be easily adapted to the various turbines that are to be tested. In order to achieve this adaptability, a dynamic simulation model is constructed in MATLAB/Simulink SimMechanics. This model acquires geometric data and hydraulic loading data of the regulation system from Autodesk Inventor CAD models and Computational Fluid Dynamics (CFD) analysis, respectively. The dynamic model is explained and case studies of two different HEPPs are performed for validation. CFD-aided design of the turbine guide vanes, which is used as input for the dynamic model, is also presented. This research is financially supported by the Turkish Ministry of Development.

  8. Lunar PMAD technology assessment

    NASA Technical Reports Server (NTRS)

    Metcalf, Kenneth J.

    1992-01-01

    This report documents an initial set of power conditioning models created to generate 'ballpark' power management and distribution (PMAD) component mass and size estimates. It contains converter, rectifier, inverter, transformer, remote bus isolator (RBI), and remote power controller (RPC) models. These models allow certain studies to be performed; however, additional models are required to assess a full range of PMAD alternatives. The intent is to eventually form a library of PMAD models that will allow system designers to evaluate various power system architectures and distribution techniques quickly and consistently. The models in this report are designed primarily for space exploration initiative (SEI) missions requiring continuous power and supporting manned operations. The mass estimates were developed by identifying the stages in a component and obtaining mass breakdowns for these stages from near term electronic hardware elements. Technology advances were then incorporated to generate hardware masses consistent with the 2000 to 2010 time period. The mass of a complete component is computed by algorithms that calculate the masses of the component stages, control and monitoring, enclosure, and thermal management subsystem.
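
    The mass roll-up the report describes reduces to summing stage masses plus overhead subsystems. A minimal sketch of that idea follows; the function name and overhead fractions are our own placeholders, not the report's algorithms, which derive these quantities from near-term hardware breakdowns.

    ```python
    def component_mass(stage_masses_kg, control_frac=0.10,
                       enclosure_frac=0.15, thermal_frac=0.20):
        """Total PMAD component mass: stage masses plus control/monitoring,
        enclosure, and thermal-management overheads (placeholder fractions)."""
        stages = sum(stage_masses_kg)
        overhead = stages * (control_frac + enclosure_frac + thermal_frac)
        return stages + overhead

    # e.g. a converter with input filter, switching stage, output filter:
    print(component_mass([4.2, 7.5, 3.8]))   # kg
    ```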

  9. Photovoltaic-Model-Based Solar Irradiance Estimators: Performance Comparison and Application to Maximum Power Forecasting

    NASA Astrophysics Data System (ADS)

    Scolari, Enrica; Sossan, Fabrizio; Paolone, Mario

    2018-01-01

    Due to the increasing proportion of distributed photovoltaic (PV) production in the generation mix, the knowledge of the PV generation capacity has become a key factor. In this work, we propose to compute the PV plant maximum power starting from the indirectly-estimated irradiance. Three estimators are compared in terms of i) ability to compute the PV plant maximum power, ii) bandwidth and iii) robustness against measurements noise. The approaches rely on measurements of the DC voltage, current, and cell temperature and on a model of the PV array. We show that the considered methods can accurately reconstruct the PV maximum generation even during curtailment periods, i.e. when the measured PV power is not representative of the maximum potential of the PV array. Performance evaluation is carried out by using a dedicated experimental setup on a 14.3 kWp rooftop PV installation. Results also proved that the analyzed methods can outperform pyranometer-based estimations, with a less complex sensing system. We show how the obtained PV maximum power values can be applied to train time series-based solar maximum power forecasting techniques. This is beneficial when the measured power values, commonly used as training, are not representative of the maximum PV potential.
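
    The three estimators themselves are not reproduced in the abstract. As a rough sketch of the model-inversion idea (a deliberately simplified single-diode PV model with hypothetical parameter values, not one of the paper's estimators), one can invert a linear photocurrent-irradiance relation at the measured DC operating point and then evaluate maximum power from the array model:

    ```python
    import numpy as np

    # Simplified PV array model: photocurrent proportional to irradiance with
    # a linear temperature correction (illustrative parameter values).
    I_SC_STC, ALPHA = 8.6, 0.0005       # A at STC, 1/K temperature coefficient
    G_STC, T_STC = 1000.0, 25.0         # W/m2, deg C
    I0, N_VT = 1e-9, 1.8                # diode saturation current (A), n*Ns*Vt (V)

    def estimate_irradiance(I_dc, V_dc, T_cell):
        """Invert I = Iph - I0*(exp(V/nVt) - 1) for the irradiance implied by
        the measured operating point (no series/shunt resistance)."""
        I_ph = I_dc + I0 * np.expm1(V_dc / N_VT)
        return G_STC * I_ph / (I_SC_STC * (1 + ALPHA * (T_cell - T_STC)))

    def max_power(G, T_cell, V_grid=np.linspace(0.1, 45.0, 2000)):
        """Scan the model P-V curve at the estimated irradiance for Pmax."""
        I_ph = I_SC_STC * (1 + ALPHA * (T_cell - T_STC)) * G / G_STC
        I = I_ph - I0 * np.expm1(V_grid / N_VT)
        return float(np.max(V_grid * I))

    G_hat = estimate_irradiance(I_dc=5.1, V_dc=34.0, T_cell=41.0)
    print(G_hat, max_power(G_hat, 41.0))
    ```

    Note that, as in the paper, the operating point need not be the maximum power point: a curtailed plant still yields a valid irradiance estimate, from which the uncurtailed maximum power follows.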

  10. Redshift space clustering of galaxies and cold dark matter model

    NASA Technical Reports Server (NTRS)

    Bahcall, Neta A.; Cen, Renyue; Gramann, Mirt

    1993-01-01

    The distorting effect of peculiar velocities on the power speturm and correlation function of IRAS and optical galaxies is studied. The observed redshift space power spectra and correlation functions of IRAS and optical the galaxies over the entire range of scales are directly compared with the corresponding redshift space distributions using large-scale computer simulations of cold dark matter (CDM) models in order to study the distortion effect of peculiar velocities on the power spectrum and correlation function of the galaxies. It is found that the observed power spectrum of IRAS and optical galaxies is consistent with the spectrum of an Omega = 1 CDM model. The problems that such a model currently faces may be related more to the high value of Omega in the model than to the shape of the spectrum. A low-density CDM model is also investigated and found to be consistent with the data.

  11. Performance of wind turbines in a turbulent atmosphere

    NASA Technical Reports Server (NTRS)

    Sundar, R. M.; Sullivan, J. P.

    1981-01-01

    The effect of atmospheric turbulence on the power fluctuations of large wind turbines was studied, with emphasis on the significance of spatial non-uniformities of the wind. Turbulent wind with correlation in time and space is simulated on the computer by Shinozuka's method. The wind turbulence is modelled according to the Davenport spectrum with an exponential spatial correlation function. The rotor aerodynamics is modelled by simple blade element theory. Comparison of the power output spectra for 1-D and 3-D turbulence shows significant power fluctuations centered around the blade passage frequency.
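
    Shinozuka's spectral-representation method synthesizes a time series from a target spectrum by superposing cosines with random phases. The sketch below (our illustration, using one common form of the Davenport spectrum; the paper additionally imposes an exponential spatial coherence across the rotor, omitted here) generates a single-point longitudinal gust record.

    ```python
    import numpy as np

    def davenport_psd(f, V10=10.0, kappa=0.005):
        """One common form of the Davenport along-wind gust spectrum;
        kappa is a surface drag coefficient (illustrative values)."""
        x = 1200.0 * f / V10
        return 4.0 * kappa * V10 ** 2 * x ** 2 / (f * (1.0 + x ** 2) ** (4.0 / 3.0))

    def shinozuka_series(psd, t, f_lo=1e-3, f_hi=1.0, n_freq=512, seed=0):
        """Spectral representation (Shinozuka): superpose cosines with
        amplitudes sqrt(2*S(f_k)*df) and independent random phases."""
        rng = np.random.default_rng(seed)
        f = np.linspace(f_lo, f_hi, n_freq)
        df = f[1] - f[0]
        phases = rng.uniform(0.0, 2.0 * np.pi, n_freq)
        amps = np.sqrt(2.0 * psd(f) * df)
        u = amps[:, None] * np.cos(2.0 * np.pi * f[:, None] * t + phases[:, None])
        return u.sum(axis=0)

    t = np.arange(0.0, 600.0, 0.1)           # 10 minutes at 10 Hz
    u = shinozuka_series(davenport_psd, t)    # gust fluctuation about the mean
    print(u.std())
    ```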

  12. Performance and Power Optimization for Cognitive Processor Design Using Deep-Submicron Very Large Scale Integration (VLSI) Technology

    DTIC Science & Technology

    2010-03-01

    Dates covered: October 2008 - October 2009. Topics include cognitive models and algorithms for intelligent text recognition (e.g., a Brain-State-in-a-Box neural network model), an ASIC-style design and synthesis flow for a floating-point unit (FPU), final chip layouts, and a projected performance and power roadmap.

  13. A method of computer modelling the lithium-ion batteries aging process based on the experimental characteristics

    NASA Astrophysics Data System (ADS)

    Czerepicki, A.; Koniak, M.

    2017-06-01

    The paper presents a method of modelling the aging processes of lithium-ion batteries, its implementation as a computer application, and results for battery state estimation. The authors use a previously developed behavioural battery model, which was built using battery operating characteristics obtained from experiment. This model was implemented in the form of a computer program using a database to store the battery characteristics. The battery aging process is a new, extended functionality of the model. The simulation algorithm uses real measurements of battery capacity as a function of the number of battery charge and discharge cycles. The simulation takes into account incomplete charge or discharge cycles, which are characteristic of transport powered by electricity. The developed model was used to simulate battery state estimation for different load profiles, obtained by measuring the movement of selected means of transport.
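
    The central idea, driving a measured capacity-vs-cycles characteristic with fractional (incomplete) cycles, can be sketched as an equivalent-full-cycle counter plus interpolation into the measured curve. This is our simplification with hypothetical capacity data; the paper's behavioural model is database-driven and more detailed.

    ```python
    import numpy as np

    # Hypothetical measured aging characteristic: remaining capacity (Ah)
    # after a given number of full charge/discharge cycles.
    CYCLES   = np.array([0, 200, 400, 600, 800, 1000])
    CAPACITY = np.array([40.0, 38.6, 37.4, 36.1, 34.5, 32.8])

    class AgingCounter:
        """Accumulates charge throughput and converts it to equivalent full
        cycles, so partial cycles typical of transport duty are counted."""
        def __init__(self, nominal_ah=40.0):
            self.nominal_ah = nominal_ah
            self.throughput_ah = 0.0

        def add_step(self, current_a, dt_h):
            self.throughput_ah += abs(current_a) * dt_h

        def capacity(self):
            # One equivalent full cycle = 2 * nominal capacity of throughput
            # (one full charge plus one full discharge).
            efc = self.throughput_ah / (2.0 * self.nominal_ah)
            return np.interp(efc, CYCLES, CAPACITY)

    bat = AgingCounter()
    for current, hours in [(-25.0, 0.4), (10.0, 1.0)] * 500:   # a load profile
        bat.add_step(current, hours)
    print(bat.capacity())
    ```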

  14. Application of SLURM, BOINC, and GlusterFS as Software System for Sustainable Modeling and Data Analytics

    NASA Astrophysics Data System (ADS)

    Kashansky, Vladislav V.; Kaftannikov, Igor L.

    2018-02-01

    Modern numerical modeling experiments and data analytics problems in various fields of science and technology reveal a wide variety of serious requirements for distributed computing systems. Many scientific computing projects sometimes exceed the available resource pool limits, requiring extra scalability and sustainability. In this paper we share our experience and findings on combining the power of SLURM, BOINC, and GlusterFS as a software system for scientific computing. In particular, we suggest a complete architecture and highlight important aspects of systems integration.

  15. The Future of Electronic Device Design: Device and Process Simulation Find Intelligence on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.

    1999-01-01

    We are on the path to meet the major challenges ahead for TCAD (technology computer aided design). The emerging computational grid will ultimately solve the challenge of limited computational power. The Modular TCAD Framework will solve the TCAD software challenge once TCAD software developers realize that there is no other way to meet industry's needs. The modular TCAD framework (MTF) also provides the ideal platform for solving the TCAD model challenge by rapid implementation of models in a partial differential solver.

  16. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

    Two methodologies for the reliability assessment of fault tolerant digital computer based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is based on the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.

  17. High Performance Computing for Modeling Wind Farms and Their Impact

    NASA Astrophysics Data System (ADS)

    Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.

    2016-12-01

    As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental-scale flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development, as well as improvement in wind plant performance and enhancements to the transmission infrastructure, will also be discussed.

  18. Full cell simulation and the evaluation of the buffer system on air-cathode microbial fuel cell

    NASA Astrophysics Data System (ADS)

    Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.; Regan, John M.; Mench, Matthew M.

    2017-04-01

    This paper presents a computational model of a single chamber, air-cathode MFC. The model considers losses due to mass transport, as well as biological and electrochemical reactions, in both the anode and cathode half-cells. Computational fluid dynamics and Monod-Nernst analysis are incorporated into the reactions for the anode biofilm and cathode Pt catalyst and biofilm. The integrated model provides a macro-perspective of the interrelation between the anode and cathode during power production, while incorporating microscale contributions of mass transport within the anode and cathode layers. Model considerations include the effects of pH (H+/OH- transport) and electric field-driven migration on concentration overpotential, effects of various buffers and various amounts of buffer on the pH in the whole reactor, and overall impacts on the power output of the MFC. The simulation results fit the experimental polarization and power density curves well. Further, this model provides insight regarding mass transport at varying current density regimes and quantitative delineation of overpotentials at the anode and cathode. Overall, this comprehensive simulation is designed to accurately predict MFC performance based on fundamental fluid and kinetic relations and guide optimization of the MFC system.
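
    The Monod-Nernst coupling mentioned above is commonly written in the bioelectrochemistry literature in a Nernst-Monod form; as a gloss (our notation, not an equation quoted from the paper), the anode current density j can be expressed as

    $$ j \;=\; j_{\max}\;\frac{S}{K_S + S}\;\frac{1}{1 + \exp\!\bigl(-F\eta/(RT)\bigr)} $$

    where S is the substrate concentration, K_S the half-saturation constant, eta the local anode overpotential, and F, R, T have their usual meanings.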

  19. Application of enhanced modern structured analysis techniques to Space Station Freedom electric power system requirements

    NASA Technical Reports Server (NTRS)

    Biernacki, John; Juhasz, John; Sadler, Gerald

    1991-01-01

    A team of Space Station Freedom (SSF) system engineers is in the process of extensive analysis of the SSF requirements, particularly those pertaining to the electrical power system (EPS). The objective of this analysis is the development of a comprehensive, computer-based requirements model, using an enhanced modern structured analysis methodology (EMSA). Such a model provides a detailed and consistent representation of the system's requirements. The process outlined in the EMSA methodology is unique in that it allows the graphical modeling of real-time system state transitions, as well as functional requirements and data relationships, to be implemented using modern computer-based tools. These tools permit flexible updating and continuous maintenance of the models. Initial findings resulting from the application of EMSA to the EPS have benefited the space station program by linking requirements to design, providing traceability of requirements, identifying discrepancies, and fostering an understanding of the EPS.

  20. Software systems for modeling articulated figures

    NASA Technical Reports Server (NTRS)

    Phillips, Cary B.

    1989-01-01

    Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet ensure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.

  1. Development of Intelligent Computer-Assisted Instruction Systems to Facilitate Reading Skills of Learning-Disabled Children

    DTIC Science & Technology

    1993-12-01

    The purpose of this thesis is to develop a high-level model to create self-adapting software which teaches learning... stimulating and demanding. The power of the system model described herein is that it can vary as needed by the individual student. The system will

  2. Compression of magnetized target in the magneto-inertial fusion

    NASA Astrophysics Data System (ADS)

    Kuzenov, V. V.

    2017-12-01

    This paper presents a mathematical model, a numerical method, and results of the computer analysis of the compression process and the energy transfer in the target plasma used in magneto-inertial fusion. The computer simulation of the compression of a magnetized cylindrical target by a high-power laser pulse is presented.

  3. Other Cosmic Ray Links

    Science.gov Websites

    curriculum for its course Physics In and Through Cosmology. The Distributed Observatory aims to become the world's largest cosmic ray telescope, using the distributed sensing and computing power of the world's cell phones. Modeled after the distributed computing efforts of SETI@Home and Folding@Home, the

  4. How Computer-Assisted Teaching in Physics Can Enhance Student Learning

    ERIC Educational Resources Information Center

    Karamustafaoglu, O.

    2012-01-01

    Simple harmonic motion (SHM) is an important topic for physics or science students and has wide applications all over the world. Computer simulations are applications of special interest in physics teaching because they support powerful modeling environments involving physics concepts. This article aims to compare the effect of…

  5. DEVELOPMENT AND APPLICATIONS OF CFD SIMULATIONS SUPPORTING URBAN AIR QUALITY AND HOMELAND SECURITY

    EPA Science Inventory

    Prior to September 11, 2001, developments of Computational Fluid Dynamics (CFD) were begun to support air quality applications. CFD models are emerging as a promising technology for such assessments, in part due to the advancing power of computational hardware and software. CFD si...

  6. An Interactive Graphical Modeling Game for Teaching Musical Concepts.

    ERIC Educational Resources Information Center

    Lamb, Martin

    1982-01-01

    Describes an interactive computer game in which players compose music at a computer screen. They experiment with pitch and melodic shape and the effects of transposition, augmentation, diminution, retrograde, and inversion. The user interface is simple enough for children to use and powerful enough for composers to work with. (EAO)

  7. Profiling an application for power consumption during execution on a compute node

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-09-17

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
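
    The patented workflow can be paraphrased as: look up per-operation power from a hardware profile, weight it by how the application spends its time, and report the result. A minimal sketch under that reading follows; the dictionary keys and wattages are hypothetical, and this is not the patented implementation.

    ```python
    # Hypothetical per-operation power draw for one compute node (watts).
    HARDWARE_POWER_PROFILE = {
        "flops": 95.0, "memory": 40.0, "network": 25.0, "idle": 60.0,
    }

    def application_power_profile(op_time_fractions):
        """Weight the node's per-operation power by the fraction of runtime
        the application spends in each operation class, then report."""
        assert abs(sum(op_time_fractions.values()) - 1.0) < 1e-6
        profile = {op: HARDWARE_POWER_PROFILE[op] * frac
                   for op, frac in op_time_fractions.items()}
        profile["total_watts"] = sum(profile.values())
        return profile

    # e.g. a memory-bound solver phase:
    print(application_power_profile(
        {"flops": 0.25, "memory": 0.55, "network": 0.10, "idle": 0.10}))
    ```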

  8. Institute for Sustained Performance, Energy, and Resilience (SuPER)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jagode, Heike; Bosilca, George; Danalis, Anthony

    The University of Tennessee (UTK) and University of Texas at El Paso (UTEP) partnership supported the three main thrusts of the SUPER project---performance, energy, and resilience. The UTK-UTEP effort thus helped advance the main goal of SUPER, which was to ensure that DOE's computational scientists can successfully exploit the emerging generation of high performance computing (HPC) systems. This goal is being met by providing application scientists with strategies and tools to productively maximize performance, conserve energy, and attain resilience. The primary vehicle through which UTK provided performance measurement support to SUPER and the larger HPC community is the Performance Application Programming Interface (PAPI). PAPI is an ongoing project that provides a consistent interface and methodology for collecting hardware performance information from various hardware and software components, including most major CPUs, GPUs and accelerators, interconnects, I/O systems, and power interfaces, as well as virtual cloud environments. The PAPI software is widely used for performance modeling of scientific and engineering applications---for example, the HOMME (High Order Methods Modeling Environment) climate code, and the GAMESS and NWChem computational chemistry codes---on DOE supercomputers. PAPI is widely deployed as middleware for use by higher-level profiling, tracing, and sampling tools (e.g., CrayPat, HPCToolkit, Scalasca, Score-P, TAU, Vampir, PerfExpert), making it the de facto standard for hardware counter analysis. PAPI has established itself as fundamental software infrastructure in every application domain (spanning academia, government, and industry), where improving performance can be mission critical. Ultimately, as more application scientists migrate their applications to HPC platforms, they will benefit from the extended capabilities this grant brought to PAPI to analyze and optimize performance in these environments, whether they use PAPI directly, or via third-party performance tools. Capabilities added to PAPI through this grant include support for new architectures such as the latest GPU and Xeon Phi accelerators, and advanced power measurement and management features. Another important topic for the UTK team was providing support for a rich ecosystem of different fault management strategies in the context of parallel computing. Our long term efforts have been oriented toward proposing flexible strategies and providing building blocks that application developers can use to build the most efficient fault management technique for their application. These efforts span the entire software spectrum, from theoretical models of existing strategies to easily assess their performance, to algorithmic modifications to take advantage of specific mathematical properties for data redundancy, and to extensions to widely used programming paradigms to empower the application developers to deal with all types of faults. We have also continued our tight collaborations with users to help them adopt these technologies to ensure their applications always deliver meaningful scientific data. Large supercomputer systems are becoming more and more power and energy constrained, and future systems and applications running on them will need to be optimized to run under power caps and/or minimize energy consumption. The UTEP team contributed to the SUPER energy thrust by developing power modeling methodologies and investigating power management strategies. Scalability modeling results showed that some applications can scale better with respect to an increasing power budget than with respect to only the number of processors. Power management, in particular shifting power to processors on the critical path of an application execution, can reduce perturbation due to system noise and other sources of runtime variability, which are growing problems on large-scale power-constrained computer systems.

  9. A computer model of solar panel-plasma interactions

    NASA Technical Reports Server (NTRS)

    Cooke, D. L.; Freeman, J. W.

    1980-01-01

    High power solar arrays for satellite power systems are presently being planned with dimensions of kilometers, and with tens of kilovolts distributed over their surface. Such systems face many plasma interaction problems, such as power leakage to the plasma, particle focusing, and anomalous arcing. These effects cannot be adequately modeled without detailed knowledge of the plasma sheath structure and space charge effects. Laboratory studies of a 1 by 10 meter solar array in a simulated low Earth orbit plasma are discussed. The plasma screening process is discussed, the program theory is outlined, and a series of calibration models is presented. These models are designed to demonstrate that PANEL is capable of accurate self-consistent space charge calculations. Such models include PANEL predictions for the Child-Langmuir diode problem.
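
    For reference, the Child-Langmuir diode used to calibrate PANEL has the classical space-charge-limited current density (a standard result, quoted here for context rather than taken from the paper):

    $$ J \;=\; \frac{4\,\varepsilon_0}{9}\,\sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{d^{2}} $$

    where V is the gap voltage and d the electrode spacing; a self-consistent space-charge solver should reproduce this scaling.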

  10. Cinema Fire Modelling by FDS

    NASA Astrophysics Data System (ADS)

    Glasa, J.; Valasek, L.; Weisenpacher, P.; Halada, L.

    2013-02-01

    Recent advances in computational fluid dynamics (CFD) and the rapid increase of computational power of current computers have led to the development of CFD models capable of describing fire in complex geometries, incorporating a wide variety of physical phenomena related to fire. In this paper, we demonstrate the use of the Fire Dynamics Simulator (FDS) for cinema fire modelling. FDS is an advanced CFD system intended for simulation of fire and smoke spread and prediction of thermal flows, toxic substance concentrations, and other relevant parameters of fire. The course of a fire in a cinema hall is described, focusing on related safety risks. Fire properties of the flammable materials used in the simulation were determined by laboratory measurements and validated by fire tests and computer simulations.

  11. Modeling of a Sequential Two-Stage Combustor

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.; Liu, N.-S.; Gallagher, J. R.; Ryder, R. C.; Brankovic, A.; Hendricks, J. A.

    2005-01-01

    A sequential two-stage, natural gas fueled power generation combustion system is modeled to examine the fundamental aerodynamic and combustion characteristics of the system. The modeling methodology includes CAD-based geometry definition, and combustion computational fluid dynamics analysis. Graphical analysis is used to examine the complex vortical patterns in each component, identifying sources of pressure loss. The simulations demonstrate the importance of including the rotating high-pressure turbine blades in the computation, as this results in direct computation of combustion within the first turbine stage, and accurate simulation of the flow in the second combustion stage. The direct computation of hot-streaks through the rotating high-pressure turbine stage leads to improved understanding of the aerodynamic relationships between the primary and secondary combustors and the turbomachinery.

  12. Model falsifiability and climate slow modes

    NASA Astrophysics Data System (ADS)

    Essex, Christopher; Tsonis, Anastasios A.

    2018-07-01

    The most advanced climate models are actually modified meteorological models attempting to capture climate in meteorological terms. This seems a straightforward matter of raw computing power applied to large enough sources of current data. Some believe that models have succeeded in capturing climate in this manner. But have they? This paper outlines difficulties with this picture that derive from the finite representation of our computers, and the fundamental unavailability of future data instead. It suggests that alternative windows onto the multi-decadal timescales are necessary in order to overcome the issues raised for practical problems of prediction.

  13. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53 minute long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
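
    The two runtimes quoted in the abstract allow a quick check of the inverse power model T(n) = T_1 * n^(-alpha) (our arithmetic, using only the endpoints reported above):

    ```python
    import math

    T1, T20, n = 53.0, 3.11, 20            # minutes, minutes, nodes
    alpha = math.log(T1 / T20) / math.log(n)
    print(alpha)                            # ~0.95: close to ideal linear speedup
    print(T1 * 8 ** -alpha)                 # predicted runtime on 8 nodes (min)
    ```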

  14. Eco-Evo PVAs: Incorporating Eco-Evolutionary Processes into Population Viability Models

    EPA Science Inventory

    We synthesize how advances in computational methods and population genomics can be combined within an Ecological-Evolutionary (Eco-Evo) PVA model. Eco-Evo PVA models are powerful new tools for understanding the influence of evolutionary processes on plant and animal population pe...

  15. A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power, limiting its application. The study objective was to develop and eval...

  16. Cloud Computing for radiologists.

    PubMed

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  17. Cloud Computing for radiologists

    PubMed Central

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future. PMID:23599560

  18. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
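
    As a toy version of the experiment described above, the sketch below integrates the single-scale Lorenz '96 system twice: once in full double precision, and once with every tendency evaluation quantized to half precision as a stand-in for an emulated stochastic or low-precision processor. This is our simplification; the paper uses a bit-flip fault model and restricts inexactness to the small scales.

    ```python
    import numpy as np

    F, N = 8.0, 40              # forcing and number of Lorenz '96 variables

    def tendency(x):
        """dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F (cyclic indices)."""
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def step(x, dt=0.01, low_precision=False):
        """One RK4 step; optionally round every tendency to float16 to
        emulate inexact hardware."""
        cast = ((lambda v: v.astype(np.float16).astype(np.float64))
                if low_precision else (lambda v: v))
        k1 = cast(tendency(x))
        k2 = cast(tendency(x + 0.5 * dt * k1))
        k3 = cast(tendency(x + 0.5 * dt * k2))
        k4 = cast(tendency(x + dt * k3))
        return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    rng = np.random.default_rng(1)
    x_hi = rng.standard_normal(N)
    x_lo = x_hi.copy()
    for _ in range(2000):
        x_hi, x_lo = step(x_hi), step(x_lo, low_precision=True)
    print(np.mean(x_hi), np.mean(x_lo))   # climate-like statistics stay close
    ```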

  19. Work and power analysis of the golf swing.

    PubMed

    Nesbit, Steven M; Serrano, Monika

    2005-12-01

    A work and power (energy) analysis of the golf swing is presented as a method for evaluating the mechanics of the golf swing. Two computer models were used to estimate the energy production, transfers, and conversions within the body and the golf club by employing standard methods of mechanics to calculate work of forces and torques, kinetic energies, strain energies, and power during the golf swing. A detailed model of the golf club determined the energy transfers and conversions within the club during the downswing. A full-body computer model of the golfer determined the internal work produced at the body joints during the downswing. Four diverse amateur subjects were analyzed and compared using these two models. The energy approach yielded new information on swing mechanics, determined the force and torque components that accelerated the club, illustrated which segments of the body produced work, determined the timing of internal work generation, measured swing efficiencies, calculated shaft energy storage and release, and proved that forces and range of motion were equally important in developing club head velocity. A more comprehensive description of the downswing emerged from information derived from an energy based analysis. Key Points: Full-body model of the golf swing. Energy analysis of the golf swing. Work of the body joints during the golf swing. Comparisons of subject work and power characteristics.

  20. Work and Power Analysis of the Golf Swing

    PubMed Central

    Nesbit, Steven M.; Serrano, Monika

    2005-01-01

    A work and power (energy) analysis of the golf swing is presented as a method for evaluating the mechanics of the golf swing. Two computer models were used to estimate the energy production, transfers, and conversions within the body and the golf club by employing standard methods of mechanics to calculate work of forces and torques, kinetic energies, strain energies, and power during the golf swing. A detailed model of the golf club determined the energy transfers and conversions within the club during the downswing. A full-body computer model of the golfer determined the internal work produced at the body joints during the downswing. Four diverse amateur subjects were analyzed and compared using these two models. The energy approach yielded new information on swing mechanics, determined the force and torque components that accelerated the club, illustrated which segments of the body produced work, determined the timing of internal work generation, measured swing efficiencies, calculated shaft energy storage and release, and proved that forces and range of motion were equally important in developing club head velocity. A more comprehensive description of the downswing emerged from information derived from an energy based analysis. Key Points: Full-body model of the golf swing. Energy analysis of the golf swing. Work of the body joints during the golf swing. Comparisons of subject work and power characteristics. PMID:24627666

  1. Note: The full function test explosive generator.

    PubMed

    Reisman, D B; Javedani, J B; Griffith, L V; Ellsworth, G F; Kuklo, R M; Goerz, D A; White, A D; Tallerico, L J; Gidding, D A; Murphy, M J; Chase, J B

    2010-03-01

    We have conducted three tests of a new pulsed power device called the full function test. These tests represented the culmination of an effort to establish a high energy pulsed power capability based on high explosive pulsed power (HEPP) technology. This involved an extensive computational modeling, engineering, fabrication, and fielding effort. The experiments were highly successful and a new U.S. record for magnetic energy was obtained.

  2. Prediction of the effects of propeller operation on the static longitudinal stability of single-engine tractor monoplanes with flaps retracted

    NASA Technical Reports Server (NTRS)

    Weil, Joseph; Sleeman, William C., Jr.

    1949-01-01

    The effects of propeller operation on the static longitudinal stability of single-engine tractor monoplanes are analyzed, and a simple method is presented for computing power-on pitching-moment curves for flap-retracted flight conditions. The methods evolved are based on the results of powered-model wind-tunnel investigations of 28 model configurations. Correlation curves are presented from which the effects of power on the downwash over the tail and the stabilizer effectiveness can be rapidly predicted. The procedures developed enable prediction of power-on longitudinal stability characteristics that are generally in very good agreement with experiment.

  3. Quantum Walk Schemes for Universal Quantum Computation

    NASA Astrophysics Data System (ADS)

    Underwood, Michael S.

    Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.
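
    Perfect state transfer, the primitive the discontinuous walk relies on, is easy to verify numerically for the smallest example: a continuous-time quantum walk on a single edge transfers the walker's amplitude completely at t = pi/2 (a standard result; the sketch is ours, not the thesis code).

    ```python
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])      # adjacency matrix of K2 (one edge)

    t = np.pi / 2.0
    U = expm(-1j * A * t)           # continuous-time walk: U(t) = e^{-iAt}
    psi0 = np.array([1.0, 0.0])     # walker starts on vertex 0
    psi_t = U @ psi0

    print(np.abs(psi_t) ** 2)       # [0, 1]: perfect transfer to vertex 1
    ```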

  4. Approaches in highly parameterized inversion-PESTCommander, a graphical user interface for file and run management across networks

    USGS Publications Warehouse

    Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.

    2012-01-01

    Models of environmental systems have become increasingly complex, incorporating increasingly large numbers of parameters in an effort to represent physical processes on a scale approaching that at which they occur in nature. Consequently, the inverse problem of parameter estimation (specifically, model calibration) and subsequent uncertainty analysis have become increasingly computation-intensive endeavors. Fortunately, advances in computing have made computational power equivalent to that of dozens to hundreds of desktop computers accessible through a variety of alternate means: modelers have various possibilities, ranging from traditional Local Area Networks (LANs) to cloud computing. Commonly used parameter estimation software is well suited to take advantage of the availability of such increased computing power. Unfortunately, logistical issues become increasingly important as an increasing number and variety of computers are brought to bear on the inverse problem. To facilitate efficient access to disparate computer resources, the PESTCommander program documented herein has been developed to provide a Graphical User Interface (GUI) that facilitates the management of model files ("file management") and remote launching and termination of "slave" computers across a distributed network of computers ("run management"). In version 1.0 described here, PESTCommander can access and ascertain resources across traditional Windows LANs; however, the architecture of PESTCommander has been developed with the intent that future releases will be able to access computing resources (1) via trusted domains established in Wide Area Networks (WANs) in multiple remote locations and (2) via heterogeneous networks of Windows- and Unix-based operating systems. The design of PESTCommander also makes it suitable for extension to other computational resources, such as those that are available via cloud computing. Version 1.0 of PESTCommander was developed primarily to work with the parameter estimation software PEST; the discussion presented in this report focuses on the use of PESTCommander together with Parallel PEST. However, PESTCommander can be used with a wide variety of programs and models that require management, distribution, and cleanup of files before or after model execution. In addition to its use with the Parallel PEST program suite, discussion is also included in this report regarding the use of PESTCommander with the Global Run Manager GENIE, which was developed simultaneously with PESTCommander.

  5. Computing with motile bio-agents

    NASA Astrophysics Data System (ADS)

    Nicolau, Dan V., Jr.; Burrage, Kevin; Nicolau, Dan V.

    2007-12-01

    We describe a model of computation of the parallel type, which we call 'computing with bio-agents', based on the concept that motions of biological objects such as bacteria or protein molecular motors in confined spaces can be regarded as computations. We begin with the observation that the geometric nature of the physical structures in which model biological objects move modulates the motions of the latter. Consequently, by changing the geometry, one can control the characteristic trajectories of the objects; on the basis of this, we argue that such systems are computing devices. We investigate the computing power of mobile bio-agent systems and show that they are computationally universal in the sense that they are capable of computing any Boolean function in parallel. We argue also that using appropriate conditions, bio-agent systems can solve NP-complete problems in probabilistic polynomial time.

  6. Heterogeneous concurrent computing with exportable services

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy

    1995-01-01

    Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data-driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experience has demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.

  7. Computational modelling of oxygenation processes in enzymes and biomimetic model complexes.

    PubMed

    de Visser, Sam P; Quesne, Matthew G; Martin, Bodo; Comba, Peter; Ryde, Ulf

    2014-01-11

    With computational resources becoming more efficient, more powerful, and at the same time cheaper, computational methods have become more and more popular for studies on biochemical and biomimetic systems. Although large efforts from the scientific community have gone into exploring the possibilities of computational methods for studies on large biochemical systems, such studies are not without pitfalls and often cannot be routinely done but require expert execution. In this review we summarize and highlight advances in computational methodology and its application to enzymatic and biomimetic model complexes. In particular, we emphasize topical and state-of-the-art methodologies that are able either to accurately reproduce experimental findings, e.g., spectroscopic parameters and rate constants, or to make predictions of short-lived intermediates and fast reaction processes in nature. Moreover, we give examples of processes where certain computational methods dramatically fail.

  8. Analysis about modeling MEC7000 excitation system of nuclear power unit

    NASA Astrophysics Data System (ADS)

    Liu, Guangshi; Sun, Zhiyuan; Dou, Qian; Liu, Mosi; Zhang, Yihui; Wang, Xiaoming

    2018-02-01

    Motivated by the importance of accurate excitation-system modeling in stability calculations for inland nuclear power plants, and by the lack of research on modeling the MEC7000 excitation system, this paper summarizes a general method for modeling and simulating the MEC7000 excitation system. The method also resolves two key issues: the computation of I/O interface parameters and the conversion of the measured excitation-system model into a BPA simulation model. The simulation model of the MEC7000 excitation system is thereby completed for the first time domestically. A no-load small-disturbance check demonstrates that the proposed model and algorithm are correct and efficient.

  9. A large-eddy simulation based power estimation capability for wind farms over complex terrain

    NASA Astrophysics Data System (ADS)

    Senocak, I.; Sandusky, M.; Deleon, R.

    2017-12-01

    There has been an increasing interest in predicting wind fields over complex terrain at the micro-scale for resource assessment, turbine siting, and power forecasting. These capabilities are made possible by advancements in computational speed from a new generation of computing hardware, numerical methods, and physics modelling. The micro-scale wind prediction model presented in this work is based on the large-eddy simulation paradigm with surface-stress parameterization. The complex terrain is represented using an immersed-boundary method that takes into account the parameterization of the surface stresses. Governing equations of incompressible fluid flow are solved using a projection method with second-order accurate schemes in space and time. We use actuator disk models with rotation to simulate the influence of turbines on the wind field. Data regarding power production from individual turbines are mostly restricted because of the proprietary nature of the wind energy business, and most studies report the percentage drop of power relative to power from the first row. There have been different approaches to predicting power production: some studies simply report the available wind power upstream, some estimate power production using power curves available from turbine manufacturers, and some estimate power as torque multiplied by rotational speed. In the present work, we propose a black-box approach that considers a control volume around a turbine and estimates the power extracted from the turbine based on the conservation-of-energy principle. We applied our wind power prediction capability to wind farms over flat terrain, such as the wind farm in Mower County, Minnesota, and the Horns Rev offshore wind farm in Denmark; the results from these simulations are in good agreement with published data. We also estimate power production from a hypothetical wind farm in a complex-terrain region and identify potential zones suitable for wind power production.
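
    To make the control-volume idea concrete, the sketch below applies a conservation-of-energy balance to estimate extracted power from the drop in kinetic-energy flux across a turbine. It is a minimal illustration of the general principle only, not the authors' implementation; the density, velocities, rotor area, and the uniform-velocity-profile assumption are all invented for this example.

      # Hypothetical control-volume energy balance around a single turbine.
      rho = 1.225                 # air density (kg/m^3), assumed
      area = 5027.0               # swept area of a ~80 m diameter rotor (m^2), assumed
      u_in, u_out = 10.0, 7.0     # mean streamwise velocity entering/leaving (m/s), assumed

      m_dot = rho * area * 0.5 * (u_in + u_out)   # mass flux through the volume (kg/s)
      power = 0.5 * m_dot * (u_in**2 - u_out**2)  # rate of kinetic-energy extraction (W)
      print(f"estimated extracted power: {power / 1e6:.2f} MW")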

  10. Computer simulations of optimum boost and buck-boost converters

    NASA Technical Reports Server (NTRS)

    Rahman, S.

    1982-01-01

    The development of mathematical models suitable for minimum-weight boost and buck-boost converter designs is presented. The utility of an augmented Lagrangian (ALAG) multiplier-based nonlinear programming technique is demonstrated for minimum-weight design optimization of boost and buck-boost power converters. ALAG-based computer simulation results for those two minimum-weight designs are discussed. Certain important features of ALAG are presented in the framework of a comprehensive design example for boost and buck-boost power converter design optimization. The study provides fresh design insight into power converters and presents such information as weight and loss profiles of various semiconductor components and magnetics as a function of the switching frequency.
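
    To show the shape of an augmented-Lagrangian iteration of the general kind ALAG performs, the toy loop below minimizes a stand-in objective subject to a single equality constraint (assuming NumPy and SciPy are available). The objective, constraint, and penalty settings are hypothetical and unrelated to the paper's converter weight models.

      # Minimal augmented-Lagrangian sketch: minimize f(x) subject to g(x) = 0.
      import numpy as np
      from scipy.optimize import minimize

      def f(x):                        # stand-in "weight" objective
          return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

      def g(x):                        # stand-in equality constraint
          return x[0] + x[1] - 3.0

      lam, mu = 0.0, 10.0              # multiplier estimate and penalty weight
      x = np.array([0.0, 0.0])
      for _ in range(20):
          aug = lambda z: f(z) + lam * g(z) + 0.5 * mu * g(z)**2
          x = minimize(aug, x).x       # inner unconstrained minimization
          lam += mu * g(x)             # first-order multiplier update
      print(x, g(x))                   # x approaches the constrained minimum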

  11. Application of multivariate autoregressive spectrum estimation to ULF waves

    NASA Technical Reports Server (NTRS)

    Ioannidis, G. A.

    1975-01-01

    The estimation of the power spectrum of a time series by fitting a finite autoregressive model to the data has recently found widespread application in the physical sciences. The extension of this method to the analysis of vector time series is presented here through its application to ULF waves observed in the magnetosphere by the ATS 6 synchronous satellite. Autoregressive spectral estimates of the power and cross-power spectra of these waves are computed with computer programs developed by the author and are compared with the corresponding Blackman-Tukey spectral estimates. The resulting spectral density matrices are then analyzed to determine the direction of propagation and polarization of the observed waves.
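
    A univariate sketch of the autoregressive spectral-estimation idea follows; the multivariate method of the paper generalizes these scalars to matrices. The test signal, the model order, and the use of the Yule-Walker equations are illustrative assumptions, not the author's programs.

      # Fit an AR(p) model by Yule-Walker and evaluate its power spectrum.
      import numpy as np

      rng = np.random.default_rng(0)
      n, p = 2048, 8
      t = np.arange(n)
      x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(n)  # toy wave record

      # Autocovariances r[0..p], then AR coefficients from R a = r.
      r = np.array([x[: n - k] @ x[k:] / n for k in range(p + 1)])
      R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
      a = np.linalg.solve(R, r[1:])
      sigma2 = r[0] - a @ r[1:]        # innovation variance

      # AR spectrum: sigma^2 / |1 - sum_k a_k exp(-2*pi*i*f*k)|^2
      f = np.linspace(0.0, 0.5, 512)
      E = np.exp(-2j * np.pi * np.outer(f, np.arange(1, p + 1)))
      psd = sigma2 / np.abs(1.0 - E @ a) ** 2
      print(f"spectral peak near f = {f[np.argmax(psd)]:.3f}")   # expect ~0.100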

  12. The generative power of weighted one-sided and regular sticker systems

    NASA Astrophysics Data System (ADS)

    Siang, Gan Yee; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod

    2014-06-01

    Sticker systems were introduced in 1998 as one of the DNA computing models, using the recombination behavior of DNA molecules. The Watson-Crick complementarity principle of DNA molecules is abstracted in sticker systems to perform computation. In this paper, the generative power of weighted one-sided sticker systems and weighted regular sticker systems is investigated. Moreover, the relationship of the families of languages generated by these two variants of sticker systems to the Chomsky hierarchy is also presented.

  13. Optical computing research

    NASA Astrophysics Data System (ADS)

    Goodman, Joseph W.

    1987-10-01

    Work accomplished: OPTICAL INTERCONNECTIONS - the powerful interconnection abilities of optical beams have led to much optimism about possible roles for optics in solving interconnect problems at various levels of computer architecture. The power requirements of optical interconnects at the gate-to-gate and chip-to-chip levels were examined. OPTICAL NEURAL NETWORKS - basic studies of the convergence properties of the Hopfield model, based on a mathematical approach (graph theory). OPTICS AND ARTIFICIAL INTELLIGENCE - a review of the field of optical processing and artificial intelligence, with the aim of finding areas that might be particularly attractive for future investigation.

  14. Bootstrapping in a language of thought: a formal model of numerical concept learning.

    PubMed

    Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D

    2012-05-01

    In acquiring number words, children exhibit a qualitative leap in which they transition from understanding a few number words, to possessing a rich system of interrelated numerical concepts. We present a computational framework for understanding this inductive leap as the consequence of statistical inference over a sufficiently powerful representational system. We provide an implemented model that is powerful enough to learn number word meanings and other related conceptual systems from naturalistic data. The model shows that bootstrapping can be made computationally and philosophically well-founded as a theory of number learning. Our approach demonstrates how learners may combine core cognitive operations to build sophisticated representations during the course of development, and how this process explains observed developmental patterns in number word learning. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Electromagnetic Modeling of Human Body Using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Ng, Cho-Kuen; Beall, Mark; Ge, Lixin; Kim, Sanghoek; Klaas, Ottmar; Poon, Ada

    Realistic simulation of electromagnetic wave propagation in the actual human body can expedite the investigation of implanted devices harvesting wireless power coupled from external sources. The parallel electromagnetics code suite ACE3P, developed at SLAC National Accelerator Laboratory, is based on the finite element method for high-fidelity accelerator simulation and can be enhanced to model electromagnetic wave propagation in the human body. Starting with a CAD model of a human phantom that is characterized by a number of tissues, a finite element mesh representing the complex geometries of the individual tissues is built for simulation. Employing an optimal power source with a specific pattern of field distribution, the propagation and focusing of electromagnetic waves in the phantom have been demonstrated. Substantial speedup of the simulation is achieved by using multiple compute cores on supercomputers.

  16. Ocean power technology design optimization

    DOE PAGES

    van Rij, Jennifer; Yu, Yi -Hsiang; Edwards, Kathleen; ...

    2017-07-18

    For this study, the National Renewable Energy Laboratory (NREL) and Ocean Power Technologies (OPT) conducted a collaborative code validation and design optimization study for OPT's PowerBuoy wave energy converter (WEC). NREL utilized WEC-Sim, an open-source WEC simulator, to compare four design variations of OPT's PowerBuoy. As an input to the WEC-Sim models, viscous drag coefficients for the PowerBuoy floats were first evaluated using computational fluid dynamics. The resulting WEC-Sim PowerBuoy models were then validated with experimental power output and fatigue load data provided by OPT. The validated WEC-Sim models were then used to simulate the power performance and loads for operational conditions, extreme conditions, and directional waves, for each of the four PowerBuoy design variations, assuming the wave environment of Humboldt Bay, California. Finally, ratios of power-to-weight, power-to-fatigue-load, power-to-maximum-extreme-load, power-to-water-plane-area, and power-to-wetted-surface-area were used to make a final comparison of the potential PowerBuoy WEC designs. The design comparison methodologies developed and presented in this study are applicable to other WEC devices and may be useful as a framework for future WEC design development projects.

  17. Ocean power technology design optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Rij, Jennifer; Yu, Yi -Hsiang; Edwards, Kathleen

    For this study, the National Renewable Energy Laboratory (NREL) and Ocean Power Technologies (OPT) conducted a collaborative code validation and design optimization study for OPT's PowerBuoy wave energy converter (WEC). NREL utilized WEC-Sim, an open-source WEC simulator, to compare four design variations of OPT's PowerBuoy. As an input to the WEC-Sim models, viscous drag coefficients for the PowerBuoy floats were first evaluated using computational fluid dynamics. The resulting WEC-Sim PowerBuoy models were then validated with experimental power output and fatigue load data provided by OPT. The validated WEC-Sim models were then used to simulate the power performance and loads for operational conditions, extreme conditions, and directional waves, for each of the four PowerBuoy design variations, assuming the wave environment of Humboldt Bay, California. Finally, ratios of power-to-weight, power-to-fatigue-load, power-to-maximum-extreme-load, power-to-water-plane-area, and power-to-wetted-surface-area were used to make a final comparison of the potential PowerBuoy WEC designs. The design comparison methodologies developed and presented in this study are applicable to other WEC devices and may be useful as a framework for future WEC design development projects.

  18. Profiling an application for power consumption during execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-08-21

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
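
    One schematic reading of how a hardware profile might be combined with an application's behavior is sketched below; the operation names, wattages, and time fractions are invented for illustration, and the patent itself does not specify this arithmetic.

      # Hypothetical per-state power profile (W) and an application's time mix.
      hardware_profile = {"compute": 55.0, "memory": 48.0, "network": 42.0, "idle": 30.0}
      app_time_mix = {"compute": 0.50, "memory": 0.25, "network": 0.10, "idle": 0.15}

      # Expected draw: time-weighted average of the per-operation power levels.
      expected_power = sum(app_time_mix[s] * hardware_profile[s] for s in app_time_mix)
      print(f"application power profile: {expected_power:.1f} W expected draw")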

  19. Underwater Stirling engine design with modified one-dimensional model

    NASA Astrophysics Data System (ADS)

    Li, Daijin; Qin, Kan; Luo, Kai

    2015-09-01

    Stirling engines are regarded as an efficient and promising power system for underwater devices. Currently, one-dimensional models are widely used to evaluate the thermodynamic performance of Stirling engines, but some aspects, such as mechanical loss and auxiliary power, still lack proper mathematical models. In this paper, a four-cylinder double-acting Stirling engine for Unmanned Underwater Vehicles (UUVs) is discussed, and a one-dimensional model incorporating empirical equations for mechanical loss and auxiliary power obtained from experiments is derived with reference to the Stirling engine computer model of the National Aeronautics and Space Administration (NASA). The P-40 Stirling engine, for which sufficient NASA test results are available, is used to validate the accuracy of this one-dimensional model. The maximum error of the predicted output power is less than 18% relative to the test results, and the maximum error of the input power is no more than 9%. Finally, a Stirling engine for UUVs is designed with the Schmidt analysis method and the modified one-dimensional model, and the results indicate that the designed engine is capable of delivering the desired output power.
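
    For orientation, the fragment below integrates a bare isothermal (Schmidt-type) cycle: sinusoidal volume variations, ideal gas, no losses. This is the kind of baseline that the paper's modified one-dimensional model augments with empirical loss terms; all dimensions, temperatures, speeds, and the working fluid here are placeholders, not the P-40 or UUV engine parameters.

      # Isothermal Schmidt-style cycle integration (all values illustrative).
      import numpy as np

      Rgas, M = 287.0, 0.01        # gas constant (J/kg/K) and charge mass (kg); air stand-in
      Te, Tc = 900.0, 330.0        # expansion / compression space temperatures (K)
      Vswe, Vswc, Vd = 1e-4, 1e-4, 5e-5   # swept volumes and dead volume (m^3)
      alpha = np.pi / 2            # phase angle between the two volume variations

      theta = np.linspace(0.0, 2.0 * np.pi, 3601)
      Ve = 0.5 * Vswe * (1.0 + np.cos(theta))          # expansion-space volume
      Vc = 0.5 * Vswc * (1.0 + np.cos(theta - alpha))  # compression-space volume
      p = M * Rgas / (Ve / Te + Vc / Tc + Vd / (0.5 * (Te + Tc)))  # isothermal pressure

      V = Ve + Vc + Vd
      w_cycle = abs(np.sum(0.5 * (p[:-1] + p[1:]) * np.diff(V)))   # p-dV loop area (J)
      freq = 25.0                                                  # engine speed (Hz)
      print(f"indicated power ~ {w_cycle * freq:.0f} W")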

  20. Impact of office productivity cloud computing on energy consumption and greenhouse gas emissions.

    PubMed

    Williams, Daniel R; Tang, Yinshan

    2013-05-07

    Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud computing Office 365 (O365) and traditional Office 2010 (O2010) software suites were tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be directly measured at the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts: the power consumption of the cloud-based Outlook and Excel was 8% and 17% lower, respectively, than their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third, mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG. Direct conversion from the standalone package to the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage using the methods described in this research.

  1. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods, and they suggest that scale-free network models can be used to estimate aggregate electric grid reliability.
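
    A toy version of such an experiment can be run in a few lines: grow a Barabasi-Albert graph, trigger a random outage, propagate failures to neighbors with a fixed probability, and average the surviving fraction over trials (assuming the networkx package is available). The propagation rule and parameters below are illustrative assumptions, not the paper's calibrated failure model or reliability index.

      # Toy scale-free failure-propagation experiment.
      import random
      import networkx as nx

      def surviving_fraction(n=1000, m=2, p_spread=0.25, seed=1):
          rng = random.Random(seed)
          g = nx.barabasi_albert_graph(n, m, seed=seed)
          failed = {rng.randrange(n)}           # initial random outage
          frontier = set(failed)
          while frontier:                       # cascade: failures spread to neighbors
              nxt = set()
              for node in frontier:
                  for nb in g.neighbors(node):
                      if nb not in failed and rng.random() < p_spread:
                          nxt.add(nb)
              failed |= nxt
              frontier = nxt
          return 1 - len(failed) / n

      # Average surviving fraction over 20 random cascades.
      print(sum(surviving_fraction(seed=s) for s in range(20)) / 20)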

  2. Computing Linear Mathematical Models Of Aircraft

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.

    1991-01-01

    Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides user with powerful, flexible, standard, documented, and verified software tool for linearization of mathematical models of aerodynamics of aircraft. Intended for use as software tool to drive linear analysis of stability and design of control laws for aircraft. Capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in linear model of system. Designed to provide easy selection of state, control, and observation variables used in particular model. Also provides flexibility of allowing alternate formulations of both state and observation equations. Written in FORTRAN.

  3. Adaptive control method for core power control in TRIGA Mark II reactor

    NASA Astrophysics Data System (ADS)

    Sabri Minhat, Mohd; Selamat, Hazlina; Subha, Nurul Adilla Mohd

    2018-01-01

    The 1MWth Reactor TRIGA PUSPATI (RTP) Mark II type has undergone more than 35 years of operation. The existing core power control uses a feedback control algorithm (FCA). Given the sensitivity of nuclear research reactor operation, it is challenging to keep the core power stable at the desired value within the acceptable error bands demanded by RTP safety, and the current power-tracking performance is unsatisfactory and can be improved. A new core power control design is therefore important for improving tracking and for regulating reactor power through the movement of the control rods. In this paper, adaptive controllers, specifically Model Reference Adaptive Control (MRAC) and Self-Tuning Control (STC), are applied to core power control. The core power control model is based on mathematical models of the reactor core, an adaptive controller model, and control-rod selection programming. The reactor core models comprise a point kinetics model, thermal-hydraulic models, and reactivity models. The MRAC design uses the Lyapunov method to ensure a stable closed-loop system, while the STC Generalised Minimum Variance (GMV) controller does not require knowledge of the exact plant transfer function. The performance of the proposed adaptive controllers and the FCA is compared via computer simulation, and the simulation results demonstrate the effectiveness and good performance of the proposed control method for core power control.
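
    The adaptation idea behind MRAC can be shown with a textbook MIT-rule loop on a first-order plant, which is far simpler than reactor point kinetics; the plant, reference model, and gains below are illustrative assumptions, not the RTP model.

      # MIT-rule MRAC sketch: adapt a feedforward gain so the plant tracks a
      # reference model (plant pole assumed known, gain b unknown).
      dt, t_end = 0.01, 40.0
      a, b = 2.0, 0.5      # plant: dy/dt = -a*y + b*u, with b treated as unknown
      am, bm = 2.0, 2.0    # reference model: dym/dt = -am*ym + bm*r
      gamma = 2.0          # adaptation gain
      y = ym = theta = 0.0

      for k in range(int(t_end / dt)):
          r = 1.0 if (k * dt) % 10.0 < 5.0 else -1.0   # square-wave setpoint
          u = theta * r                                # adjustable feedforward gain
          y += dt * (-a * y + b * u)                   # plant step (forward Euler)
          ym += dt * (-am * ym + bm * r)               # reference-model step
          e = y - ym
          theta += dt * (-gamma * e * ym)              # MIT-rule gradient update

      print(f"adapted gain theta = {theta:.2f} (ideal bm/b = {bm / b:.2f})")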

  4. Battery-Charge-State Model

    NASA Technical Reports Server (NTRS)

    Vivian, H. C.

    1985-01-01

    Charge-state model for lead/acid batteries proposed as part of effort to make equivalent of fuel gage for battery-powered vehicles. Model based on equations that approximate observable characteristics of battery electrochemistry. Uses linear equations, easier to simulate on computer, and gives smooth transitions between charge, discharge, and recuperation.

  5. Evaluation of a stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Hydrologic models are essential tools for environmental assessment of agricultural non-point source pollution. The automatic calibration of hydrologic models, though efficient, demands significant computational power, which can limit its application. The study objective was to investigate a cost e...

  6. Water and Power Systems Co-optimization under a High Performance Computing Framework

    NASA Astrophysics Data System (ADS)

    Xuan, Y.; Arumugam, S.; DeCarolis, J.; Mahinthakumar, K.

    2016-12-01

    Water and energy systems optimization has traditionally been treated as two separate processes, despite the systems' intrinsic interconnections (e.g., water is used for hydropower generation, and thermoelectric cooling requires a large amount of water withdrawal). Given the challenges of urbanization, technology uncertainty and resource constraints, and the imminent threat of climate change, a cyberinfrastructure is needed to facilitate and expedite research into the complex management of these two systems. To address these issues, we developed a High Performance Computing (HPC) framework for stochastic co-optimization of water and energy resources to inform water allocation and electricity demand. The project aims to improve conjunctive management of water and power systems under climate change by incorporating improved ensemble forecast models of streamflow and power demand. First, by downscaling and spatio-temporally disaggregating multimodel climate forecasts from General Circulation Models (GCMs), temperature and precipitation forecasts are obtained and input into multi-reservoir and power systems models. Extended from Optimus (Optimization Methods for Universal Simulators), the framework drives the multi-reservoir model and the power system model Temoa (Tools for Energy Model Optimization and Analysis), and uses the Particle Swarm Optimization (PSO) algorithm to solve high-dimensional stochastic problems. The utility of climate forecasts on the cost of water and power systems operations is assessed and quantified based on different forecast scenarios (i.e., no forecast, multimodel forecast, and perfect forecast). Analysis of risk management actions and renewable energy deployments will be investigated for the Catawba River basin, an area with adequate hydroclimate prediction skill and a critical basin with 11 reservoirs that supplies water and generates power for both North and South Carolina. Further research using this scalable decision support framework will provide understanding of, and elucidate, the intricate and interdependent relationship between water and energy systems and enhance the security of these two critical public infrastructures.
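
    As a sketch of the kind of PSO driver at the heart of such a framework, the snippet below minimizes a two-dimensional test function; the inertia and acceleration coefficients are common textbook defaults, and nothing here reproduces the reservoir or Temoa models.

      # Minimal particle swarm optimizer on the Rosenbrock test function.
      import numpy as np

      def pso(fun, dim=2, n=30, iters=200, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, (n, dim))      # particle positions
          v = np.zeros((n, dim))                # velocities
          pbest, pval = x.copy(), np.apply_along_axis(fun, 1, x)
          gbest = pbest[np.argmin(pval)]
          for _ in range(iters):
              r1, r2 = rng.random((2, n, dim))
              v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
              x = x + v
              val = np.apply_along_axis(fun, 1, x)
              improved = val < pval             # update personal and global bests
              pbest[improved], pval[improved] = x[improved], val[improved]
              gbest = pbest[np.argmin(pval)]
          return gbest, pval.min()

      rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
      print(pso(rosen))                         # should approach (1, 1)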

  7. GPU-based High-Performance Computing for Radiation Therapy

    PubMed Central

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B.

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this article, we first give a brief introduction to the GPU hardware structure and programming model. We then review the current applications of GPUs to the major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPUs with other platforms is also presented. PMID:24486639

  8. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C C

    The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National programs.

  9. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C. C.

    The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities for developing computer-based design, analysis, and tools for theory. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components, photonics, and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.

  10. Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Ashe, Thomas L.; Otting, William D.

    1993-01-01

    The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling, which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance in significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.
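
    A first-cut parametric estimate of recuperated CBC efficiency, of the kind such a design model builds upon, can be written in a few lines using ideal-gas relations with component efficiencies; the temperatures, pressure ratio, and efficiencies below are illustrative placeholders, not NEP design values.

      # Recuperated closed-Brayton-cycle efficiency sketch (per-unit-cp algebra).
      gamma = 1.667                  # monatomic working gas (e.g., He-Xe mix), assumed
      T_in, ratio = 1300.0, 2.0      # turbine inlet temperature (K), pressure ratio
      T_sink = 400.0                 # compressor inlet temperature (K)
      eta_c, eta_t, eps = 0.85, 0.90, 0.95   # compressor/turbine efficiency, recuperator effectiveness

      k = (gamma - 1.0) / gamma
      T2 = T_sink * (1.0 + (ratio**k - 1.0) / eta_c)   # compressor exit
      T5 = T_in * (1.0 - eta_t * (1.0 - ratio**-k))    # turbine exit
      T3 = T2 + eps * (T5 - T2)                        # recuperator exit, cold side

      w_net = (T_in - T5) - (T2 - T_sink)              # net specific work / cp (K)
      q_in = T_in - T3                                 # heat added / cp (K)
      print(f"cycle efficiency ~ {w_net / q_in:.3f}")  # ~0.43 for these inputs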

  11. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    PubMed Central

    Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.

    2016-01-01

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676
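
    For a flavor of the discrete stochastic engines behind such a GUI, a minimal Gillespie simulation of a toy birth-death gene-expression model is sketched below; the reactions and rate constants are invented and say nothing about StochSS internals.

      # Minimal Gillespie stochastic simulation: production at rate k,
      # degradation at rate g*x.
      import random

      def ssa(k=10.0, g=0.1, x0=0, t_end=100.0, seed=0):
          rng = random.Random(seed)
          t, x, path = 0.0, x0, [(0.0, x0)]
          while t < t_end:
              a1, a2 = k, g * x            # reaction propensities
              a0 = a1 + a2
              t += rng.expovariate(a0)     # exponential waiting time to next event
              x += 1 if rng.random() < a1 / a0 else -1
              path.append((t, x))
          return path

      print(ssa()[-1])   # final state fluctuates near the mean k/g = 100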

  12. Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist

    DOE PAGES

    Drawert, Brian; Hellander, Andreas; Bales, Ben; ...

    2016-12-08

    We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We also demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.

  13. Pressure Loss Predictions of the Reactor Simulator Subsystem at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.

    2016-01-01

    Testing of the Fission Power System (FPS) Technology Demonstration Unit (TDU) is being conducted at NASA Glenn Research Center. The TDU consists of three subsystems: the reactor simulator (RxSim), the Stirling Power Conversion Unit (PCU), and the heat exchanger manifold (HXM). An annular linear induction pump (ALIP) is used to drive the working fluid. A preliminary version of the TDU system (which excludes the PCU for now) is referred to as the "RxSim subsystem" and was used to conduct flow tests in Vacuum Facility 6 (VF 6). In parallel, a computational model of the RxSim subsystem was created based on the computer-aided-design (CAD) model and was used to predict loop pressure losses over a range of mass flows. This was done to assess the ability of the pump to meet the design-intent mass flow demand. Measured data indicate that the pump can produce 2.333 kg/sec of flow, which is enough to supply the RxSim subsystem with a nominal flow of 1.75 kg/sec. Computational predictions indicated that the pump could provide 2.157 kg/sec (using the Spalart-Allmaras (S-A) turbulence model) and 2.223 kg/sec (using the k-ε turbulence model). The computational error of the predictions for the available mass flow is -0.176 kg/sec (with the S-A turbulence model) and -0.110 kg/sec (with the k-ε turbulence model) when compared to measured data.

  14. Limits on efficient computation in the physical world

    NASA Astrophysics Data System (ADS)

    Aaronson, Scott Joel

    More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^{1/5}) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^{n/4}/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure/Shor separator"---a criterion that separates the already-verified quantum states from those that appear in Shor's factoring algorithm. I argue that such a separator should be based on a complexity classification of quantum states, and go on to create such a classification. Next I ask what happens to the quantum computing model if we take into account that the speed of light is finite---and in particular, whether Grover's algorithm still yields a quadratic speedup for searching a database. Refuting a claim by Benioff, I show that the surprising answer is yes. Finally, I analyze hypothetical models of computation that go even beyond quantum computing. I show that many such models would be as powerful as the complexity class PP, and use this fact to give a simple, quantum computing based proof that PP is closed under intersection. On the other hand, I also present one model---wherein we could sample the entire history of a hidden variable---that appears to be more powerful than standard quantum computing, but only slightly so.

  15. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.

  16. Transient Approximation of SAFE-100 Heat Pipe Operation

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Reid, Robert S.

    2005-01-01

    Engineers at Los Alamos National Laboratory (LANL) have designed several heat pipe cooled reactor concepts, ranging in power from 15 kWt to 800 kWt, for both surface power systems and nuclear electric propulsion systems. The Safe, Affordable Fission Engine (SAFE) is now being developed in a collaborative effort between LANL and NASA Marshall Space Flight Center (NASA/MSFC). NASA is responsible for fabrication and testing of non-nuclear, electrically heated modules in the Early Flight Fission Test Facility (EFF-TF) at MSFC. In-core heat pipes must be properly thawed as the reactor power starts. Computational models have been developed to assess the expected operation of a specific heat pipe design during start-up, steady state operation, and shutdown. While computationally intensive codes provide complete, detailed analyses of heat pipe thaw, a relatively simple, concise routine can also be applied to approximate the response of a heat pipe to changes in the evaporator heat transfer rate during start-up and power transients (e.g., modification of reactor power level) with reasonably accurate results. This paper describes a simplified model of heat pipe start-up that extends previous work and compares the results to experimental measurements for a SAFE-100 type heat pipe design.

  17. Prospects for Finite-Difference Time-Domain (FDTD) Computational Electrodynamics

    NASA Astrophysics Data System (ADS)

    Taflove, Allen

    2002-08-01

    FDTD is the most powerful numerical solution of Maxwell's equations for structures having internal details. Relative to moment-method and finite-element techniques, FDTD can accurately model such problems with 100-times more field unknowns and with nonlinear and/or time-variable parameters. Hundreds of FDTD theory and applications papers are published each year. Currently, there are at least 18 commercial FDTD software packages for solving problems in: defense (especially vulnerability to electromagnetic pulse and high-power microwaves); design of antennas and microwave devices/circuits; electromagnetic compatibility; bioelectromagnetics (especially assessment of cellphone-generated RF absorption in human tissues); signal integrity in computer interconnects; and design of micro-photonic devices (especially photonic bandgap waveguides, microcavities, and lasers). This paper explores emerging prospects for FDTD computational electromagnetics brought about by continuing advances in computer capabilities and FDTD algorithms. We conclude that advances already in place point toward the usage by 2015 of ultralarge-scale (up to 1E11 field unknowns) FDTD electromagnetic wave models covering the frequency range from about 0.1 Hz to 1E17 Hz. We expect that this will yield significant benefits for our society in areas as diverse as computing, telecommunications, defense, and public health and safety.
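
    The core update that scales to such problem sizes is compact. A minimal one-dimensional, free-space sketch with a soft Gaussian source is given below; the normalized units, reflecting (PEC) boundaries, and absence of materials or absorbing boundary conditions are simplifications of this illustration, not of production FDTD codes.

      # Minimal 1-D FDTD (Yee) leapfrog loop in normalized units.
      import numpy as np

      nx, nt = 400, 1000
      ez = np.zeros(nx)        # electric field on integer grid points
      hy = np.zeros(nx - 1)    # magnetic field, staggered half a cell
      S = 0.5                  # Courant number (c*dt/dx); stable for S <= 1

      for n in range(nt):
          hy += S * np.diff(ez)                    # update H from the curl of E
          ez[1:-1] += S * np.diff(hy)              # update E from the curl of H
          ez[nx // 2] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source

      print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")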

  18. Current Grid Generation Strategies and Future Requirements in Hypersonic Vehicle Design, Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)

    1998-01-01

    Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, the physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special topics in grid generation strategies are also addressed, including the modeling of control surface deflections and material mapping.

  19. Margin and sensitivity methods for security analysis of electric power systems

    NASA Astrophysics Data System (ADS)

    Greene, Scott L.

    Reliable operation of large-scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining the estimates suggest that the sensitivity computations will be of practical value.

  20. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-10

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.

  1. Computational fluid dynamics modeling of gas dispersion in multi impeller bioreactor.

    PubMed

    Ahmed, Syed Ubaid; Ranganathan, Panneerselvam; Pandey, Ashok; Sivaraman, Savithri

    2010-06-01

    In the present study, experiments have been carried out to identify various flow regimes in a dual Rushton-turbine stirred bioreactor for different gas flow rates and impeller speeds. Hydrodynamic parameters such as fractional gas hold-up, power consumption, and mixing time have been measured. A two-fluid model, along with the MUSIG model to handle polydispersed gas flow, has been implemented to predict the various flow regimes and hydrodynamic parameters in the dual-turbine stirred bioreactor. The computational model has been mapped onto the commercial solver ANSYS CFX. The flow regimes predicted by numerical simulations are validated with the experimental results; the present model successfully captures the flow regimes observed during experiments. The measured gross flow characteristics, such as fractional gas hold-up and mixing time, have been compared with numerical simulations. The effect of gas flow rate and impeller speed on gas hold-up and power consumption has also been investigated. (c) 2009 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  2. Power and energy computational models for the design and simulation of hybrid-electric combat vehicles

    NASA Astrophysics Data System (ADS)

    Smith, Wilford; Nunez, Patrick

    2005-05-01

    This paper describes the work being performed under the RDECOM Power and Energy (P&E) program (formerly the Combat Hybrid Power System (CHPS) program) to develop hybrid power system models and integrate them into larger simulations, such as OneSAF, that can be used to generate duty cycles for designers of hybrid power systems. This paper also describes efforts underway to link the TARDEC P&E System Integration Lab (SIL) in San Jose, CA to the TARDEC Ground Vehicle Simulation Lab (GVSL) in Warren, MI. This linkage is being established to provide a methodology for generating detailed driver profiles for use in the development of vignettes and mission profiles for system design excursions.

  3. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.

  4. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE PAGES

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    2017-01-31

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.

  5. Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor

    DTIC Science & Technology

    2010-01-01

    [Extraction fragment; only an acronym glossary and abstract excerpts are recoverable.] MCP = Maximum Continuous Power; MRP = Maximum Rated Power (take-off power); NDARC = NASA Design and Analysis of Rotorcraft; OEI = One Engine Inoperative; OGE = Out of Ground Effect; SFC = Specific Fuel Consumption; SNI = Simultaneous Non-Interfering approach; STOL = Short Takeoff and Landing; VTOL = Vertical Takeoff and Landing. Abstract excerpts: "...that are assembled into a complete aircraft model. NDARC is designed for high computational efficiency. Performance is calculated with physics-based..."

  6. Control aspects of the Schuchuli Village stand-alone photovoltaic power system

    NASA Astrophysics Data System (ADS)

    Groumpos, P. P.; Culler, J. E.; Delombard, R.

    1984-11-01

    A photovoltaic power system was installed in an Arizona Indian village, and the control subsystem of this photovoltaic power system was analyzed. The four major functions of the control subsystem are: (1) voltage regulation; (2) load management; (3) water pump control; and (4) system protection. The control subsystem functions, flowcharts of the control subsystem operation, and a computer program that models the control subsystem are presented.

  7. Control aspects of the Schuchuli Village stand-alone photovoltaic power system

    NASA Technical Reports Server (NTRS)

    Groumpos, P. P.; Culler, J. E.; Delombard, R.

    1984-01-01

    A photovoltaic power system was installed in an Arizona Indian village, and the control subsystem of this photovoltaic power system was analyzed. The four major functions of the control subsystem are: (1) voltage regulation; (2) load management; (3) water pump control; and (4) system protection. The control subsystem functions, flowcharts of the control subsystem operation, and a computer program that models the control subsystem are presented.

  8. The AgESGUI geospatial simulation system for environmental model application and evaluation

    USDA-ARS?s Scientific Manuscript database

    Practical decision making in spatially-distributed environmental assessment and management is increasingly being based on environmental process-based models linked to geographical information systems (GIS). Furthermore, powerful computers and Internet-accessible assessment tools are providing much g...

  9. Green Day? An Old Mill City Leads a New Revolution in Massachusetts

    ERIC Educational Resources Information Center

    Brown, Robert A.

    2012-01-01

    The Northeast United States just experienced one of the region's worst natural disasters. Fortunately, because of the confluence of modern computing power and scientific computing methods, weather forecasting models predicted Sandy's very complicated trajectory and development with a precision that would not have been possible even a decade ago.…

  10. The World as Viewed by and with Unpaired Electrons

    PubMed Central

    Eaton, Sandra S.; Eaton, Gareth R.

    2012-01-01

    Recent advances in electron paramagnetic resonance (EPR) include capabilities for applications to areas as diverse as archeology, beer shelf life, biological structure, dosimetry, in vivo imaging, molecular magnets, and quantum computing. Enabling technologies include multifrequency continuous wave, pulsed, and rapid scan EPR. Interpretation is enhanced by increasingly powerful computational models. PMID:22975244

  11. Fostering Recursive Thinking in Combinatorics through the Use of Manipulatives and Computing Technology.

    ERIC Educational Resources Information Center

    Abramovich, Sergei; Pieper, Anne

    1996-01-01

    Describes the use of manipulatives for solving simple combinatorial problems which can lead to the discovery of recurrence relations for permutations and combinations. Numerical evidence and visual imagery generated by a computer spreadsheet through modeling these relations can enable students to experience the ease and power of combinatorial…

  12. Computational Exploration of a Protein Receptor Binding Space with Student Proposed Peptide Ligands

    ERIC Educational Resources Information Center

    King, Matthew D.; Phillips, Paul; Turner, Matthew W.; Katz, Michael; Lew, Sarah; Bradburn, Sarah; Andersen, Tim; McDougal, Owen M.

    2016-01-01

    Computational molecular docking is a fast and effective "in silico" method for the analysis of binding between a protein receptor model and a ligand. The visualization and manipulation of protein to ligand binding in three-dimensional space represents a powerful tool in the biochemistry curriculum to enhance student learning. The…

  13. A new model predictive control algorithm by reducing the computing time of cost function minimization for NPC inverter in three-phase power grids.

    PubMed

    Taheri, Asghar; Zhalebaghi, Mohammad Hadi

    2017-11-01

    This paper presents a new control strategy based on finite-control-set model-predictive control (FCS-MPC) for neutral-point-clamped (NPC) three-level converters. Advantages such as fast dynamic response, easy inclusion of constraints, and a simple control loop make FCS-MPC attractive as a switching strategy for converters. However, the large amount of required computation has hindered the widespread adoption of this method. To resolve this problem, this paper presents a modified method that effectively reduces the computational load compared with the conventional FCS-MPC method while leaving control performance unaffected. The proposed method can be used to exchange power between the electrical grid and DC resources by providing active and reactive power compensation. Experiments on a three-level converter in three modes (Power Factor Correction (PFC), inductive compensation, and capacitive compensation) verify good and comparable performance. The results were simulated using MATLAB/Simulink software. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
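
    The enumerate-predict-select skeleton of FCS-MPC can be written in a few lines. The single-phase RL load below stands in for the NPC converter model, and all parameters are invented, so it shows the method's general shape rather than the paper's reduced-computation algorithm.

      # FCS-MPC skeleton: try every candidate output level, keep the cheapest.
      import numpy as np

      R, L, dt, Vdc = 2.0, 10e-3, 50e-6, 400.0
      levels = np.array([-Vdc / 2, 0.0, Vdc / 2])   # three-level output voltages

      i, i_ref = 0.0, 10.0
      for k in range(2000):
          # One-step-ahead current prediction for each candidate (forward Euler).
          i_pred = i + dt / L * (levels - R * i)
          cost = (i_pred - i_ref) ** 2              # reference-tracking cost
          v = levels[np.argmin(cost)]               # pick the best switching state
          i += dt / L * (v - R * i)                 # apply it to the "plant"

      print(f"current after {k + 1} steps: {i:.2f} A (ref {i_ref} A)")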

  14. Computer Analysis of Spectrum Anomaly in 32-GHz Traveling-Wave Tube for Cassini Mission

    NASA Technical Reports Server (NTRS)

    Dayton, James A., Jr.; Wilson, Jeffrey D.; Kory, Carol L.

    1999-01-01

    Computer modeling of the 32-GHz traveling-wave tube (TWT) for the Cassini Mission was conducted to explain the anomaly observed in the spectrum analysis of one of the flight-model tubes. The analysis indicated that the effect, manifested as a weak signal in the neighborhood of 35 GHz, was an intermodulation product of the 32-GHz drive signal with a 66.9-GHz oscillation induced by coupling to the second-harmonic signal. The oscillation occurred only at low radio-frequency (RF) drive power levels that are not expected during the Cassini Mission. The conclusion was that the anomaly was caused by a generic defect inadvertently incorporated in the geometric design of the slow-wave circuit and that it would not change as the TWT aged. The most probable effect of aging on tube performance would be a reduction in the electron beam current. The computer modeling indicated that although not likely to occur within the mission lifetime, a reduction in beam current would reduce or eliminate the anomaly but would do so at the cost of reduced RF output power.

  15. Modeling Cross-Situational Word-Referent Learning: Prior Questions

    ERIC Educational Resources Information Center

    Yu, Chen; Smith, Linda B.

    2012-01-01

    Both adults and young children possess powerful statistical computation capabilities--they can infer the referent of a word from highly ambiguous contexts involving many words and many referents by aggregating cross-situational statistical information across contexts. This ability has been explained by models of hypothesis testing and by models of…

  16. Chemistry-Climate Models of the Stratosphere

    NASA Technical Reports Server (NTRS)

    Austin, J.; Shindell, D.; Bruehl, C.; Dameris, M.; Manzini, E.; Nagashima, T.; Newman, P.; Pawson, S.; Pitari, G.; Rozanov, E.; hide

    2001-01-01

    Over the last decade, improved computer power has allowed three-dimensional models of the stratosphere to be developed that can be used to simulate polar ozone levels over long periods. This paper compares the meteorology between these models, and discusses the future of polar ozone levels over the next 50 years.

  17. Chromatin Computation

    PubMed Central

    Bryant, Barbara

    2012-01-01

    In living cells, DNA is packaged along with protein and RNA into chromatin. Chemical modifications to nucleotides and histone proteins are added, removed and recognized by multi-functional molecular complexes. Here I define a new computational model, in which chromatin modifications are information units that can be written onto a one-dimensional string of nucleosomes, analogous to the symbols written onto cells of a Turing machine tape, and chromatin-modifying complexes are modeled as read-write rules that operate on a finite set of adjacent nucleosomes. I illustrate the use of this “chromatin computer” to solve an instance of the Hamiltonian path problem. I prove that chromatin computers are computationally universal – and therefore more powerful than the logic circuits often used to model transcription factor control of gene expression. Features of biological chromatin provide a rich instruction set for efficient computation of nontrivial algorithms in biological time scales. Modeling chromatin as a computer shifts how we think about chromatin function, suggests new approaches to medical intervention, and lays the groundwork for the engineering of a new class of biological computing machines. PMID:22567109
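
    To make the read-write-rule idea concrete, the sketch below treats a nucleosome string as a list of mark sets and applies one rule to adjacent pairs per pass. The specific mark name and the spreading rule are invented for illustration; the paper's construction (and its Hamiltonian-path encoding) is far more elaborate.

    ```python
    # Toy sketch of chromatin read-write rules: each nucleosome is a set of
    # modification marks, and a rule rewrites marks on adjacent nucleosomes.
    def apply_rule(tape, rule):
        """One synchronous pass: read the old tape, write marks onto a copy."""
        left_req, right_req, left_out, right_out = rule
        old = [set(n) for n in tape]
        new = [set(n) for n in tape]
        for i in range(len(old) - 1):
            if left_req <= old[i] and right_req <= old[i + 1]:
                new[i] |= left_out
                new[i + 1] |= right_out
        return new

    # Propagate a hypothetical "H3K9me" mark rightward from a nucleation site.
    tape = [{"H3K9me"}, set(), set(), set()]
    spread = ({"H3K9me"}, set(), set(), {"H3K9me"})   # left marked => mark right
    for step in range(3):
        tape = apply_rule(tape, spread)
        print(step + 1, tape)   # the mark advances one nucleosome per pass
    ```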

  18. Uncertainty Modeling of Pollutant Transport in Atmosphere and Aquatic Route Using Soft Computing

    NASA Astrophysics Data System (ADS)

    Datta, D.

    2010-10-01

    Hazardous radionuclides are released as pollutants into the atmospheric and aquatic environment (ATAQE) during the normal operation of nuclear power plants. Atmospheric and aquatic dispersion models are routinely used to assess the impact on the ATAQE of radionuclide releases from any nuclear facility or of hazardous chemical releases from any chemical plant. The effect of exposure to the hazardous nuclides or chemicals is measured in terms of risk. Uncertainty modeling is an integral part of the risk assessment. The paper focuses on uncertainty modeling of pollutant transport in the atmospheric and aquatic environment using soft computing. Soft computing is adopted because of the lack of information on the parameters of the corresponding models. Soft computing in this domain basically means the use of fuzzy set theory to explore the uncertainty of the model parameters; this type of uncertainty is called epistemic uncertainty. Each uncertain input parameter of the model is described by a triangular membership function.
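
    A triangular membership function of the kind the abstract describes is simple to state in code. The sketch below is a minimal illustration with an invented dispersion-coefficient example ("about 0.3, somewhere between 0.1 and 0.6") and an alpha-cut helper for extracting intervals of a given plausibility; neither the numbers nor the helper names come from the paper.

    ```python
    def triangular(x, a, m, b):
        """Membership grade of x in the triangular fuzzy number (a, m, b)."""
        if x <= a or x >= b:
            return 0.0
        return (x - a) / (m - a) if x <= m else (b - x) / (b - m)

    def alpha_cut(alpha, a, m, b):
        """Interval of parameter values whose membership is at least alpha."""
        return (a + alpha * (m - a), b - alpha * (b - m))

    # Hypothetical dispersion coefficient: "about 0.3, between 0.1 and 0.6".
    print(triangular(0.25, 0.1, 0.3, 0.6))   # 0.75
    print(alpha_cut(0.5, 0.1, 0.3, 0.6))     # (0.2, 0.45)
    ```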

  19. Evaluation of Rankine cycle air conditioning system hardware by computer simulation

    NASA Technical Reports Server (NTRS)

    Healey, H. M.; Clark, D.

    1978-01-01

    A computer program for simulating the performance of a variety of solar-powered Rankine cycle air conditioning system (RCACS) components has been developed. The computer program models actual equipment by developing performance maps from manufacturers' data and is capable of simulating off-design operation of the RCACS components. The program, designed to be a subroutine of the Marshall Space Flight Center (MSFC) Solar Energy System Analysis Computer Program 'SOLRAD', is a complete package suitable for use by an occasional computer user in developing performance maps of heating, ventilation, and air conditioning components.

  20. An efficient computational method for characterizing the effects of random surface errors on the average power pattern of reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1983-01-01

    Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
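
    The abstract's starting point, Ruze's classical result, relates average gain loss to the surface rms expressed in wavelengths; the paper generalizes it to nonuniform rms and illumination. As a point of reference, the sketch below evaluates the standard uniform-error Ruze relation, G/G0 = exp(-(4*pi*rms/lambda)^2); the function name and the sample values are illustrative only.

    ```python
    # Classical Ruze gain-loss relation for uniform random surface error.
    import math

    def ruze_gain_loss_db(rms, wavelength):
        """Gain loss in dB from the uniform-error Ruze relation."""
        efficiency = math.exp(-(4.0 * math.pi * rms / wavelength) ** 2)
        return -10.0 * math.log10(efficiency)

    for rms_over_lambda in (0.01, 0.02, 0.05):
        print(rms_over_lambda, round(ruze_gain_loss_db(rms_over_lambda, 1.0), 3))
    ```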

  1. Computer Language For Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.; Lucas, Stephen H.

    1991-01-01

    SOL is a computer language geared to the solution of design problems. It includes the mathematical modeling and logical capabilities of a computer language like FORTRAN, and also includes the additional power of nonlinear mathematical programming methods at the language level. The SOL compiler takes SOL-language statements and generates equivalent FORTRAN code and system calls. It provides syntactic and semantic checking for recovery from errors and provides detailed reports containing cross-references to show where each variable is used. Implemented on VAX/VMS computer systems. Requires the VAX FORTRAN compiler to produce an executable program.

  2. [Birth and death process of computer viruses].

    PubMed

    Segawa, Katsunori; Nakano, Tatsuya; Nakata, Kotoko; Hayashi, Yuzuru

    2006-01-01

    The daily variations in the number of computer viruses found attached to e-mails and the number of accesses to the home page of a national institute in Japan are examined. The power spectral densities (PSD) of the variation in the computer viruses show a time correlation characteristic of a Markov process, but the daily access number does not (it is identified as white noise). Like biological viruses, the variation in the computer viruses can be described by the birth-and-death model known as a Markov process.
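
    A linear birth-and-death process of the kind invoked here is easy to simulate event by event. The sketch below is an illustrative Gillespie-style simulation; the rates, initial count, and time horizon are invented, not fitted to the paper's data.

    ```python
    # Illustrative event-by-event simulation of a linear birth-death process.
    import random

    def birth_death(n0, birth, death, t_end, seed=1):
        """Simulate the count up to time t_end; return the (time, count) path."""
        random.seed(seed)
        t, n, path = 0.0, n0, [(0.0, n0)]
        while t < t_end and n > 0:
            t += random.expovariate((birth + death) * n)   # waiting time
            n += 1 if random.random() < birth / (birth + death) else -1
            path.append((t, n))
        return path

    print(birth_death(n0=20, birth=0.9, death=1.0, t_end=10.0)[-5:])
    ```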

  3. Using a hybrid neuron in physiologically inspired models of the basal ganglia.

    PubMed

    Thibeault, Corey M; Srinivasa, Narayan

    2013-01-01

    Our current understanding of the basal ganglia (BG) has facilitated the creation of computational models that have contributed novel theories, explored new functional anatomy and demonstrated results complementing physiological experiments. However, the utility of these models extends beyond these applications, particularly in neuromorphic engineering, where the basal ganglia's role in computation is important for applications such as power-efficient autonomous agents and model-based control strategies. The neurons used in existing computational models of the BG, however, are not amenable to many low-power hardware implementations. Motivated by a need for more hardware-accessible networks, we replicate four published models of the BG, spanning single neurons and small networks, replacing the more computationally expensive neuron models with an Izhikevich hybrid neuron. This begins with a network modeling action-selection, where the basal activity levels and the ability to appropriately select the most salient input are reproduced. A Parkinson's disease model is then explored under normal conditions, Parkinsonian conditions and subthalamic nucleus deep brain stimulation (DBS). The resulting network is capable of replicating the loss of thalamic relay capabilities in the Parkinsonian state and its return under DBS. This is also demonstrated using a network capable of action-selection. Finally, a study of correlation transfer under different patterns of Parkinsonian activity is presented. These networks successfully captured the significant results of the original studies. This not only creates a foundation for neuromorphic hardware implementations but may also support the development of large-scale biophysical models, the former potentially providing a way of improving the efficacy of DBS and the latter allowing for the efficient simulation of larger, more comprehensive networks.
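
    The Izhikevich hybrid neuron the study swaps in is defined by two coupled differential equations with a discontinuous reset, which is what makes it cheap enough for hardware. A minimal sketch, using the standard regular-spiking parameter set and an arbitrary constant test current (both assumptions, not the paper's tuned values):

    ```python
    # Minimal Euler integration of the Izhikevich hybrid neuron model.
    def izhikevich_spikes(I, T=1000.0, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
        """Integrate the model for T ms; return spike times in ms."""
        v, u, spikes = -65.0, -65.0 * b, []
        for k in range(int(T / dt)):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:               # spike detected: apply the hybrid reset
                spikes.append(k * dt)
                v, u = c, u + d
        return spikes

    print(izhikevich_spikes(I=10.0)[:5])   # first few spike times
    ```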

  4. Modelling switching-time effects in high-frequency power conditioning networks

    NASA Technical Reports Server (NTRS)

    Owen, H. A.; Sloane, T. H.; Rimer, B. H.; Wilson, T. G.

    1979-01-01

    Power transistor networks which switch large currents in highly inductive environments are beginning to find application in the hundred-kilohertz switching frequency range. Recent developments in the fabrication of metal-oxide-semiconductor field-effect transistors in the power device category have enhanced the movement toward higher switching frequencies. Models for switching devices and of the circuits in which they are embedded are required to properly characterize the mechanisms responsible for turn-on and turn-off effects. Easily interpreted results in the form of oscilloscope-like plots assist in understanding the effects of parametric studies using topology-oriented computer-aided analysis methods.

  5. The application of simulation modeling to the cost and performance ranking of solar thermal power plants

    NASA Technical Reports Server (NTRS)

    Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.

    1981-01-01

    Small solar thermal power systems (up to 10 MWe in size) were tested. The solar thermal power plant ranking study was performed to aid in experiment activity and support decisions for the selection of the most appropriate technological approach. The cost and performance were determined for insolation conditions by utilizing the Solar Energy Simulation computer code (SESII). This model optimizes the size of the collector field and energy storage subsystem for given engine generator and energy transport characteristics. The development of the simulation tool, its operation, and the results achieved from the analysis are discussed.

  6. An analytical design approach for self-powered active lateral secondary suspensions for railway vehicles

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Li, Hong; Zhang, Jiye; Mei, TX

    2015-10-01

    In this paper, an analytical design approach for the development of self-powered active suspensions is investigated and is applied to optimise the control system design for an active lateral secondary suspension for railway vehicles. The conditions for energy balance are analysed and the relationship between the ride quality improvement and energy consumption is discussed in detail. Modal skyhook control is applied to analyse the energy consumption of this suspension by separating its dynamics into the lateral and yaw modes, and, based on a simplified model, the average power consumption of the actuators is computed in the frequency domain by using the power spectral density of the lateral alignment of track irregularities. Then the impact of control gains and the actuators' key parameters on the performance for both vibration suppression and energy recovery/storage is analysed. Computer simulation is used to verify the obtained energy balance condition and to demonstrate that improved ride comfort is achieved by this self-powered active suspension without any external power supply.
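
    Modal skyhook control, as used in the paper, decomposes the body's lateral motion into lateral and yaw modes and damps each against an inertial reference. The sketch below is a schematic version of that decomposition for two actuators; the mode mapping, gains, and geometry are illustrative assumptions rather than the paper's design.

    ```python
    # Schematic modal skyhook law for a two-actuator lateral suspension.
    def modal_skyhook(v_front, v_rear, c_lat, c_yaw, half_spacing):
        """Map front/rear lateral body velocities to two actuator forces."""
        v_lateral = 0.5 * (v_front + v_rear)                # lateral mode
        w_yaw = (v_front - v_rear) / (2.0 * half_spacing)   # yaw mode rate
        f_mode = -c_lat * v_lateral                         # skyhook damping
        m_yaw = -c_yaw * w_yaw                              # skyhook yaw moment
        f_arm = m_yaw / (2.0 * half_spacing)                # moment to force pair
        return f_mode + f_arm, f_mode - f_arm               # front, rear forces

    print(modal_skyhook(v_front=0.12, v_rear=0.04, c_lat=2.0e4, c_yaw=5.0e5,
                        half_spacing=9.5))
    ```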

  7. A tool for modeling concurrent real-time computation

    NASA Technical Reports Server (NTRS)

    Sharma, D. D.; Huang, Shie-Rei; Bhatt, Rahul; Sridharan, N. S.

    1990-01-01

    Real-time computation is a significant area of research in general, and in AI in particular. The complexity of practical real-time problems demands use of knowledge-based problem solving techniques while satisfying real-time performance constraints. Since the demands of a complex real-time problem cannot be predicted (owing to the dynamic nature of the environment) powerful dynamic resource control techniques are needed to monitor and control the performance. A real-time computation model for a real-time tool, an implementation of the QP-Net simulator on a Symbolics machine, and an implementation on a Butterfly multiprocessor machine are briefly described.

  8. New insights into faster computation of uncertainties

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Atreyee

    2012-11-01

    Heavy computation power, lengthy simulations, and an exhaustive number of model runs—often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, calculation of uncertainties need not be as lengthy, a new study shows. Comparing two approaches—the classical Bayesian “credible interval” and a less commonly used regression-based “confidence interval” method—Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands 10-1000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.

  9. I-deas TMG to NX Space Systems Thermal Model Conversion and Computational Performance Comparison

    NASA Technical Reports Server (NTRS)

    Somawardhana, Ruwan

    2011-01-01

    CAD/CAE packages change on a continuous basis as the power of the tools increases to meet demands. End-users must adapt to new products as they come to market and replace legacy packages. CAE modeling has continued to evolve and is constantly becoming more detailed and complex, though this comes at the cost of increased computing requirements. Parallel processing coupled with appropriate hardware can minimize computation time. Users of Maya Thermal Model Generator (TMG) are faced with transitioning from NX I-deas to NX Space Systems Thermal (SST). It is important to understand what differences there are when changing software packages; we are looking for consistency in results.

  10. Balancing reliability and cost to choose the best power subsystem

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.; Patterson, Richard L.

    1991-01-01

    A mathematical model is presented for computing total (spacecraft) subsystem cost including both the basic subsystem cost and the expected cost due to the failure of the subsystem. This model is then used to determine power subsystem cost as a function of reliability and redundancy. Minimum cost and maximum reliability and/or redundancy are not generally equivalent. Two example cases are presented. One is a small satellite, and the other is an interplanetary spacecraft.

  11. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-06-05

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.

  12. A SINDA thermal model using CAD/CAE technologies

    NASA Technical Reports Server (NTRS)

    Rodriguez, Jose A.; Spencer, Steve

    1992-01-01

    The approach to thermal analysis described by this paper is a technique that incorporates Computer Aided Design (CAD) and Computer Aided Engineering (CAE) to develop a thermal model that has the advantages of Finite Element Methods (FEM) without abandoning the unique advantages of Finite Difference Methods (FDM) in the analysis of thermal systems. The incorporation of existing CAD geometry, the powerful use of a pre and post processor and the ability to do interdisciplinary analysis, will be described.

  13. Modelling the performance of the tapered artery heat pipe design for use in the radiator of the solar dynamic power system of the NASA Space Station

    NASA Technical Reports Server (NTRS)

    Evans, Austin Lewis

    1988-01-01

    The paper presents a computer program developed to model the steady-state performance of the tapered artery heat pipe for use in the radiator of the solar dynamic power system of the NASA Space Station. The program solves six governing equations to ascertain which one is limiting the maximum heat transfer rate of the heat pipe. The present model appeared to be slightly better than the LTV model in matching the 1-g data for the standard 15-ft test heat pipe.

  14. Creation of Power Reserves Under the Market Economy Conditions

    NASA Astrophysics Data System (ADS)

    Mahnitko, A.; Gerhards, J.; Lomane, T.; Ribakov, S.

    2008-09-01

    The main task of the control over an electric power system (EPS) is to ensure reliable power supply at the least cost. In this case, requirements to the electric power quality, power supply reliability and cost limitations on the energy resources must be observed. The available power reserve in an EPS is the necessary condition to keep it in operation with maintenance of normal operating variables (frequency, node voltage, power flows via the transmission lines, etc.). The authors examine possibilities to create power reserves that could be offered for sale by the electric power producer. They consider a procedure of price formation for the power reserves and propose a relevant mathematical model for a united EPS, the initial data being the fuel-cost functions for individual systems, technological limitations on the active power generation and consumers' load. As the criterion of optimization the maximum profit for the producer is taken. The model is exemplified by a concentrated EPS. The computations have been performed using the MATLAB program.
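
    The pricing procedure the abstract outlines pairs a producer's fuel-cost function with limits on generation and takes maximum producer profit as the optimization criterion. A toy version of that trade-off for a single producer with a quadratic fuel-cost curve is sketched below; the cost coefficients, capacity, and reserve price are invented, and the authors' MATLAB model for a united EPS is of course far richer.

    ```python
    # Toy reserve-offer optimization for one producer (assumed quadratic cost).
    def best_reserve_offer(price, base_mw, cap_mw, a, b, c, step=1.0):
        """Scan reserve offers; return (reserve_MW, profit) maximizing profit."""
        def fuel_cost(p):                    # quadratic fuel-cost function
            return a + b * p + c * p * p
        best_r, best_profit = 0.0, 0.0
        r = 0.0
        while base_mw + r <= cap_mw:         # technological capacity limit
            extra_cost = fuel_cost(base_mw + r) - fuel_cost(base_mw)
            profit = price * r - extra_cost
            if profit > best_profit:
                best_r, best_profit = r, profit
            r += step
        return best_r, best_profit

    print(best_reserve_offer(price=35.0, base_mw=200.0, cap_mw=300.0,
                             a=500.0, b=20.0, c=0.02))   # capacity-limited case
    ```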

  15. Modeling Imperfect Generator Behavior in Power System Operation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krad, Ibrahim

    A key component in power system operations is the use of computer models to quickly study and analyze different operating conditions and futures in an efficient manner. The outputs of these models are sensitive to the data used in them as well as the assumptions made during their execution. One typical assumption is that generators and load assets perfectly follow operator control signals. While this is a valid simulation assumption, generators may not always accurately follow control signals. This imperfect response of generators could impact cost and reliability metrics. This paper proposes a generator model that captures this imperfect behavior and examines its impact on production costs and reliability metrics using a steady-state power system operations model. Preliminary analysis shows that while costs remain relatively unchanged, there could be significant impacts on reliability metrics.

  16. Travel demand forecasting models: a comparison of EMME/2 and QUR II using a real-world network.

    DOT National Transportation Integrated Search

    2000-10-01

    In order to automate the travel demand forecasting process in urban transportation planning, a number of : commercial computer based travel demand forecasting models have been developed, which have provided : transportation planners with powerful and...

  17. Traction Drive Inverter Cooling with Submerged Liquid Jet Impingement on Microfinned Enhanced Surfaces (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waye, S.; Narumanchi, S.; Moreno, G.

    Jet impingement is one means to improve thermal management for power electronics in electric-drive traction vehicles. Jet impingement on microfin-enhanced surfaces further augments heat transfer and thermal performance. A channel flow heat exchanger from a commercial inverter was characterized as a baseline system for comparison with two new prototype designs using liquid jet impingement on plain and microfinned enhanced surfaces. The submerged jets can target areas with the highest heat flux to provide local cooling, such as areas under insulated-gate bipolar transistors and diode devices. Low power experiments, where four diodes were powered, dissipated 105 W of heat and were used to validate computational fluid dynamics modeling of the baseline and prototype designs. Experiments and modeling used typical automotive flow rates using water-ethylene glycol as a coolant (50%-50% by volume). The computational fluid dynamics model was used to predict full inverter power heat dissipation. The channel flow and jet impingement configurations were tested at full inverter power of 40 to 100 kW (output power) on a dynamometer, translating to an approximate heat dissipation of 1 to 2 kW. With jet impingement, the cold plate material is not critical for the thermal pathway. A high-temperature plastic was used that could eventually be injection molded or formed, with the jets formed from a basic aluminum plate with orifices acting as nozzles. Long-term reliability of the jet nozzles and impingement on enhanced surfaces was examined. For jet impingement on microfinned surfaces, thermal performance increased 17%. Along with a weight reduction of approximately 3 kg, the specific power (kW/kg) increased by 36%, with an increase in power density (kW/L) of 12% compared with the baseline channel flow configuration.

  18. Short-term Power Load Forecasting Based on Balanced KNN

    NASA Astrophysics Data System (ADS)

    Lv, Xianlong; Cheng, Xingong; YanShuang; Tang, Yan-mei

    2018-03-01

    To improve the accuracy of load forecasting, a short-term load forecasting model based on a balanced KNN algorithm is proposed. According to the load characteristics, the historical data of massive power loads are divided into scenes by the K-means algorithm. In view of unbalanced load scenes, the balanced KNN algorithm is proposed to classify the scenes accurately. The local weighted linear regression algorithm is used to fit and predict the load. Adopting the Apache Hadoop programming framework of cloud computing, the proposed algorithm model is parallelized and improved to enhance its ability to deal with massive and high-dimensional data. The analysis of the household electricity consumption data for a residential district is done by a 23-node cloud computing cluster, and experimental results show that the load forecasting accuracy and execution time of the proposed model are better than those of traditional forecasting algorithms.
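
    The pipeline the abstract describes has three stages: cluster history into scenes, match a new profile to a scene, then fit a locally weighted linear regression within that scene. A compact sketch of those stages on synthetic data follows; the balancing of under-represented scenes and the Hadoop parallelization are omitted, and all shapes, hyperparameters, and the target definition are assumptions.

    ```python
    # Skeleton of the scene-based forecasting pipeline on synthetic data.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X = rng.random((500, 24))                        # daily 24-point load profiles
    y = X[:, -1] + 0.1 * rng.standard_normal(500)    # synthetic next-hour load

    scenes = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
    knn = KNeighborsClassifier(n_neighbors=7).fit(X, scenes)

    def forecast(x_new, tau=0.5):
        """Match the scene, then fit a locally weighted linear regression."""
        scene = knn.predict(x_new.reshape(1, -1))[0]
        Xs, ys = X[scenes == scene], y[scenes == scene]
        w = np.sqrt(np.exp(-np.sum((Xs - x_new) ** 2, axis=1) / (2 * tau ** 2)))
        A = np.hstack([Xs, np.ones((len(Xs), 1))])   # affine design matrix
        theta, *_ = np.linalg.lstsq(A * w[:, None], ys * w, rcond=None)
        return float(np.append(x_new, 1.0) @ theta)

    print(forecast(rng.random(24)))
    ```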

  19. The virtual digital nuclear power plant: A modern tool for supporting the lifecycle of VVER-based nuclear power units

    NASA Astrophysics Data System (ADS)

    Arkadov, G. V.; Zhukavin, A. P.; Kroshilin, A. E.; Parshikov, I. A.; Solov'ev, S. L.; Shishov, A. V.

    2014-10-01

    The article describes the "Virtual Digital VVER-Based Nuclear Power Plant" computerized system comprising a totality of verified initial data (sets of input data for a model intended for describing the behavior of nuclear power plant (NPP) systems in design and emergency modes of their operation) and a unified system of new-generation computation codes intended for carrying out coordinated computation of the variety of physical processes in the reactor core and NPP equipment. Experiments with the demonstration version of the "Virtual Digital VVER-Based NPP" computerized system have shown that it is in principle possible to set up a unified system of computation codes in a common software environment for carrying out interconnected calculations of various physical phenomena at NPPs constructed according to the standard AES-2006 project. With the full-scale version of the "Virtual Digital VVER-Based NPP" computerized system put in operation, the concerned engineering, design, construction, and operating organizations will have access to all necessary information relating to the NPP power unit project throughout its entire lifecycle. The domestically developed commercial-grade software product, set to operate as an independent application to the project, will bring about additional competitive advantages in the modern market of nuclear power technologies.

  20. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid.

    PubMed

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-02-01

    Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one of the typical applications supported by IoT and denoting a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of the power cumulative cost and the service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has been traditionally used for carrying out traffic engineering in computer networks, to derive the bounds of both power supply and user demand to achieve a high service reliability to users. Through an extensive performance evaluation, our data shows that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid.

  1. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid

    PubMed Central

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-01-01

    Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one of the typical applications supported by IoT and denoting a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of the power cumulative cost and the service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has been traditionally used for carrying out traffic engineering in computer networks, to derive the bounds of both power supply and user demand to achieve a high service reliability to users. Through an extensive performance evaluation, our data shows that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid. PMID:29354654
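
    Network calculus, as borrowed here from traffic engineering, bounds a served process by an arrival (demand) envelope and a service (supply) curve. For the textbook token-bucket/rate-latency pair, the backlog and delay bounds have closed forms, sketched below; the interpretation of the parameters as energy quantities and the numbers themselves are illustrative assumptions, not the paper's derivation.

    ```python
    # Classical network-calculus bounds for a token-bucket arrival envelope
    # alpha(t) = b + r*t and a rate-latency service curve beta(t) = R*max(t-T, 0).
    def power_service_bounds(b, r, R, T):
        """Return (backlog_bound, delay_bound) for the alpha/beta pair above."""
        if r > R:
            raise ValueError("demand rate exceeds supply rate: no finite bound")
        backlog_bound = b + r * T     # maximum vertical gap between the curves
        delay_bound = T + b / R       # maximum horizontal gap between the curves
        return backlog_bound, delay_bound

    # Hypothetical bursty demand (b = 5 kWh, r = 2 kW) on storage-backed supply.
    print(power_service_bounds(b=5.0, r=2.0, R=4.0, T=0.5))
    ```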

  2. Full cell simulation and the evaluation of the buffer system on air-cathode microbial fuel cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.

    This paper presents a computational model of a single chamber, air-cathode MFC. The model considers losses due to mass transport, as well as biological and electrochemical reactions, in both the anode and cathode half-cells. Computational fluid dynamics and Monod-Nernst analysis are incorporated into the reactions for the anode biofilm and cathode Pt catalyst and biofilm. The integrated model provides a macro-perspective of the interrelation between the anode and cathode during power production, while incorporating microscale contributions of mass transport within the anode and cathode layers. Model considerations include the effects of pH (H+/OH- transport) and electric field-driven migration on concentration overpotential, effects of various buffers and various amounts of buffer on the pH in the whole reactor, and overall impacts on the power output of the MFC. The simulation results fit the experimental polarization and power density curves well. Further, this model provides insight regarding mass transport at varying current density regimes and quantitative delineation of overpotentials at the anode and cathode. Altogether, this comprehensive simulation is designed to accurately predict MFC performance based on fundamental fluid and kinetic relations and guide optimization of the MFC system.

  3. Full cell simulation and the evaluation of the buffer system on air-cathode microbial fuel cell

    DOE PAGES

    Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.; ...

    2017-02-23

    This paper presents a computational model of a single chamber, air-cathode MFC. The model considers losses due to mass transport, as well as biological and electrochemical reactions, in both the anode and cathode half-cells. Computational fluid dynamics and Monod-Nernst analysis are incorporated into the reactions for the anode biofilm and cathode Pt catalyst and biofilm. The integrated model provides a macro-perspective of the interrelation between the anode and cathode during power production, while incorporating microscale contributions of mass transport within the anode and cathode layers. Model considerations include the effects of pH (H+/OH- transport) and electric field-driven migration on concentration overpotential, effects of various buffers and various amounts of buffer on the pH in the whole reactor, and overall impacts on the power output of the MFC. The simulation results fit the experimental polarization and power density curves well. Further, this model provides insight regarding mass transport at varying current density regimes and quantitative delineation of overpotentials at the anode and cathode. Altogether, this comprehensive simulation is designed to accurately predict MFC performance based on fundamental fluid and kinetic relations and guide optimization of the MFC system.

  4. Remembrance of phases past: An autoregressive method for generating realistic atmospheres in simulations

    NASA Astrophysics Data System (ADS)

    Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.

    2014-08-01

    The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems from the top of the atmosphere to control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computation resources and allows for variability in the power contained in frozen flow or stochastic components of the atmosphere. Users have the flexibility of generating atmosphere datacubes in advance of runs where memory constraints allow to save on computation time or of computing the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.

  5. Modeling and design of Galfenol unimorph energy harvesters

    NASA Astrophysics Data System (ADS)

    Deng, Zhangxian; Dapino, Marcelo J.

    2015-12-01

    This article investigates the modeling and design of vibration energy harvesters that utilize iron-gallium (Galfenol) as a magnetoelastic transducer. Galfenol unimorphs are of particular interest; however, advanced models and design tools are lacking for these devices. Experimental measurements are presented for various unimorph beam geometries. A maximum average power density of 24.4 mW cm^-3 and peak power density of 63.6 mW cm^-3 are observed. A modeling framework with fully coupled magnetoelastic dynamics, formulated as a 2D finite element model, and lumped-parameter electrical dynamics is presented and validated. A comprehensive parametric study considering pickup coil dimensions, beam thickness ratio, tip mass, bias magnet location, and remanent flux density (supplied by bias magnets) is developed for a 200 Hz, 9.8 m s^-2 amplitude harmonic base excitation. For the set of optimal parameters, the maximum average power density and peak power density computed by the model are 28.1 and 97.6 mW cm^-3, respectively.

  6. Real-time computing platform for spiking neurons (RT-spike).

    PubMed

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
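
    The computationally expensive part singled out here is the synaptic integration: each input triggers a conductance that injects charge gradually with a synaptic time constant, which forces time-stepped rather than purely event-driven updates. A rough software sketch of that computation for one neuron follows; the constants, units, and reset rule are illustrative, not the platform's hardware implementation.

    ```python
    # Time-stepped sketch of a conductance-synapse neuron (illustrative values).
    def srm_neuron(input_spikes, T=100.0, dt=0.1, tau_m=10.0, tau_s=2.0,
                   g_step=0.5, e_syn=0.0, v_rest=-70.0, v_th=-54.0):
        """Step the membrane on a fixed grid; return output spike times (ms)."""
        v, g, out = v_rest, 0.0, []
        arrivals = set(input_spikes)
        for k in range(int(T / dt)):
            t = round(k * dt, 3)
            if t in arrivals:
                g += g_step                            # input opens the synapse
            g -= dt * g / tau_s                        # conductance decays
            v += dt * (-(v - v_rest) + g * (e_syn - v)) / tau_m   # gradual charge
            if v >= v_th:                              # threshold crossing
                out.append(t)
                v = v_rest                             # reset membrane
        return out

    print(srm_neuron([5.0, 6.0, 7.0, 8.0, 9.0]))
    ```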

  7. Generative models for clinical applications in computational psychiatry.

    PubMed

    Frässle, Stefan; Yao, Yu; Schöbi, Dario; Aponte, Eduardo A; Heinzle, Jakob; Stephan, Klaas E

    2018-05-01

    Despite the success of modern neuroimaging techniques in furthering our understanding of cognitive and pathophysiological processes, translation of these advances into clinically relevant tools has been virtually absent until now. Neuromodeling represents a powerful framework for overcoming this translational deadlock, and the development of computational models to solve clinical problems has become a major scientific goal over the last decade, as reflected by the emergence of clinically oriented neuromodeling fields like Computational Psychiatry, Computational Neurology, and Computational Psychosomatics. Generative models of brain physiology and connectivity in the human brain play a key role in this endeavor, striving for computational assays that can be applied to neuroimaging data from individual patients for differential diagnosis and treatment prediction. In this review, we focus on dynamic causal modeling (DCM) and its use for Computational Psychiatry. DCM is a widely used generative modeling framework for functional magnetic resonance imaging (fMRI) and magneto-/electroencephalography (M/EEG) data. This article reviews the basic concepts of DCM, revisits examples where it has proven valuable for addressing clinically relevant questions, and critically discusses methodological challenges and recent methodological advances. We conclude this review with a more general discussion of the promises and pitfalls of generative models in Computational Psychiatry and highlight the path that lies ahead of us. This article is categorized under: Neuroscience > Computation Neuroscience > Clinical Neuroscience. © 2018 Wiley Periodicals, Inc.

  8. Computational Design of DNA-Binding Proteins.

    PubMed

    Thyme, Summer; Song, Yifan

    2016-01-01

    Predicting the outcome of engineered and naturally occurring sequence perturbations to protein-DNA interfaces requires accurate computational modeling technologies. It has been well established that computational design to accommodate small numbers of DNA target site substitutions is possible. This chapter details the basic method of design used in the Rosetta macromolecular modeling program that has been successfully used to modulate the specificity of DNA-binding proteins. More recently, combining computational design and directed evolution has become a common approach for increasing the success rate of protein engineering projects. The power of such high-throughput screening depends on computational methods producing multiple potential solutions. Therefore, this chapter describes several protocols for increasing the diversity of designed output. Lastly, we describe an approach for building comparative models of protein-DNA complexes in order to utilize information from homologous sequences. These models can be used to explore how nature modulates specificity of protein-DNA interfaces and potentially can even be used as starting templates for further engineering.

  9. A Novel Market-Oriented Dynamic Collaborative Cloud Service Platform

    NASA Astrophysics Data System (ADS)

    Hassan, Mohammad Mehedi; Huh, Eui-Nam

    In today's world the emerging Cloud computing (Weiss, 2007) offers a new computing model where resources such as computing power, storage, online applications and networking infrastructures can be shared as "services" over the Internet. Cloud providers (CPs) are incentivized by the profits to be made by charging consumers for accessing these services. Consumers, such as enterprises, are attracted by the opportunity to reduce or eliminate costs associated with "in-house" provision of these services.

  10. Comparison among mathematical models of the photovoltaic cell for computer simulation purposes

    NASA Astrophysics Data System (ADS)

    Tofoli, Fernando Lessa; Pereira, Denis de Castro; Josias De Paula, Wesley; Moreira Vicente, Eduardo; Vicente, Paula dos Santos; Braga, Henrique Antonio Carvalho

    2017-07-01

    This paper presents a comparison among mathematical models used in the simulation of solar photovoltaic modules that can be easily integrated with power electronic converters. In order to perform the analysis, three models available in the literature and also the physical model of the module in the PSIM® software are used. Some results regarding the respective I × V and P × V curves are presented, while some advantages and eventual limitations are discussed. Besides, a DC-DC buck converter performs maximum power point tracking by using the perturb-and-observe method, while the performance of each of the aforementioned models is investigated.
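
    Perturb-and-observe, the tracking method the paper pairs with the buck converter, needs only a repeated nudge of the operating point and a comparison of the resulting power. The sketch below shows the bare loop against an invented single-peak P-V curve; the curve, step size, and iteration count are stand-ins, not the paper's converter model.

    ```python
    # Bare perturb-and-observe loop against an invented P-V curve.
    def pv_power(v):
        """Invented single-peak P-V stand-in (maximum a little above 15 V)."""
        return max(0.0, v * (5.0 - 0.25 * max(0.0, v - 15.0) ** 2))

    def perturb_and_observe(v0=10.0, step=0.5, iters=40):
        v, p_prev, direction = v0, pv_power(v0), 1.0
        for _ in range(iters):
            v += direction * step               # perturb the operating voltage
            p = pv_power(v)                     # observe the resulting power
            if p < p_prev:
                direction = -direction          # power fell: reverse direction
            p_prev = p
        return v

    print(round(perturb_and_observe(), 2))      # settles near the power peak
    ```

    In a real converter the "perturbation" is a duty-cycle change and the "observation" is a sensed voltage-current product, but the decision logic is exactly this loop.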

  11. Applications of Multi-Agent Technology to Power Systems

    NASA Astrophysics Data System (ADS)

    Nagata, Takeshi

    Currently, agents are the focus of intense interest in many sub-fields of computer science and artificial intelligence. Agents are being used in an increasingly wide variety of applications. Many important computing applications such as planning, process control, communication networks and concurrent systems will benefit from using a multi-agent system approach. A multi-agent system is a structure given by an environment together with a set of artificial agents capable of acting on this environment. Multi-agent models are oriented towards interactions, collaborative phenomena, and autonomy. This article presents applications of multi-agent technology to power systems.

  12. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-01

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with a CMOS-based analog circuit-model of a neuron shows that "spin-neurons" (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of the future.

  13. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with a CMOS-based analog circuit-model of a neuron shows that "spin-neurons" (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of the future.

  14. Combining Thermal And Structural Analyses

    NASA Technical Reports Server (NTRS)

    Winegar, Steven R.

    1990-01-01

    A computer code makes the programs compatible so that stresses and deformations can be calculated. The paper describes a computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), the code provides an interface between a finite-difference thermal model of a system and a finite-element structural model when there is no node-to-element correlation between the models. It eliminates much manual work in converting temperature results of the SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for the NASTRAN (NASA Structural Analysis) program. It was used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models are needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.

  15. Development of the simulation system {open_quotes}IMPACT{close_quotes} for analysis of nuclear power plant severe accidents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naitoh, Masanori; Ujita, Hiroshi; Nagumo, Hiroichi

    1997-07-01

    The Nuclear Power Engineering Corporation (NUPEC) has initiated a long-term program to develop the simulation system "IMPACT" for analysis of hypothetical severe accidents in nuclear power plants. IMPACT employs advanced methods of physical modeling and numerical computation, and can simulate a wide spectrum of scenarios ranging from normal operation to hypothetical, beyond-design-basis-accident events. Designed as a large-scale system of interconnected, hierarchical modules, IMPACT's distinguishing features include mechanistic models based on first principles and high-speed simulation on parallel processing computers. The present plan is a ten-year program starting from 1993, consisting of an initial year of preparatory work followed by three technical phases: Phase-1 for development of a prototype system; Phase-2 for completion of the simulation system, incorporating new achievements from basic studies; and Phase-3 for refinement through extensive verification and validation against test results and available real plant data.

  16. Data communication requirements for the advanced NAS network

    NASA Technical Reports Server (NTRS)

    Levin, Eugene; Eaton, C. K.; Young, Bruce

    1986-01-01

    The goal of the Numerical Aerodynamic Simulation (NAS) Program is to provide a powerful computational environment for advanced research and development in aeronautics and related disciplines. The present NAS system consists of a Cray 2 supercomputer connected by a data network to a large mass storage system, to sophisticated local graphics workstations, and by remote communications to researchers throughout the United States. The program plan is to continue acquiring the most powerful supercomputers as they become available. In the 1987/1988 time period it is anticipated that a computer with 4 times the processing speed of a Cray 2 will be obtained and by 1990 an additional supercomputer with 16 times the speed of the Cray 2. The implications of this 20-fold increase in processing power on the data communications requirements are described. The analysis was based on models of the projected workload and system architecture. The results are presented together with the estimates of their sensitivity to assumptions inherent in the models.

  17. Fan Noise Source Diagnostic Test Computation of Rotor Wake Turbulence Noise

    NASA Technical Reports Server (NTRS)

    Nallasamy, M.; Envia, E.; Thorp, S. A.; Shabbir, A.

    2002-01-01

    An important source mechanism of fan broadband noise is the interaction of rotor wake turbulence with the fan outlet guide vanes. A broadband noise model that utilizes computed rotor flow turbulence from a RANS code is used to predict fan broadband noise spectra. The noise model is employed to examine the broadband noise characteristics of the 22-inch Source Diagnostic Test fan rig, for which broadband noise data were obtained in wind tunnel tests at the NASA Glenn Research Center. A 9-case matrix of three outlet guide vane configurations at three representative fan tip speeds is considered. For all cases, inlet and exhaust acoustic power spectra are computed and compared with the measured spectra where possible. In general, the acoustic power levels and shape of the predicted spectra are in good agreement with the measured data. The predicted spectra show the experimentally observed trends with fan tip speed, vane count, and vane sweep. The results also demonstrate the validity of using CFD-based turbulence information for fan broadband noise calculations.

  18. Water Power Data and Tools | Water Power | NREL

    Science.gov Websites

    Combines computer modeling tools and data with state-of-the-art design and analysis. Resources include WEC-Sim materials on the National Wind Technology Center's Information Portal, a WEC-Sim fact sheet, and the WEC Design Response Toolbox, which provides extreme-response and fatigue-analysis tools.

  19. Computer-Aided Engineering for Electric-Drive Vehicle Batteries (CAEBAT)

    Science.gov Websites

    Project partners include Battery Design LLC, CD-adapco, EC Power, ESim, Ford, General Motors (GM), and Johnson Controls, Inc. Related battery-modeling publications include R. Spotnitz, "Design and Simulation of Spirally-Wound, Lithium-Ion Cells" (April 2013), and "Effect of Tab Design on Large-Format Li-ion Cell Performance," Journal of Power Sources 257, 70-79.

  20. A Long-Term Model for the Curriculum of Training for an Electric-Power Specialist

    ERIC Educational Resources Information Center

    Venikov, V. A.

    1978-01-01

    Long-term planning for professional training of electric-power specialists in Russia will have to (1) recognize the need for specialists to adapt to unforeseen developments in the field, (2) include new mathematics, physics, and computer technology, and (3) be prepared for changes in methods of production and transformation of energy. (AV)

  1. Introducing Non-Newtonian Fluid Mechanics Computations with Mathematica in the Undergraduate Curriculum

    ERIC Educational Resources Information Center

    Binous, Housam

    2007-01-01

    We study four non-Newtonian fluid mechanics problems using Mathematica[R]. Constitutive equations describing the behavior of power-law, Bingham and Carreau models are recalled. The velocity profile is obtained for the horizontal flow of power-law fluids in pipes and annuli. For the vertical laminar film flow of a Bingham fluid we determine the…

  2. Power spectrum, correlation function, and tests for luminosity bias in the CfA redshift survey

    NASA Astrophysics Data System (ADS)

    Park, Changbom; Vogeley, Michael S.; Geller, Margaret J.; Huchra, John P.

    1994-08-01

    We describe and apply a method for directly computing the power spectrum for the galaxy distribution in the extension of the Center for Astrophysics Redshift Survey. Tests show that our technique accurately reproduces the true power spectrum for k greater than 0.03 h Mpc^-1. The dense sampling and large spatial coverage of this survey allow accurate measurement of the redshift-space power spectrum on scales from 5 to approximately 200 h^-1 Mpc. The power spectrum has slope n approximately equal to -2.1 on small scales (lambda less than or equal to 25 h^-1 Mpc) and n approximately -1.1 on scales 30 less than lambda less than 120 h^-1 Mpc. On larger scales the power spectrum flattens somewhat, but we do not detect a turnover. Comparison with N-body simulations of cosmological models shows that an unbiased, open universe CDM model (OMEGA h = 0.2) and a nonzero cosmological constant (LambdaCDM) model (OMEGA h = 0.24, lambda_0 = 0.6, b = 1.3) match the CfA power spectrum over the wavelength range we explore. The standard biased CDM model (OMEGA h = 0.5, b = 1.5) fails (99% significance level) because it has insufficient power on scales lambda greater than 30 h^-1 Mpc. Biased CDM with a normalization that matches the Cosmic Microwave Background (CMB) anisotropy (OMEGA h = 0.5, b = 1.4, sigma_8 (mass) = 1) has too much power on small scales to match the observed galaxy power spectrum. This model with b = 1 matches both the Cosmic Background Explorer Satellite (COBE) data and the small-scale power spectrum but has insufficient power on scales lambda approximately 100 h^-1 Mpc. We derive a formula for the effect of small-scale peculiar velocities on the power spectrum and combine this formula with the linear-regime amplification described by Kaiser to compute an estimate of the real-space power spectrum. Two tests reveal luminosity bias in the galaxy distribution: First, the amplitude of the power spectrum is approximately 40% larger for the brightest 50% of galaxies in volume-limited samples that have Mlim greater than M*. This bias in the power spectrum is independent of scale, consistent with the peaks-bias paradigm for galaxy formation. Second, the distribution of local density around galaxies shows that regions of moderate and high density contain both very bright (M less than M* = -19.2 + 5 log h) and fainter galaxies, but that voids preferentially harbor fainter galaxies (approximately 2 sigma significance level).
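
    For readers who want the gist of direct power-spectrum estimation, the sketch below computes P(k) for a toy one-dimensional periodic density field by Fourier transforming the overdensity and squaring the modes. The survey's actual estimator is three-dimensional and carefully weighted for the sampling geometry; the normalization convention and the white-noise test field here are illustrative assumptions.

    ```python
    # Toy 1-D direct power-spectrum estimate via FFT (illustrative convention).
    import numpy as np

    def power_spectrum_1d(delta, box_size):
        """Return (k, P(k)) for a periodic 1-D overdensity field."""
        n = delta.size
        modes = np.fft.rfft(delta) / n                   # Fourier coefficients
        k = 2.0 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
        pk = box_size * np.abs(modes) ** 2               # periodogram convention
        return k[1:], pk[1:]                             # drop the k = 0 mode

    rng = np.random.default_rng(2)
    field = rng.standard_normal(1024)                    # white-noise toy field
    k, pk = power_spectrum_1d(field, box_size=200.0)     # e.g. a 200 h^-1 Mpc box
    print(k[:3], pk[:3])
    ```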

  3. EXTENDING THE REALM OF OPTIMIZATION FOR COMPLEX SYSTEMS: UNCERTAINTY, COMPETITION, AND DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shanbhag, Uday V; Basar, Tamer; Meyn, Sean

    The research reported addressed the following topics: the development of analytical and algorithmic tools for distributed computation of Nash equilibria; synchronization in mean-field oscillator games, with an emphasis on learning and efficiency analysis; questions that combine learning and computation; questions involving stochastic and mean-field games; and modeling and control in the context of power markets.

  4. Personalization through the Application of Inverse Bayes to Student Modeling

    ERIC Educational Resources Information Center

    Lang, Charles William McLeod

    2015-01-01

    Personalization, the idea that teaching can be tailored to each student's needs, has been a goal of the educational enterprise for at least 2,500 years (Regian, Shute, & Shute, 2013, p.2). Recently, personalization has picked up speed with the advent of mobile computing, the Internet, and increases in computer processing power. These changes…

  5. Optimal design of a combustion chamber of gas turbine engine by a Combustion chamber 1D-2D computer program

    NASA Astrophysics Data System (ADS)

    Aleksandrov, Y. B.; Mingazov, B. G.

    2017-09-01

    The paper presents a method for the modeling and optimization of processes in combustion chambers of gas turbine engines using a computer program developed by a team at the Department of Jet Engines and Power Plants (DJEPP) of the Technical University named after A. N. Tupolev (KNRTU-KAI).

  6. KINETICS OF LOW SOURCE REACTOR STARTUPS. PART II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurwitz, H. Jr.; MacMillan, D.B.; Smith, J.H.

    1962-06-01

    A technique is described for computing the probability distribution of the power level during a low source reactor startup. The technique uses a mathematical model for the time-dependent probability distribution of neutron and precursor concentrations, with a finite neutron lifetime, one group of delayed neutron precursors, and no spatial dependence. Results obtained with the technique are given. (auth)

  7. The world as viewed by and with unpaired electrons.

    PubMed

    Eaton, Sandra S; Eaton, Gareth R

    2012-10-01

    Recent advances in electron paramagnetic resonance (EPR) include capabilities for applications to areas as diverse as archeology, beer shelf life, biological structure, dosimetry, in vivo imaging, molecular magnets, and quantum computing. Enabling technologies include multifrequency continuous wave, pulsed, and rapid scan EPR. Interpretation is enhanced by increasingly powerful computational models. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. The Adoption of Grid Computing Technology by Organizations: A Quantitative Study Using Technology Acceptance Model

    ERIC Educational Resources Information Center

    Udoh, Emmanuel E.

    2010-01-01

    Advances in grid technology have enabled some organizations to harness enormous computational power on demand. However, the prediction of widespread adoption of the grid technology has not materialized despite the obvious grid advantages. This situation has encouraged intense efforts to close the research gap in the grid adoption process. In this…

  9. Hierarchical, parallel computing strategies using component object model for process modelling responses of forest plantations to interacting multiple stresses

    Treesearch

    J. G. Isebrands; G. E. Host; K. Lenz; G. Wu; H. W. Stech

    2000-01-01

    Process models are powerful research tools for assessing the effects of multiple environmental stresses on forest plantations. These models are driven by interacting environmental variables and often include genetic factors necessary for assessing forest plantation growth over a range of different site, climate, and silvicultural conditions. However, process models are...

  10. Towards Effective Clustering Techniques for the Analysis of Electric Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Emilie A.; Cotilla Sanchez, Jose E.; Halappanavar, Mahantesh

    2013-11-30

    Clustering is an important data analysis technique with numerous applications in the analysis of electric power grids. Standard clustering techniques are oblivious to the rich structural and dynamic information available for power grids. Therefore, by exploiting the inherent topological and electrical structure in the power grid data, we propose new methods for clustering with applications to model reduction, locational marginal pricing, phasor measurement unit (PMU or synchrophasor) placement, and power system protection. We focus our attention on model reduction for analysis based on time-series information from synchrophasor measurement devices, and spectral techniques for clustering. By comparing different clustering techniques on two instances of realistic power grids we show that the solutions are related and therefore one could leverage that relationship for a computational advantage. Thus, by contrasting different clustering techniques we make a case for exploiting structure inherent in the data with implications for several domains including power systems.
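
    As a concrete illustration of the spectral techniques mentioned above, the sketch below clusters a graph through the low eigenvectors of its Laplacian. It is a generic toy, not the authors' method: electrical-distance weights and synchrophasor time-series features are out of scope, and the adjacency matrix is an assumed input.

    ```python
    import numpy as np

    def spectral_clusters(adjacency, n_clusters, iters=20, seed=1):
        """Embed nodes with the lowest Laplacian eigenvectors, then run a naive k-means."""
        laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
        _, eigvecs = np.linalg.eigh(laplacian)
        emb = eigvecs[:, 1:n_clusters + 1]              # skip the trivial constant eigenvector
        rng = np.random.default_rng(seed)
        centers = emb[rng.choice(len(emb), n_clusters, replace=False)]
        for _ in range(iters):                          # a few Lloyd iterations suffice for a demo
            labels = np.argmin(((emb[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            for c in range(n_clusters):
                if np.any(labels == c):
                    centers[c] = emb[labels == c].mean(axis=0)
        return labels
    ```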

  11. Analyzing Power Supply and Demand on the ISS

    NASA Technical Reports Server (NTRS)

    Thomas, Justin; Pham, Tho; Halyard, Raymond; Conwell, Steve

    2006-01-01

    Station Power and Energy Evaluation Determiner (SPEED) is a Java application program for analyzing the supply and demand aspects of the electrical power system of the International Space Station (ISS). SPEED can be executed on any computer that supports version 1.4 or a subsequent version of the Java Runtime Environment. SPEED includes an analysis module, denoted the Simplified Battery Solar Array Model, which is a simplified engineering model of the ISS primary power system. This simplified model makes it possible to perform analyses quickly. SPEED also includes a user-friendly graphical-interface module, an input file system, a parameter-configuration module, an analysis-configuration-management subsystem, and an output subsystem. SPEED responds to input information on trajectory, shadowing, attitude, and pointing in either a state-of-charge mode or a power-availability mode. In the state-of-charge mode, SPEED calculates battery state-of-charge profiles, given a time-varying power-load profile. In the power-availability mode, SPEED determines the time-varying total available solar array and/or battery power output, given a minimum allowable battery state of charge.
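
    The state-of-charge mode lends itself to a compact sketch. The following hypothetical integrator mirrors the described behavior (battery state of charge from a time-varying load and solar-array profile); the function and parameter names are illustrative and are not SPEED's actual interfaces.

    ```python
    def soc_profile(load_w, solar_w, capacity_wh, soc0=1.0, dt_h=1.0 / 60.0, eff=0.9):
        """Integrate battery state of charge over paired load/solar power samples."""
        soc, out = soc0, []
        for load, solar in zip(load_w, solar_w):
            net = solar - load                           # surplus charges, deficit discharges
            if net >= 0.0:
                soc += eff * net * dt_h / capacity_wh    # charging, with efficiency penalty
            else:
                soc += net * dt_h / (eff * capacity_wh)  # discharging draws extra for losses
            soc = min(max(soc, 0.0), 1.0)
            out.append(soc)
        return out
    ```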

  12. An adaptive model for vanadium redox flow battery and its application for online peak power estimation

    NASA Astrophysics Data System (ADS)

    Wei, Zhongbao; Meng, Shujuan; Tseng, King Jet; Lim, Tuti Mariana; Soong, Boon Hee; Skyllas-Kazacos, Maria

    2017-03-01

    An accurate battery model is the prerequisite for reliable state estimation of the vanadium redox battery (VRB). As the battery model parameters are time varying with operating condition variation and battery aging, the common methods, in which model parameters are empirical or prescribed offline, lack accuracy and robustness. To address this issue, this paper proposes an online adaptive battery model to reproduce the VRB dynamics accurately. The model parameters are identified online with both the recursive least squares (RLS) and the extended Kalman filter (EKF). A performance comparison shows that the RLS is superior with respect to modeling accuracy, convergence, and computational complexity. Based on the online identified battery model, an adaptive peak power estimator that incorporates the constraints of the voltage limit, the SOC limit, and the design limit on current is proposed to fully exploit the potential of the VRB. Experiments are conducted on a lab-scale VRB system and the proposed peak power estimator is verified with a specifically designed "two-step verification" method. It is shown that different constraints dominate the allowable peak power at different stages of cycling. The influence of the prediction time horizon on the peak power is also analyzed.
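
    The recursive least squares identifier favored in this comparison is standard and compact enough to sketch. The update below is the textbook RLS with a forgetting factor; the VRB equivalent-circuit regressor itself is not reproduced, so the choice of regressor vector is left to the caller.

    ```python
    import numpy as np

    class RecursiveLeastSquares:
        """Textbook RLS with forgetting factor lam (0 < lam <= 1)."""
        def __init__(self, n_params, lam=0.99):
            self.theta = np.zeros(n_params)          # parameter estimate
            self.P = 1e3 * np.eye(n_params)          # inverse-information (covariance) matrix
            self.lam = lam

        def update(self, phi, y):
            """phi: regressor vector; y: measured output (e.g., terminal voltage)."""
            P_phi = self.P @ phi
            gain = P_phi / (self.lam + phi @ P_phi)
            self.theta = self.theta + gain * (y - phi @ self.theta)
            self.P = (self.P - np.outer(gain, P_phi)) / self.lam
            return self.theta
    ```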

  13. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU

    PubMed Central

    Xia, Yong; Zhang, Henggui

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This poses a major challenge for traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not readily available owing to their cost. GPUs, as a parallel computing environment, therefore provide an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: the single-cell model (a system of ordinary differential equations) and the diffusion term of the monodomain model (a partial differential equation). This decoupling enabled realization of the parallel GPU algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup compared with a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations. PMID:26581957

  14. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.

    PubMed

    Xia, Yong; Wang, Kuanquan; Zhang, Henggui

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This poses a major challenge for traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not readily available owing to their cost. GPUs, as a parallel computing environment, therefore provide an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: the single-cell model (a system of ordinary differential equations) and the diffusion term of the monodomain model (a partial differential equation). This decoupling enabled realization of the parallel GPU algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup compared with a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.
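
    The ODE/PDE decoupling described in both records above is essentially operator splitting, sketched below on the CPU with NumPy as a stand-in for the GPU kernels. The FitzHugh-Nagumo right-hand side is a placeholder cell model, not the sheep atrial ionic model used in the study.

    ```python
    import numpy as np

    def fitzhugh_nagumo(v, w, a=0.1, eps=0.01):
        """Placeholder two-variable cell model; returns (dv, dw)."""
        return v * (v - a) * (1.0 - v) - w, eps * (v - 0.5 * w)

    def monodomain_step(v, w, laplacian, dt, d_coeff, cell_rhs=fitzhugh_nagumo):
        """One operator-split step: pointwise cell ODEs, then the explicit diffusion term."""
        dv_ion, dw = cell_rhs(v, w)                  # step 1: cell ODEs, one thread per cell on a GPU
        v = v + dt * dv_ion
        w = w + dt * dw
        v = v + dt * d_coeff * (laplacian @ v)       # step 2: PDE diffusion over the tissue grid
        return v, w
    ```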

  15. Turbulence modeling of free shear layers for high performance aircraft

    NASA Technical Reports Server (NTRS)

    Sondak, Douglas

    1993-01-01

    In many flowfield computations, accuracy of the turbulence model employed is frequently a limiting factor in the overall accuracy of the computation. This is particularly true for complex flowfields such as those around full aircraft configurations. Free shear layers such as wakes, impinging jets (in V/STOL applications), and mixing layers over cavities are often part of these flowfields. Although flowfields have been computed for full aircraft, the memory and CPU requirements for these computations are often excessive. Additional computer power is required for multidisciplinary computations such as coupled fluid dynamics and conduction heat transfer analysis. Massively parallel computers show promise in alleviating this situation, and the purpose of this effort was to adapt and optimize CFD codes to these new machines. The objective of this research effort was to compute the flowfield and heat transfer for a two-dimensional jet impinging normally on a cool plate. The results of this research effort were summarized in an AIAA paper titled 'Parallel Implementation of the k-epsilon Turbulence Model'. Appendix A contains the full paper.

  16. Electrostatic Precipitator (ESP) TRAINING MANUAL

    EPA Science Inventory

    The manual assists engineers in using a computer program, the ESPVI 4.0W, that models all elements of an electrostatic precipitator (ESP). The program is a product of the Electric Power Research Institute and runs in the Windows environment. Once an ESP is accurately modeled, the...

  17. A computer model for high-latitude phase scintillation based on wideband satellite data from Poker Flat

    NASA Astrophysics Data System (ADS)

    Fremouw, E. J.; Lansinger, J. M.

    1981-02-01

    A mathematical model has been developed for describing plasma-density irregularities responsible for radiowave scintillation produced in the auroral ionosphere, and the model has been committed to an applications-oriented computer code, WBMOD. The model characterizes the three-dimensional configuration, gradient sharpness, and height-integrated strength of irregularities represented by a power-law spatial spectrum as functions of geomagnetic latitude, time of day, sunspot number, and planetary geomagnetic activity index. Program WBMOD permits calculation of the power-law index and spectral strength (at a fluctuation frequency of 1 Hz) of phase scintillation, together with scintillation indices (variances) for phase and intensity, using a phase-screen scattering theory. The model has been calibrated and iteratively tested against phase-scintillation data from the DNA Wideband Satellite Experiment, collected at Poker Flat, Alaska. It does not account for seasonal variations in high-latitude scintillation observed in other longitude sectors. The program contains a model for middle-latitude and equatorial irregularities as well as for auroral latitudes, but only the latter has been tested extensively against high-quality scintillation data.

  18. ENEL overall PWR plant models and neutronic integrated computing systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedroni, G.; Pollachini, L.; Vimercati, G.

    1987-01-01

    To support the design activity of the Italian nuclear energy program for the construction of pressurized water reactors, the Italian Electricity Board (ENEL) needs to verify the design as a whole (that is, the nuclear steam supply system and balance of plant) both in steady-state operation and in transient. The ENEL has therefore developed two computer models to analyze both operational and incidental transients. The models, named STRIP and SFINCS, perform the analysis of the nuclear as well as the conventional part of the plant (the control system being properly taken into account). The STRIP model has been developed by means of the French (Electricite de France) modular code SICLE, while SFINCS is based on the Italian (ENEL) modular code LEGO. STRIP validation was performed with respect to Fessenheim French power plant experimental data. Two significant transients were chosen: load step and total load rejection. SFINCS validation was performed with respect to Saint-Laurent French power plant experimental data and also by comparing the SFINCS-STRIP responses.

  19. Ultra-Short-Term Wind Power Prediction Using a Hybrid Model

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    This paper develops and applies a hybrid model of two data-analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking Northeast China electricity demand as an example. The data were obtained from the historical records of wind power from an offshore region and from a wind farm of the wind power plant in the area. The WPP is achieved in two stages: first, the ratios of wind power are forecasted using the proposed hybrid method, and then these ratios are transformed to obtain the forecasted values. The hybrid model combines the persistence method, MLR, and LS. The proposed method includes two prediction types, multi-point prediction and single-point prediction. The WPP is tested by applying different models such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA), and artificial neural network (ANN). By comparing the results of the above models, the validity of the proposed hybrid model is confirmed in terms of error and correlation coefficient. The comparison of results confirmed that the proposed method works effectively. Additionally, forecasting errors were computed and compared to improve the understanding of how to depict highly variable WPP and the correlations between actual and predicted wind power.

  20. High-Frequency Switching Transients and Power Loss Estimation in Electric Drive Systems that Utilize Wide-Bandgap Semiconductors

    NASA Astrophysics Data System (ADS)

    Fulani, Olatunji T.

    Development of electric drive systems for transportation and industrial applications is rapidly seeing the use of wide-bandgap (WBG) based power semiconductor devices. These devices, such as SiC MOSFETs, enable high switching frequencies and are becoming the preferred choice in inverters because of their lower switching losses and higher allowable operating temperatures. Due to the much shorter turn-on and turn-off times and correspondingly larger output voltage edge rates, traditional models and methods previously used to estimate inverter and motor power losses, based upon a triangular power loss waveform, are no longer justifiable from a physical perspective. In this thesis, more appropriate models and a power loss calculation approach are described with the goal of more accurately estimating the power losses in WBG-based electric drive systems. Sine-triangle modulation with third harmonic injection is used to control the switching of the inverter. The motor and inverter models are implemented using Simulink and computer studies are shown illustrating the application of the new approach.

  1. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  2. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
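
    In plain Python terms, the patented idea maps naturally onto barrier semantics: lower power on entering the blocking operation, restore it once every node has entered. The sketch below uses threads in place of compute nodes, and `set_power` is a hypothetical platform hook standing in for hardware throttling; it is an illustration of the idea, not the patent's implementation.

    ```python
    import threading

    def synchronize_with_power_scaling(barrier: threading.Barrier, set_power):
        """Run by each 'node': reduce power for the blocking operation, restore after."""
        set_power("reduced")     # this node has begun the blocking operation
        barrier.wait()           # returns only after all nodes have begun it
        set_power("nominal")     # all nodes have arrived; restore the reduced components
    ```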

  3. Ocean-Atmosphere Coupled Model Simulations of Precipitation in the Central Andes

    NASA Technical Reports Server (NTRS)

    Nicholls, Stephen D.; Mohr, Karen I.

    2015-01-01

    The meridional extent and complex orography of the South American continent contribute to a wide diversity of climate regimes, ranging from hyper-arid deserts to tropical rainforests to sub-polar highland regions. In addition, South American meteorology and climate are made further complicated by ENSO, a powerful coupled ocean-atmosphere phenomenon. Modelling studies in this region have typically resorted to either atmospheric mesoscale models or atmosphere-ocean coupled global climate models. The former offer high spatial resolution and computational efficiency but typically lack an interactive ocean, whereas the latter offer full physics and ocean-atmosphere coupling but lack adequate spatial and temporal resolution to resolve the complex orography and explicitly simulate precipitation. Explicit simulation of precipitation is vital in the Central Andes, where rainfall rates are light (0.5-5 mm hr-1), there is strong seasonality, and most precipitation is associated with weak mesoscale-organized convection. Recent increases in both computational power and model development have led to the advent of coupled ocean-atmosphere mesoscale models for both weather and climate study applications. These modelling systems, while computationally expensive, include two-way ocean-atmosphere coupling, high resolution, and explicit simulation of precipitation. In this study, we use the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model, a fully-coupled mesoscale atmosphere-ocean modeling system. Previous work has shown COAWST to reasonably simulate the entire 2003-2004 wet season (Dec-Feb), as validated against both satellite and model analysis data, when ECMWF interim analysis data were used for boundary conditions on a 27-9-km grid configuration (outer grid extent: 60.4S to 17.7N and 118.6W to 17.4W).

  4. Practical Methods for the Analysis of Voltage Collapse in Electric Power Systems: a Stationary Bifurcations Viewpoint.

    NASA Astrophysics Data System (ADS)

    Jean-Jumeau, Rene

    1993-03-01

    Voltage collapse (VC) is generally caused by either of two types of system disturbances: load variations and contingencies. In this thesis, we study VC resulting from load variations; this is termed static voltage collapse. The thesis approaches this type of voltage collapse in electrical power systems from a stationary bifurcations viewpoint, associating it with the occurrence of saddle node bifurcations (SNB) in the system. Approximate models are generically used in most VC analyses. We consider the validity of these models for the study of SNB and, thus, of voltage collapse. We justify the use of the saddle node bifurcation as a model for VC in power systems. In particular, we prove that this leads to the definition of a model and--since load demand is used as a parameter for that model--of a mode of parameterization of that model in order to represent actual power demand variations within the power system network. Ill-conditioning of the set of nonlinear equations defining a dynamical system is a generic occurrence near the SNB point. We suggest a reparameterization of the set of nonlinear equations that avoids this problem. A new indicator of the proximity of voltage collapse, the voltage collapse index (VCI), is developed. A new (n + 1)-dimensional set of characteristic equations for the computation of the exact SNB point, replacing the standard (2n + 1)-dimensional one, is presented for general parameter-dependent nonlinear dynamical systems. These results are then applied to electric power systems for the analysis and prediction of voltage collapse. The new methods offer the potential of faster computation and greater flexibility. For reasons of theoretical development and clarity, the preceding methodologies are developed under the assumption of the absence of constraints on the system parameters and states, and the full differentiability of the functions defining the power system model. In the latter part of this thesis, we relax these assumptions in order to develop a framework and new formulation for applying the tools previously developed to the analysis and prediction of voltage collapse in practical power system models, which include numerous constraints and discontinuities. Illustrations and numerical simulations throughout the thesis support our results.
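
    For reference, the standard (2n + 1)-dimensional characterization of a saddle node bifurcation mentioned above can be stated compactly from textbook bifurcation theory; the thesis's reduced (n + 1)-dimensional replacement is not reproduced here.

    ```latex
    % For \dot{x} = f(x, \lambda), f : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n,
    % an SNB point (x^*, \lambda^*) with null vector v satisfies:
    \begin{aligned}
      f(x^{*}, \lambda^{*}) &= 0,           && \text{(equilibrium, $n$ equations)} \\
      D_x f(x^{*}, \lambda^{*})\, v &= 0,   && \text{(singular Jacobian, $n$ equations)} \\
      \lVert v \rVert &= 1                  && \text{(normalization, $1$ equation)}
    \end{aligned}
    ```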

  5. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
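
    The core idea, replacing the dense QR/SVD solve of the damped normal equations with a short Krylov iteration, can be sketched in a few lines. The conjugate-gradient step below is generic; the authors' subspace recycling across damping parameters is not shown.

    ```python
    import numpy as np

    def lm_step_krylov(jacobian, residual, damping, n_iters=30):
        """Approximate the LM step (J^T J + mu I) dx = -J^T r with plain CG."""
        apply_A = lambda p: jacobian.T @ (jacobian @ p) + damping * p
        b = -jacobian.T @ residual
        x, r = np.zeros_like(b), b.copy()
        d = r.copy()
        for _ in range(n_iters):               # each iteration expands the Krylov subspace
            Ad = apply_A(d)
            alpha = (r @ r) / (d @ Ad)
            x = x + alpha * d
            r_new = r - alpha * Ad
            d = r_new + ((r_new @ r_new) / (r @ r)) * d
            r = r_new
        return x
    ```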

  6. Modeling Liquid Rocket Engine Atomization and Swirl/Coaxial Injectors

    DTIC Science & Technology

    2008-02-27

    [Abstract not indexed; only citation fragments survive, all by Yoon, S. S., and Heister, S. D.: "A Fully Nonlinear Model for Atomization of High-Speed Jets," Engineering Analysis with…, 47-61, 2004; …Power, V20, pp. 468-479, 2004; "Analytic Solutions for Computing Velocities Induced from Potential Vortex Ring…"; "Three Dimensional Flow Simulations in Recessed Region of a Coaxial Injector," J. Propulsion and Power, V21, No. 4, pp. 728-742.]

  7. A 3D TCAD simulation of a thermoelectric module configured for thermoelectric power generation, cooling and heating

    NASA Astrophysics Data System (ADS)

    Gould, C. A.; Shammas, N. Y. A.; Grainger, S.; Taylor, I.; Simpson, K.

    2012-06-01

    This paper documents the 3D modeling and simulation of a three couple thermoelectric module using the Synopsys Technology Computer Aided Design (TCAD) semiconductor simulation software. Simulation results are presented for thermoelectric power generation, cooling and heating, and successfully demonstrate the basic thermoelectric principles. The 3D TCAD simulation model of a three couple thermoelectric module can be used in the future to evaluate different thermoelectric materials, device structures, and improve the efficiency and performance of thermoelectric modules.

  8. A Systems Model for Power Technology Assessment

    NASA Technical Reports Server (NTRS)

    Hoffman, David J.

    2002-01-01

    A computer model is under continuing development at NASA Glenn Research Center that enables first-order assessments of space power technology. The model, an evolution of NASA Glenn's Array Design Assessment Model (ADAM), is an Excel workbook that consists of numerous spreadsheets containing power technology performance data and sizing algorithms. Underlying the model are a number of databases that contain default values for various power generation, energy storage, and power management and distribution component parameters. These databases are actively maintained by a team of systems analysts so that they contain state-of-the-art data as well as the most recent technology performance projections. Sizing of the power subsystems can be accomplished either by using an assumed mass specific power (W/kg) or energy (Wh/kg) or by a bottom-up calculation that accounts for individual component performance and masses. The power generation, energy storage, and power management and distribution subsystems are sized for given mission requirements for a baseline case and up to three alternatives. This allows four different power systems to be sized and compared using consistent assumptions and sizing algorithms. The component sizing models contained in the workbook are modular so that they can be easily maintained and updated. All significant input values have default values loaded from the databases that can be overwritten by the user. The default data and sizing algorithms for each of the power subsystems are described in some detail. The user interface and workbook navigational features are also discussed. Finally, an example study case that illustrates the model's capability is presented.
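
    The simpler of the two sizing paths, assumed specific power and specific energy, amounts to a couple of divisions. The sketch below is illustrative, with placeholder numbers rather than the model's database defaults.

    ```python
    def size_power_system(load_w, specific_power_w_per_kg, storage_wh, specific_energy_wh_per_kg):
        """First-order mass estimate: generation mass plus energy-storage mass."""
        return load_w / specific_power_w_per_kg + storage_wh / specific_energy_wh_per_kg

    # e.g., a 10 kW array at 80 W/kg plus 5 kWh of batteries at 100 Wh/kg -> 175 kg
    total_kg = size_power_system(10e3, 80.0, 5e3, 100.0)
    ```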

  9. Contributions of the stochastic shape wake model to predictions of aerodynamic loads and power under single wake conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doubrawa, P.; Barthelmie, R. J.; Wang, H.

    The contribution of wake meandering and shape asymmetry to load and power estimates is quantified by comparing aeroelastic simulations initialized with different inflow conditions: an axisymmetric base wake, an unsteady stochastic shape wake, and a large-eddy simulation with rotating actuator-line turbine representation. Time series of blade-root and tower base bending moments are analyzed. We find that meandering has a large contribution to the fluctuation of the loads. Moreover, considering the wake edge intermittence via the stochastic shape model improves the simulation of load and power fluctuations and of the fatigue damage equivalent loads. Furthermore, these results indicate that the stochastic shape wake simulator is a valuable addition to simplified wake models when seeking to obtain higher-fidelity computationally inexpensive predictions of loads and power.

  10. Contributions of the stochastic shape wake model to predictions of aerodynamic loads and power under single wake conditions

    DOE PAGES

    Doubrawa, P.; Barthelmie, R. J.; Wang, H.; ...

    2016-10-03

    The contribution of wake meandering and shape asymmetry to load and power estimates is quantified by comparing aeroelastic simulations initialized with different inflow conditions: an axisymmetric base wake, an unsteady stochastic shape wake, and a large-eddy simulation with rotating actuator-line turbine representation. Time series of blade-root and tower base bending moments are analyzed. We find that meandering has a large contribution to the fluctuation of the loads. Moreover, considering the wake edge intermittence via the stochastic shape model improves the simulation of load and power fluctuations and of the fatigue damage equivalent loads. Furthermore, these results indicate that the stochastic shape wake simulator is a valuable addition to simplified wake models when seeking to obtain higher-fidelity computationally inexpensive predictions of loads and power.

  11. TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Nelson, J.; Jones, N.; Ames, D. P.

    2015-12-01

    Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leveraging these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, which have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.

  12. Profile modification computations for LHCD experiments on PBX-M using the TSC/LSC model

    NASA Astrophysics Data System (ADS)

    Kaita, R.; Ignat, D. W.; Jardin, S. C.; Okabayashi, M.; Sun, Y. C.

    1996-02-01

    The TSC-LSC computational model of the dynamics of lower hybrid current drive has been exercised extensively in comparison with data from a Princeton Beta Experiment-Modification (PBX-M) discharge where the measured q(0) attained values slightly above unity. Several significant, but plausible, assumptions had to be introduced to keep the computation from behaving pathologically over time, producing singular profiles of plasma current density and q. Addition of a heuristic current diffusion estimate, or more exactly, a smoothing of the rf-driven current with a diffusion-like equation, greatly improved the behavior of the computation, and brought theory and measurement into reasonable agreement. The model was then extended to longer pulse lengths and higher powers to investigate performance to be expected in future PBX-M current profile modification experiments.

  13. Pressure Loss Predictions of the Reactor Simulator Subsystem at NASA GRC

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.

    2015-01-01

    Testing of the Fission Power System (FPS) Technology Demonstration Unit (TDU) is being conducted at NASA GRC. The TDU consists of three subsystems: the Reactor Simulator (RxSim), the Stirling Power Conversion Unit (PCU), and the Heat Exchanger Manifold (HXM). An Annular Linear Induction Pump (ALIP) is used to drive the working fluid. A preliminary version of the TDU system (which excludes the PCU for now), is referred to as the RxSim subsystem and was used to conduct flow tests in Vacuum Facility 6 (VF 6). In parallel, a computational model of the RxSim subsystem was created based on the CAD model and was used to predict loop pressure losses over a range of mass flows. This was done to assess the ability of the pump to meet the design intent mass flow demand. Measured data indicates that the pump can produce 2.333 kg/sec of flow, which is enough to supply the RxSim subsystem with a nominal flow of 1.75 kg/sec. Computational predictions indicated that the pump could provide 2.157 kg/sec (using the Spalart-Allmaras turbulence model), and 2.223 kg/sec (using the k-epsilon turbulence model). The computational error of the predictions for the available mass flow is -0.176 kg/sec (with the S-A turbulence model) and -0.110 kg/sec (with the k-epsilon turbulence model) when compared to measured data.

  14. Flood Forecasting in Wales: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    How, Andrew; Williams, Christopher

    2015-04-01

    With steep, fast-responding river catchments, exposed coastal reaches with large tidal ranges, and large population densities in some of the most at-risk areas, flood forecasting in Wales presents many varied challenges. Utilising advances in computing power and learning from best practice within the United Kingdom and abroad have brought significant improvements in recent years; however, many challenges still remain. Developments in computing and increased processing power come with a significant price tag; greater numbers of data sources and ensemble feeds bring a better understanding of uncertainty, but the wealth of data needs careful management to ensure a clear message of risk is disseminated; new modelling techniques utilise better and faster computation, but lack the history of record and experience gained from the continued use of more established forecasting models. As a flood forecasting team we work to develop coastal and fluvial forecasting models, set them up for operational use and manage the duty role that runs the models in real time. An overview of our current operational flood forecasting system will be presented, along with a discussion of some of the solutions we have in place to address the challenges we face. These include: • real-time updating of fluvial models • rainfall forecasting verification • ensemble forecast data • longer range forecast data • contingency models • offshore to nearshore wave transformation • calculation of wave overtopping

  15. Structure-based capacitance modeling and power loss analysis for the latest high-performance slant field-plate trench MOSFET

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kenya; Sudo, Masaki; Omura, Ichiro

    2018-04-01

    Field-plate trench MOSFETs (FP-MOSFETs), with the features of ultralow on-resistance and very low gate-drain charge, are currently the mainstream devices for high-performance applications, and their advancement as low-voltage silicon power devices is continuing. However, owing to their structure, their output capacitance (Coss), which leads to the main power loss, remains a problem, especially in megahertz switching. In this study, we propose a structure-based capacitance model of FP-MOSFETs for calculating power loss easily under various conditions. Appropriate equations were modeled for the Coss curves as three divided components. The output charge (Qoss) and stored energy (Eoss) calculated using the model corresponded well to technology computer-aided design (TCAD) simulation, and we validated the accuracy of the model quantitatively. In the power loss analysis of FP-MOSFETs, turn-off loss was sufficiently suppressed; however, mainly the Qoss loss increased depending on switching frequency. This analysis reveals that Qoss may become a significant issue in next-generation high-efficiency FP-MOSFETs.
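
    The two quantities compared against TCAD follow directly from the capacitance curve, Qoss(V) = integral of Coss dV and Eoss(V) = integral of V*Coss dV, as in this sketch over sampled data; the paper's three-component closed-form curve model is not reproduced.

    ```python
    import numpy as np

    def charge_and_energy(v, c_oss):
        """Trapezoidal integration of a sampled Coss(V) curve (volts, farads)."""
        q_oss = np.trapz(c_oss, v)          # output charge, coulombs
        e_oss = np.trapz(v * c_oss, v)      # stored energy, joules
        return q_oss, e_oss
    ```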

  16. Modeling driver behavior in a cognitive architecture.

    PubMed

    Salvucci, Dario D

    2006-01-01

    This paper explores the development of a rigorous computational model of driver behavior in a cognitive architecture--a computational framework with underlying psychological theories that incorporate basic properties and limitations of the human system. Computational modeling has emerged as a powerful tool for studying the complex task of driving, allowing researchers to simulate driver behavior and explore the parameters and constraints of this behavior. An integrated driver model developed in the ACT-R (Adaptive Control of Thought-Rational) cognitive architecture is described that focuses on the component processes of control, monitoring, and decision making in a multilane highway environment. This model accounts for the steering profiles, lateral position profiles, and gaze distributions of human drivers during lane keeping, curve negotiation, and lane changing. The model demonstrates how cognitive architectures facilitate understanding of driver behavior in the context of general human abilities and constraints and how the driving domain benefits cognitive architectures by pushing model development toward more complex, realistic tasks. The model can also serve as a core computational engine for practical applications that predict and recognize driver behavior and distraction.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Xiao; Blazek, Jonathan A.; McEwen, Joseph E.

    Cosmological perturbation theory is a powerful tool to predict the statistics of large-scale structure in the weakly non-linear regime, but even at 1-loop order it results in computationally expensive mode-coupling integrals. Here we present a fast algorithm for computing 1-loop power spectra of quantities that depend on the observer's orientation, thereby generalizing the FAST-PT framework (McEwen et al., 2016) that was originally developed for scalars such as the matter density. This algorithm works for an arbitrary input power spectrum and substantially reduces the time required for numerical evaluation. We apply the algorithm to four examples: intrinsic alignments of galaxies in the tidal torque model; the Ostriker-Vishniac effect; the secondary CMB polarization due to baryon flows; and the 1-loop matter power spectrum in redshift space. Code implementing this algorithm and these applications is publicly available at https://github.com/JoeMcEwen/FAST-PT.
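
    The decomposition step at the heart of the FAST-PT approach, an FFT in log k, can be sketched as follows; the windowing and zero-padding used by the actual code are simplified away, so treat this only as an outline of the idea.

    ```python
    import numpy as np

    def power_law_decomposition(k, pk, nu=-2.0):
        """On a log-spaced grid, express P(k) as a sum of complex power laws."""
        dlnk = np.diff(np.log(k))
        assert np.allclose(dlnk, dlnk[0])           # requires log-spaced sampling
        biased = pk * k ** (-nu)                    # remove an overall power-law trend
        c_m = np.fft.rfft(biased) / biased.size     # Fourier series in ln k
        eta_m = 2.0 * np.pi * np.arange(c_m.size) / (dlnk[0] * biased.size)
        return c_m, eta_m                           # P(k) ~ sum_m c_m k^(nu + i eta_m)
    ```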

  18. EPPRD: An Efficient Privacy-Preserving Power Requirement and Distribution Aggregation Scheme for a Smart Grid.

    PubMed

    Zhang, Lei; Zhang, Jing

    2017-08-07

    A Smart Grid (SG) facilitates bidirectional demand-response communication between individual users and power providers with high computation and communication performance, but it also brings the risk of leaking users' private information. Therefore, improving the efficiency of individual power requirement and distribution to ensure communication reliability while preserving user privacy is a new challenge for SG. To address this issue, we propose an efficient and privacy-preserving power requirement and distribution aggregation scheme (EPPRD) based on a hierarchical communication architecture. In the proposed scheme, an efficient encryption and authentication mechanism is proposed to better fit each individual demand-response situation. Through extensive analysis and experiment, we demonstrate how the EPPRD resists various security threats and preserves user privacy while satisfying the individual requirement in a semi-honest model; it involves less communication overhead and computation time than the existing competing schemes.

  19. EPPRD: An Efficient Privacy-Preserving Power Requirement and Distribution Aggregation Scheme for a Smart Grid

    PubMed Central

    Zhang, Lei; Zhang, Jing

    2017-01-01

    A Smart Grid (SG) facilitates bidirectional demand-response communication between individual users and power providers with high computation and communication performance, but it also brings the risk of leaking users' private information. Therefore, improving the efficiency of individual power requirement and distribution to ensure communication reliability while preserving user privacy is a new challenge for SG. To address this issue, we propose an efficient and privacy-preserving power requirement and distribution aggregation scheme (EPPRD) based on a hierarchical communication architecture. In the proposed scheme, an efficient encryption and authentication mechanism is proposed to better fit each individual demand-response situation. Through extensive analysis and experiment, we demonstrate how the EPPRD resists various security threats and preserves user privacy while satisfying the individual requirement in a semi-honest model; it involves less communication overhead and computation time than the existing competing schemes. PMID:28783122
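
    For intuition about privacy-preserving aggregation in general, meters can add random shares that cancel in the sum, so the aggregator learns only the total demand. The following is a generic masking toy, not the EPPRD construction:

    ```python
    import secrets

    MOD = 2**61 - 1  # arbitrary large modulus for the toy scheme

    def masked_readings(readings):
        """Each meter adds a random share; the shares sum to zero mod MOD."""
        masks = [secrets.randbelow(MOD) for _ in readings[:-1]]
        masks.append((-sum(masks)) % MOD)
        return [(r + m) % MOD for r, m in zip(readings, masks)]

    def aggregate(masked):
        return sum(masked) % MOD     # masks cancel, leaving the plain total

    assert aggregate(masked_readings([5, 7, 11])) == 23
    ```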

  20. A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0

    PubMed Central

    Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.

    2014-01-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072

  1. The universal robot

    NASA Technical Reports Server (NTRS)

    Moravec, Hans

    1993-01-01

    Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in the next decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data - act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates, and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended some of its inherited limitations and so transformed itself into something quite new.

  2. The universal robot

    NASA Astrophysics Data System (ADS)

    Moravec, Hans

    1993-12-01

    Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in the next decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data - act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates, and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended some of its inherited limitations and so transformed itself into something quite new.

  3. Fuzzy Logic Based Controller for a Grid-Connected Solid Oxide Fuel Cell Power Plant.

    PubMed

    Chatterjee, Kalyan; Shankar, Ravi; Kumar, Amit

    2014-10-01

    This paper describes a mathematical model of a solid oxide fuel cell (SOFC) power plant integrated into a multimachine power system. The utilization factor of the fuel stack is maintained at steady state by tuning the fuel valve in the fuel processor at a rate proportional to the current drawn from the fuel stack. A suitable fuzzy logic controller is used for the overall system, its objective being to control the current drawn by the power conditioning unit and meet a desired output power demand. The proposed control scheme is verified through computer simulations.
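
    A toy Mamdani-style rule evaluation conveys the flavor of such a controller; the membership functions, rule table, and output singletons below are illustrative only, not the paper's design.

    ```python
    def fuzzy_current_step(util_error):
        """Map the utilization-factor error to a normalized current-command change."""
        def tri(x, a, b, c):                            # triangular membership function
            return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        strengths = {                                   # rule: if error is NEG/ZERO/POS ...
            -1.0: tri(util_error, -1.0, -0.5, 0.0),     # ... decrease the current command
             0.0: tri(util_error, -0.5, 0.0, 0.5),      # ... hold
            +1.0: tri(util_error, 0.0, 0.5, 1.0),       # ... increase the current command
        }
        total = sum(strengths.values()) or 1.0          # guard against all-zero firing
        return sum(out * w for out, w in strengths.items()) / total
    ```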

  4. Refrigeration for Cryogenic Sensors

    NASA Technical Reports Server (NTRS)

    Gasser, M. G. (Editor)

    1983-01-01

    Research in cryogenically cooled refrigerators is discussed. Low-power Stirling cryocoolers; spacecraft-borne long-life units; heat exchangers; performance tests; split-stirling, linear-resonant, cryogenic refrigerators; and computer models are among the topics discussed.

  5. Framework Resources Multiply Computing Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  6. CFD code calibration and inlet-fairing effects on a 3D hypersonic powered-simulation model

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Tatum, Kenneth E.

    1993-01-01

    A three-dimensional (3D) computational study has been performed addressing issues related to the wind tunnel testing of a hypersonic powered-simulation model. The study consisted of three objectives. The first objective was to calibrate a state-of-the-art computational fluid dynamics (CFD) code in its ability to predict hypersonic powered-simulation flows by comparing CFD solutions with experimental surface pressure data. Aftbody lower surface pressures were well predicted, but lower surface wing pressures were less accurately predicted. The second objective was to determine the 3D effects on the aftbody created by fairing over the inlet; this was accomplished by comparing the CFD solutions of two closed-inlet powered configurations with a flowing-inlet powered configuration. Although results at four freestream Mach numbers indicate that the exhaust plume tends to isolate the aftbody surface from most forebody flowfield differences, a smooth inlet fairing provides the least aftbody force and moment variation compared to a flowing inlet. The final objective was to predict and understand the 3D characteristics of exhaust plume development at selected points on a representative flight path. Results showed a dramatic effect of plume expansion onto the wings as the freestream Mach number and corresponding nozzle pressure ratio are increased.

  7. CFD Code Calibration and Inlet-Fairing Effects On a 3D Hypersonic Powered-Simulation Model

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Tatum, Kenneth E.

    1993-01-01

    A three-dimensional (3D) computational study has been performed addressing issues related to the wind tunnel testing of a hypersonic powered-simulation model. The study consisted of three objectives. The first objective was to calibrate a state-of-the-art computational fluid dynamics (CFD) code in its ability to predict hypersonic powered-simulation flows by comparing CFD solutions with experimental surface pressure data. Aftbody lower surface pressures were well predicted, but lower surface wing pressures were less accurately predicted. The second objective was to determine the 3D effects on the aftbody created by fairing over the inlet; this was accomplished by comparing the CFD solutions of two closed-inlet powered configurations with a flowing-inlet powered configuration. Although results at four freestream Mach numbers indicate that the exhaust plume tends to isolate the aftbody surface from most forebody flowfield differences, a smooth inlet fairing provides the least aftbody force and moment variation compared to a flowing inlet. The final objective was to predict and understand the 3D characteristics of exhaust plume development at selected points on a representative flight path. Results showed a dramatic effect of plume expansion onto the wings as the freestream Mach number and corresponding nozzle pressure ratio are increased.

  8. Three dimensional numerical prediction of icing related power and energy losses on a wind turbine

    NASA Astrophysics Data System (ADS)

    Sagol, Ece

    Regions of Canada experience harsh winter conditions that may persist for several months. Consequently, wind turbines located in these regions are exposed to ice accretion and its adverse effects, from loss of power to ceasing to function altogether. Since the weather-related annual energy production loss of a turbine may be as high as 16% of the nominal production for Canada, estimating these losses before the construction of a wind farm is essential for investors. A literature survey shows that most icing prediction methods and codes are developed for aircraft, and, as this information is mostly considered corporate intellectual property, it is not accessible to researchers in other domains. Moreover, aircraft icing is quite different from wind turbine icing. Wind turbines are exposed to icing conditions for much longer periods than aircraft, perhaps for several days in a harsh climate, whereas the maximum length of exposure of an aircraft is about 3-4 hours. In addition, wind turbine blades operate at subsonic speeds, at lower Reynolds numbers than aircraft, and their physical characteristics are different. A few icing codes have been developed for wind turbine icing nevertheless. However, they are either in 2D, which does not consider the 3D characteristics of the flow field, or they focus on simulating each rotation in a time-dependent manner, which is not practical for computing long hours of ice accretion. Our objective in this thesis is to develop a 3D numerical methodology to predict rime ice shape and the power loss of a wind turbine as a function of wind farm icing conditions. In addition, we compute the Annual Energy Production of a sample turbine under both clean and icing conditions. The sample turbine we have selected is the NREL Phase VI experimental wind turbine installed on a wind farm in Sweden, the icing events at which have been recorded and published. The proposed method is based on computing and validating the clean performance of the turbine, and then computing the ice shape and iced blade performance, under icing conditions. The first step is to compute the performance of the NREL Phase VI using the commercial ANSYS FLUENT computational fluid dynamics (CFD) tool. In order to reduce the computational cost, we use a rotating reference frame model which computes the flow properties as time-averaged quantities. A grid sensitivity study has been performed to eliminate the effect of mesh on the results. Of the existing models for characterizing turbulence, we have selected the two-equation SST k-omega model. In general, the computed pressure coefficients and bending moment have shown good agreement with the experimental data, particularly at pre-stall speeds. Although the torque deviates from the experimental data, the trend with respect to the wind speed is similar. After the clean power curve has been computed, collection efficiency, which is directly proportional to the rate of icing of a surface, is analyzed. A multiphase analysis, for the air and water phases, is necessary to compute the rate of accumulation of the droplets on the blade surfaces. We study two different approaches that are found in the literature -- Eulerian and Lagrangian -- and determine the most suitable one for our study case. The former applies the governing equations to the liquid phase, while the latter computes the trajectory of each droplet present in the air.
We eventually decided on the Eulerian model for our study, as it can be adapted to handle large and complex meshes better than the Lagrangian model. This step is validated on a NACA 0012 airfoil, as experimental data for 3D flows are not available in the literature. The ice accretion on the sample wind turbine blades is computed using both a Quasi-3D and a Fully-3D method, which have a similar theoretical background, but a different order of modeling. In the former, all the steps are carried out in 2D and the overall power is computed using the Blade Element Momentum method, while the latter performs all the steps in the 3D domain. The Fully-3D method yields more accurate predictions for a clean blade. For icing conditions, a validation is not possible, owing to the lack of experimental data. However, the two methods produce quite different results for the performance of the ice shape and the iced blade. A critical analysis of the results shows that, although the computational cost of the Fully-3D method is much higher, icing analyses in 2D may lack accuracy, because the ice shape and the related power loss are compromised by not considering the 3D features of rotational flow. While performing the CFD computations on the iced blade, the rough surface of the ice is smoothed to a degree, in order to prevent numerical instability and to keep the mesh size within a reasonable limit. However, roughness effects cannot be excluded altogether, as they contribute significantly to performance reduction. We consider roughness through a modification in the CFD code, and assess its effect on performance for the clean blade.
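
    To make the energy-loss bookkeeping concrete, here is a minimal sketch of how an annual energy production (AEP) comparison between clean and iced power curves can be set up: integrate each power curve against a site wind-speed distribution. The toy power curve, the assumed speed-dependent icing penalty, and the Weibull parameters are all illustrative, not values from the thesis.

    ```python
    import numpy as np

    v = np.arange(4.0, 26.0)                    # wind speeds, m/s
    p_clean = np.minimum(2000.0, 0.3 * v**3)    # toy power curve (kW), 2 MW rated
    p_iced = p_clean * (1.0 - 0.10 - 0.02 * (v - 4.0))  # assumed icing penalty

    def weibull_pdf(v, k=2.0, c=8.0):
        """Weibull wind-speed density with shape k and scale c (m/s)."""
        return (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

    def aep_kwh(power_kw, v, hours=8760.0):
        """AEP = hours x integral of P(v) * f(v) dv over the operating range."""
        return hours * np.trapz(power_kw * weibull_pdf(v), v)

    loss = 1.0 - aep_kwh(p_iced, v) / aep_kwh(p_clean, v)
    print(f"icing-related AEP loss: {100 * loss:.1f}%")
    ```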

  9. Verification of a VRF Heat Pump Computer Model in EnergyPlus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigusse, Bereket; Raustad, Richard

    2013-06-15

This paper provides verification results for the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides a quantitative comparison of full- and part-load performance against manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual-range bi-quadratic performance curves to represent capacity and Energy Input Ratio (EIR) as functions of indoor and outdoor air temperatures, and dual-range quadratic performance curves as functions of part-load ratio for modeling part-load performance. These performance curves are generated directly from manufacturer's published performance data. The verification compared the simulation output directly to manufacturer's performance data, and found that the dual-range equation-fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to those of equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
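
    A hedged sketch of how such dual-range equation-fit curves are typically evaluated: a bi-quadratic polynomial in indoor wet-bulb and outdoor dry-bulb temperature, with one coefficient set per outdoor-temperature range. The coefficients and the 5 °C range boundary below are invented for illustration; real coefficients are regressed from the manufacturer's tables.

    ```python
    def biquadratic(t_in, t_out, coeffs):
        """Evaluate a + b*Ti + c*Ti^2 + d*To + e*To^2 + f*Ti*To."""
        a, b, c, d, e, f = coeffs
        return a + b * t_in + c * t_in**2 + d * t_out + e * t_out**2 + f * t_in * t_out

    # Hypothetical coefficient sets for the two outdoor-temperature ranges.
    COEF_LOW = (0.9, 0.010, 0.0002, -0.005, 0.0001, 0.0003)
    COEF_HIGH = (1.0, 0.012, 0.0002, -0.008, 0.0001, 0.0004)

    def capacity_ratio(t_in_wb, t_out_db, boundary=5.0):
        """Dual-range curve: pick the coefficient set by outdoor temperature."""
        coeffs = COEF_LOW if t_out_db < boundary else COEF_HIGH
        return biquadratic(t_in_wb, t_out_db, coeffs)

    # Rated capacity times this ratio gives capacity at off-rated conditions.
    print(capacity_ratio(19.4, 35.0))
    ```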

  10. Computing volume potentials for noninvasive imaging of cardiac excitation.

    PubMed

    van der Graaf, A W Maurits; Bhagirath, Pranav; van Driel, Vincent J H M; Ramanna, Hemanth; de Hooge, Jacques; de Groot, Natasja M S; Götte, Marco J W

    2015-03-01

In noninvasive imaging of cardiac excitation, the use of body surface potentials (BSP) rather than body volume potentials (BVP) has been favored due to enhanced computational efficiency and reduced modeling effort. Nowadays, increased computational power and the availability of open source software enable the calculation of BVP for clinical purposes. In order to illustrate the possible advantages of this approach, the explanatory power of BVP is investigated using a rectangular tank filled with an electrolytic conductor and a patient-specific three-dimensional model. MRI images of the tank and of a patient were obtained in three orthogonal directions using a turbo spin echo MRI sequence. MRI images were segmented in three dimensions using custom-written software. Gmsh software was used for mesh generation. BVP were computed using a transfer matrix and FEniCS software. The solution for 240,000 nodes, corresponding to a resolution of 5 mm throughout the thorax volume, was computed in 3 minutes. The tank experiment revealed that an increased electrode surface renders the position of the 4 V equipotential plane insensitive to mesh cell size and reduces simulated deviations. In the patient-specific model, the impact of assigning a different conductivity to lung tissue on the distribution of volume potentials could be visualized. Generation of high quality volume meshes and computation of BVP with a resolution of 5 mm is feasible using generally available software and hardware. Estimation of BVP may lead to an improved understanding of the genesis of BSP and sources of local inaccuracies. © 2014 Wiley Periodicals, Inc.
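
    As a rough illustration of a volume-potential computation, the sketch below solves a conductivity-weighted Poisson problem with legacy FEniCS (dolfin), using a unit cube as a stand-in for the segmented thorax mesh, a uniform conductivity, and an artificial source/sink pair. Every specific here (mesh, conductivity value, source placement) is an assumption, not the paper's setup.

    ```python
    from dolfin import (Constant, DirichletBC, Function, FunctionSpace, Point,
                        PointSource, TestFunction, TrialFunction, UnitCubeMesh,
                        assemble_system, dx, grad, inner, solve)

    mesh = UnitCubeMesh(24, 24, 24)          # stand-in for a thorax volume mesh
    V = FunctionSpace(mesh, "P", 1)
    u, v = TrialFunction(V), TestFunction(V)
    sigma = Constant(0.2)                    # uniform conductivity (S/m), assumed

    a = sigma * inner(grad(u), grad(v)) * dx  # weak form of -div(sigma grad phi) = s
    L = Constant(0.0) * v * dx
    bc = DirichletBC(V, Constant(0.0), "near(x[2], 0.0)")  # ground one face

    A, b = assemble_system(a, L, bc)
    PointSource(V, Point(0.5, 0.5, 0.6), +1.0).apply(b)    # toy current source
    PointSource(V, Point(0.5, 0.5, 0.4), -1.0).apply(b)    # toy current sink

    phi = Function(V)                        # body volume potential
    solve(A, phi.vector(), b)
    print(phi.vector().max())
    ```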

  11. Computational complexity of symbolic dynamics at the onset of chaos

    NASA Astrophysics Data System (ADS)

    Lakdawala, Porus

    1996-05-01

In a variety of studies of dynamical systems, the edge of order and chaos has been singled out as a region of complexity. It was suggested by Wolfram, on the basis of qualitative behavior of cellular automata, that the computational basis for modeling this region is the universal Turing machine. In this paper, following a suggestion of Crutchfield, we try to show that the Turing machine model may often be too powerful as a computational model to describe the boundary of order and chaos. In particular we study the region of the first accumulation of period doubling in unimodal and bimodal maps of the interval, from the point of view of language theory. We show that in relation to the "extended" Chomsky hierarchy, the relevant computational model in the unimodal case is the nested stack automaton or the related indexed languages, while the bimodal case is modeled by the linear bounded automaton or the related context-sensitive languages.
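
    For a concrete handle on the symbolic dynamics at the period-doubling accumulation point, the kneading-type sequences there are commonly generated by the standard period-doubling substitution (written here over the symbols R and L). This tiny generator illustrates that textbook construction; it is not code from the paper.

    ```python
    def period_doubling_word(n_iters):
        """Iterate the period-doubling substitution R -> RL, L -> RR,
        whose fixed point describes the orbit at the accumulation point."""
        rules = {"R": "RL", "L": "RR"}
        word = "R"
        for _ in range(n_iters):
            word = "".join(rules[s] for s in word)
        return word

    print(period_doubling_word(4))  # RLRRRLRLRLRRRLRR
    ```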

  12. Three-dimensional computational fluid dynamics modelling and experimental validation of the Jülich Mark-F solid oxide fuel cell stack

    NASA Astrophysics Data System (ADS)

    Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.

    2018-01-01

    This work is among the first where the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experiment results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. The transient effects during ramp up of current in the experiment may explain a lower average voltage than model predictions for the power curve.

  13. Shaping of the axial power density distribution in the core to minimize the vapor volume fraction at the outlet of the VVER-1200 fuel assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savander, V. I.; Shumskiy, B. E., E-mail: borisshumskij@yandex.ru; Pinegin, A. A.

    The possibility of decreasing the vapor fraction at the VVER-1200 fuel assembly outlet by shaping the axial power density field is considered. The power density field was shaped by axial redistribution of the concentration of the burnable gadolinium poison in the Gd-containing fuel rods. The mathematical modeling of the VVER-1200 core was performed using the NOSTRA computer code.

  14. Technique Developed for Optimizing Traveling-Wave Tubes

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.

    1999-01-01

A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWTs are critical components in deep-space probes, geosynchronous communication satellites, and high-power radar systems. Power efficiency is of paramount importance for TWTs employed in deep-space probes and communications satellites. Consequently, increasing the power efficiency of TWTs has been the primary goal of the TWT group at the NASA Lewis Research Center over the last 25 years. An in-house effort produced a technique (ref. 1) to design TWTs for optimized power efficiency. This technique is based on simulated annealing, which has an advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 2). A simulated annealing algorithm was created and integrated into the NASA TWT computer model (ref. 3). The new technique almost doubled the computed conversion power efficiency of a TWT from 7.1 to 13.5 percent (ref. 1).
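
    Since the record names simulated annealing, here is a generic, self-contained annealing loop showing the mechanics: random perturbation, Boltzmann acceptance of uphill moves, and geometric cooling. The cost function and every parameter below are placeholders, not NASA's TWT model.

    ```python
    import math
    import random

    def simulated_annealing(cost, x0, step=0.3, t0=1.0, cooling=0.95, iters=2000):
        """Generic simulated-annealing minimizer (a sketch, not NASA's code)."""
        x, fx, t = list(x0), cost(x0), t0
        best, fbest = list(x), fx
        for _ in range(iters):
            cand = [xi + random.gauss(0.0, step) for xi in x]
            fc = cost(cand)
            # Always accept improvements; accept uphill moves with
            # Boltzmann probability so the search can escape local optima.
            if fc < fx or random.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
            t *= cooling                  # geometric cooling schedule
        return best, fbest

    # Toy stand-in for "negative efficiency" over two design parameters.
    cost = lambda p: (p[0] - 1.3) ** 2 + (p[1] + 0.7) ** 2
    print(simulated_annealing(cost, [0.0, 0.0]))
    ```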

  15. Multidimensional computer simulation of Stirling cycle engines

    NASA Technical Reports Server (NTRS)

    Hall, C. A.; Porsching, T. A.; Medley, J.; Tew, R. C.

    1990-01-01

    The computer code ALGAE (algorithms for the gas equations) treats incompressible, thermally expandable, or locally compressible flows in complicated two-dimensional flow regions. The solution method, finite differencing schemes, and basic modeling of the field equations in ALGAE are applicable to engineering design settings of the type found in Stirling cycle engines. The use of ALGAE to model multiple components of the space power research engine (SPRE) is reported. Videotape computer simulations of the transient behavior of the working gas (helium) in the heater-regenerator-cooler complex of the SPRE demonstrate the usefulness of such a program in providing information on thermal and hydraulic phenomena in multiple component sections of the SPRE.

  16. How does the brain solve visual object recognition?

    PubMed Central

    Zoccolan, Davide; Rust, Nicole C.

    2012-01-01

    Mounting evidence suggests that “core object recognition,” the ability to rapidly recognize objects despite substantial appearance variation, is solved in the brain via a cascade of reflexive, largely feedforward computations that culminate in a powerful neuronal representation in the inferior temporal cortex. However, the algorithm that produces this solution remains little-understood. Here we review evidence ranging from individual neurons, to neuronal populations, to behavior, to computational models. We propose that understanding this algorithm will require using neuronal and psychophysical data to sift through many computational models, each based on building blocks of small, canonical sub-networks with a common functional goal. PMID:22325196

  17. Design Strategy for a Formally Verified Reliable Computing Platform

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Caldwell, James L.; DiVito, Ben L.

    1991-01-01

This paper presents a high-level design for a reliable computing platform for real-time control applications. The design tradeoffs and analyses related to the development of a formally verified reliable computing platform are discussed. The design strategy advocated in this paper requires the use of techniques that can be completely characterized mathematically, as opposed to more powerful or more flexible algorithms whose performance properties can only be analyzed by simulation and testing. The need for accurate reliability models that can be related to the behavior models is also stressed. Tradeoffs between reliability and voting complexity are explored. In particular, the transient recovery properties of the system are found to be fundamental to both the reliability analysis and the "correctness" models.
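
    To make the reliability-versus-voting tradeoff tangible, a standard textbook comparison (not taken from the paper) pits a single unit against triple modular redundancy with a perfect majority voter: TMR wins at short mission times and loses once per-unit reliability drops below 0.5.

    ```python
    from math import exp

    def simplex_reliability(lam, t):
        """Single unit with constant failure rate: R(t) = exp(-lam * t)."""
        return exp(-lam * t)

    def tmr_reliability(lam, t):
        """Triple modular redundancy with a perfect voter: the system works
        while at least 2 of 3 replicas work, so R_TMR = 3R^2 - 2R^3."""
        r = simplex_reliability(lam, t)
        return 3 * r**2 - 2 * r**3

    for t in (10.0, 100.0, 1000.0):       # mission times, hours
        print(t, simplex_reliability(1e-3, t), tmr_reliability(1e-3, t))
    ```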

  18. Implementing Nonlinear Buoyancy and Excitation Forces in the WEC-Sim Wave Energy Converter Modeling Tool: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawson, M.; Yu, Y. H.; Nelessen, A.

    2014-05-01

Wave energy converters (WECs) are commonly designed and analyzed using numerical models that combine multi-body dynamics with hydrodynamic models based on the Cummins equation and linearized hydrodynamic coefficients. These modeling methods are attractive design tools because they are computationally inexpensive and do not require the high-performance computing resources necessitated by high-fidelity methods, such as Navier-Stokes computational fluid dynamics. Modeling hydrodynamics using linear coefficients assumes that the device undergoes small motions and that the wetted surface area of the device is approximately constant. WEC devices, however, are typically designed to undergo large motions in order to maximize power extraction, calling into question the validity of assuming that linear hydrodynamic models accurately capture the relevant fluid-structure interactions. In this paper, we study how calculating buoyancy and Froude-Krylov forces from the instantaneous position of a WEC device (referred to hereafter as instantaneous buoyancy and Froude-Krylov forces) changes WEC simulation results compared to simulations that use linear hydrodynamic coefficients. First, we describe the WEC-Sim tool used to perform simulations and how the ability to model instantaneous forces was incorporated into WEC-Sim. We then use a simplified one-body WEC device to validate the model and to demonstrate how accounting for these instantaneously calculated forces affects the accuracy of simulation results, such as device motions, hydrodynamic forces, and power generation.
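
    To see why instantaneous buoyancy matters, compare the linearized restoring force with one computed from the actual submerged volume of a heaving vertical cylinder: the two agree for small heave and diverge once the body nears full emergence or submergence. The geometry and density below are illustrative, and this is not the WEC-Sim implementation.

    ```python
    import numpy as np

    RHO, G = 1025.0, 9.81                 # seawater density (kg/m^3), gravity

    def linear_restoring(z, radius=3.0):
        """Linearized hydrostatic restoring force: F = -rho*g*Awp*z."""
        awp = np.pi * radius**2
        return -RHO * G * awp * z

    def instantaneous_restoring(z, radius=3.0, draft=2.0, height=4.0):
        """Restoring force from the actual submerged volume of a vertical
        cylinder (equilibrium buoyancy subtracted), clipped at the limits
        where the body leaves or fully enters the water."""
        awp = np.pi * radius**2
        wetted = np.clip(draft - z, 0.0, height)  # instantaneous wetted height
        return RHO * G * awp * (wetted - draft)

    for z in (0.1, 1.0, 3.0):             # heave displacements, m
        print(z, linear_restoring(z), instantaneous_restoring(z))
    ```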

  19. Service Mediation and Negotiation Bootstrapping as First Achievements Towards Self-adaptable Cloud Services

    NASA Astrophysics Data System (ADS)

    Brandic, Ivona; Music, Dejan; Dustdar, Schahram

Nowadays, novel computing paradigms such as Cloud Computing are gaining more and more importance. In Cloud Computing, users pay for the usage of computing power provided as a service. Beforehand, they can negotiate specific functional and non-functional requirements relevant for the application execution. However, providing computing power as a service poses different research challenges. On the one hand, dynamic, versatile, and adaptable services are required, which can cope with system failures and environmental changes. On the other hand, human interaction with the system should be minimized. In this chapter we present the first results in establishing adaptable, versatile, and dynamic services considering negotiation bootstrapping and service mediation, achieved in the context of the Foundations of Self-Governing ICT Infrastructures (FoSII) project. We discuss novel meta-negotiation and SLA mapping solutions for Cloud services, bridging the gap between current QoS models and Cloud middleware and representing important prerequisites for the establishment of autonomic Cloud services.

  20. Reliability modeling of fault-tolerant computer based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1987-01-01

Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing credible reliability assessments a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, its computational power, and its ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state spaces, in the Markov sense, which result from replicated redundant hardware, and the modeling of factors that can reduce reliability without a concomitant depletion of hardware. Advanced fault-handling models are described, and methods of acquiring and measuring parameters for these models are delineated.
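
    A minimal sketch of the kind of Markov reliability model the record describes: a duplex system whose fault handling is imperfect, so a coverage parameter (rather than hardware depletion) drives unreliability. The states, rates, and coverage value are invented for illustration; state probabilities follow from the matrix exponential of the generator.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # States: 0 = both units up, 1 = one failed (spare active), 2 = system failed.
    lam, mu, c = 1e-4, 1e-2, 0.99   # failure rate, repair rate, recovery coverage
    Q = np.array([
        [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],  # imperfect fault handling
        [ mu,      -(mu + lam),   lam              ],
        [ 0.0,      0.0,          0.0              ],  # absorbing failure state
    ])
    p0 = np.array([1.0, 0.0, 0.0])
    t = 10_000.0                     # mission time, hours
    pt = p0 @ expm(Q * t)            # state probabilities at time t
    print(f"unreliability at t={t:.0f} h: {pt[2]:.3e}")
    ```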

  1. Active optical control system design of the SONG-China Telescope

    NASA Astrophysics Data System (ADS)

    Ye, Yu; Kou, Songfeng; Niu, Dongsheng; Li, Cheng; Wang, Guomin

    2012-09-01

The standard control-system structure of a SONG node is presented. The active optics control system of the project is a distributed system comprising a host computer and a slave intelligent controller. The host control computer collects information from the wavefront sensor and sends commands to the slave computer to close the control loop. For the intelligent controller, a programmable logic controller (PLC) system is used. The system combines an industrial personal computer (IPC) with the PLC to form a powerful and reliable control system.

  2. Interactomes to Biological Phase Space: a call to begin thinking at a new level in computational biology.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, George S.; Brown, William Michael

    2007-09-01

Techniques for high-throughput determination of interactomes, together with high-resolution maps of protein colocalization within organelles and through membranes, will soon create a vast resource. With these data, biological descriptions, akin to the high-dimensional phase spaces familiar to physicists, will become possible. These descriptions will capture sufficient information to make possible realistic, system-level models of cells. The descriptions and the computational models they enable will require powerful computing techniques. This report is offered as a call to the computational biology community to begin thinking at this scale and as a challenge to develop the required algorithms and codes to make use of the new data.

  3. Design, Assembly, Integration, and Testing of a Power Processing Unit for a Cylindrical Hall Thruster, the NORSAT-2 Flatsat, and the Vector Gravimeter for Asteroids Instrument Computer

    NASA Astrophysics Data System (ADS)

    Svatos, Adam Ladislav

This thesis describes the author's contributions to three separate projects. The bus of the NORSAT-2 satellite was developed by the Space Flight Laboratory (SFL) for the Norwegian Space Centre (NSC) and Space Norway. The author's contributions to the mission were performing unit tests for the components of all the spacecraft subsystems, as well as designing and assembling the flatsat from flight spares. Gedex's Vector Gravimeter for Asteroids (VEGA) is an accelerometer for spacecraft. The author's contributions to this payload were modifying the instrument computer board schematic, designing the printed circuit board, developing and applying test software, and performing thermal acceptance testing of two instrument computer boards. The SFL's cylindrical Hall effect thruster uses a cylindrical Hall thruster configuration and permanent magnets to achieve miniaturization and low power consumption, respectively. The author's contributions were to design, build, and test an engineering model power processing unit.

  4. Controlling the phase locking of stochastic magnetic bits for ultra-low power computation

    NASA Astrophysics Data System (ADS)

    Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie

    2016-07-01

When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm³, the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need of any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10⁻¹³ J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators.

  5. Controlling the phase locking of stochastic magnetic bits for ultra-low power computation.

    PubMed

    Mizrahi, Alice; Locatelli, Nicolas; Lebrun, Romain; Cros, Vincent; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Querlioz, Damien; Grollier, Julie

    2016-07-26

When fabricating magnetic memories, one of the main challenges is to maintain the bit stability while downscaling. Indeed, for magnetic volumes of a few thousand nm³, the energy barrier between magnetic configurations becomes comparable to the thermal energy at room temperature. Then, switches of the magnetization spontaneously occur. These volatile, superparamagnetic nanomagnets are generally considered useless. But what if we could use them as low power computational building blocks? Remarkably, they can oscillate without the need of any external dc drive, and despite their stochastic nature, they can beat in unison with an external periodic signal. Here we show that the phase locking of superparamagnetic tunnel junctions can be induced and suppressed by electrical noise injection. We develop a comprehensive model giving the conditions for synchronization, and predict that it can be achieved with a total energy cost lower than 10⁻¹³ J. Our results open the path to ultra-low power computation based on the controlled synchronization of oscillators.
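
    As a cartoon of the mechanism, one can simulate a two-state superparamagnetic bit whose thermally activated switching rates are modulated by a weak periodic signal; phase locking then shows up as switching events clustering at a preferred drive phase. The rates, modulation depth, and locking metric below are all illustrative choices, not the paper's model.

    ```python
    import numpy as np

    def simulate_switches(f_drive=1.0, rate0=2.0, mod=1.5, t_end=500.0,
                          dt=1e-3, seed=0):
        """Two-state bit with Arrhenius-like switching rates modulated by a
        weak periodic drive; returns the times of all magnetization switches."""
        rng = np.random.default_rng(seed)
        t, state, times = 0.0, 1, []
        while t < t_end:
            drive = mod * np.sin(2.0 * np.pi * f_drive * t)
            rate = rate0 * np.exp(-state * drive)  # escape rate out of 'state'
            if rng.random() < rate * dt:
                state = -state
                times.append(t)
            t += dt
        return np.asarray(times)

    # Switching phases bunch together when the bit locks to the drive. The
    # phase is doubled because up->down and down->up events lock half a
    # period apart.
    phases = simulate_switches() % 1.0             # drive period is 1/f_drive
    order = np.abs(np.exp(4j * np.pi * phases).mean())
    print(f"phase-locking order parameter: {order:.2f} (1 = perfectly locked)")
    ```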

  6. Wind Farm Layout Optimization through a Crossover-Elitist Evolutionary Algorithm performed over a High Performing Analytical Wake Model

    NASA Astrophysics Data System (ADS)

    Kirchner-Bossi, Nicolas; Porté-Agel, Fernando

    2017-04-01

Wind turbine wakes can significantly degrade the performance of turbines further downstream in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the overall performance of the project. An accurate description of the wake interactions, combined with a computationally affordable layout optimization strategy, is therefore an efficient way to address the problem. This work presents a novel soft-computing approach to optimize the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning setup is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian velocity deficit profile [1], which has been shown to outperform the traditionally employed wake models in LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Compared to the baseline gridded layout, results show a wind power output increase of between 5.5% and 7.7%. In addition, the electric cable length at the facility is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
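
    A sketch of the Gaussian wake model of [1] in the form it is usually quoted: a normalized velocity deficit with a linearly growing wake width. The thrust coefficient, rotor diameter, and wake growth rate below are illustrative, and the exact expressions should be checked against [1] before any real use.

    ```python
    import numpy as np

    def gaussian_deficit(x, r, ct=0.8, d0=80.0, k_star=0.035):
        """Normalized velocity deficit of the Bastankhah & Porte-Agel (2014)
        Gaussian wake model, at downstream distance x and radial offset r."""
        beta = 0.5 * (1.0 + np.sqrt(1.0 - ct)) / np.sqrt(1.0 - ct)
        sigma = k_star * x + 0.2 * np.sqrt(beta) * d0   # wake width (m)
        c = 1.0 - np.sqrt(1.0 - ct * d0**2 / (8.0 * sigma**2))
        return c * np.exp(-r**2 / (2.0 * sigma**2))

    # Deficit five rotor diameters downstream, on the wake centerline:
    print(gaussian_deficit(x=5 * 80.0, r=0.0))
    ```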

  7. General Algebraic Modeling System Tutorial | High-Performance Computing |

    Science.gov Websites

Here's a basic tutorial for modeling optimization problems with the General Algebraic Modeling System (GAMS). The GAMS package is essentially a compiler for an algebraic modeling language. The tutorial's example models power generation from two different fuels, where the goal is to minimize the cost for one of the fuels.
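
    In the same spirit as the tutorial's example (though in Python rather than GAMS), a two-fuel dispatch can be posed as a small linear program: meet a fixed power demand from two fuels at minimum cost, with a cap on the cheaper fuel. All numbers are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([5.0, 3.0])      # $/unit of generation for fuels 1 and 2
    A_ub = np.array([[-1.0, -1.0],   # -(g1 + g2) <= -demand  (meet demand)
                     [0.0, 1.0]])    # g2 <= cap              (fuel-2 limit)
    b_ub = np.array([-100.0, 40.0])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, None)] * 2)
    print(res.x, res.fun)            # uses all 40 units of the cheap fuel
    ```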

  8. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing

    PubMed Central

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P.; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose energy-aware management is still in its early stages. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new energy-saving management strategies considering computing, reconfiguration, and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertoire of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy, and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapping the intra-host DVFS technique. PMID:28085932

  9. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    PubMed

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center, and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workloads called workflows, whose energy-aware management is still in its early stages. WorkflowSim is currently one of the most advanced simulators for research on workflow processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new energy-saving management strategies considering computing, reconfiguration, and network costs as well as quality of service, and it incorporates the preeminent strategy for on-host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertoire of DVFS governors. Results showing the validity of the simulator in terms of resource utilization, frequency and voltage scaling, power, energy, and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as a mechanism overlapping the intra-host DVFS technique.
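
    The physics behind DVFS energy accounting is compact enough to sketch: dynamic CMOS power scales as C·V²·f, so lowering voltage and frequency together cuts energy per task even though runtime grows. The capacitance, voltage, and frequency figures below are round illustrative numbers, not values from the simulator.

    ```python
    def dynamic_power(c_eff, v, f):
        """Switching power of CMOS logic: P = C_eff * V^2 * f (classic model)."""
        return c_eff * v**2 * f

    def task_energy(cycles, c_eff, v, f, p_static=0.0):
        """Energy to run a task: (P_dyn + P_static) * execution time."""
        runtime = cycles / f
        return (dynamic_power(c_eff, v, f) + p_static) * runtime

    # Scaling f (and V with it) cuts dynamic energy roughly quadratically in V,
    # at the cost of longer runtime -- the tradeoff DVFS governors manage.
    hi = task_energy(1e9, c_eff=1e-9, v=1.2, f=2.0e9)
    lo = task_energy(1e9, c_eff=1e-9, v=0.9, f=1.0e9)
    print(f"energy at high vs low DVFS state: {hi:.3f} J vs {lo:.3f} J")
    ```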

  10. Systematic Computation of Nonlinear Cellular and Molecular Dynamics with Low-Power CytoMimetic Circuits: A Simulation Study

    PubMed Central

    Papadimitriou, Konstantinos I.; Stan, Guy-Bart V.; Drakakis, Emmanuel M.

    2013-01-01

This paper presents a novel method for the systematic implementation of low-power microelectronic circuits aimed at computing nonlinear cellular and molecular dynamics. The method proposed is based on the Nonlinear Bernoulli Cell Formalism (NBCF), an advanced mathematical framework stemming from the Bernoulli Cell Formalism (BCF) originally exploited for the modular synthesis and analysis of linear, time-invariant, high dynamic range, logarithmic filters. Our approach identifies and exploits the striking similarities existing between the NBCF and coupled nonlinear ordinary differential equations (ODEs) typically appearing in models of naturally encountered biochemical systems. The resulting continuous-time, continuous-value, low-power CytoMimetic electronic circuits succeed in simulating cellular and molecular dynamics quickly and with good accuracy. The application of the method is illustrated by synthesising, for the first time, microelectronic CytoMimetic topologies which successfully simulate: 1) a nonlinear intracellular calcium oscillations model for several Hill coefficient values and 2) a gene-protein regulatory system model. The dynamic behaviours generated by the proposed CytoMimetic circuits are compared and found to be in very good agreement with their biological counterparts. The circuits exploit the exponential law codifying the low-power subthreshold operation regime and have been simulated with realistic parameters from a commercially available CMOS process. They occupy an area of a fraction of a square millimetre, while consuming between 1 and 12 microwatts of power. Simulation results for fabrication-related variability are also presented. PMID:23393550

  11. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-02-05

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.

  12. Budget-based power consumption for application execution on a plurality of compute nodes

    DOEpatents

    Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D

    2012-10-23

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
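
    A toy rendering of the scheme the two patent abstracts above describe: run applications in priority order at an initial power level, and once cumulative consumption crosses the budget, apply a conservation action (modeled here as a lower node power level). All names, power levels, and workloads are invented for illustration; this is not the patented implementation.

    ```python
    def run_with_budget(apps, budget_wh, full_power=200.0, reduced_power=120.0):
        """apps: list of (name, priority, seconds_of_work) tuples.
        Executes apps by descending priority in 1-second ticks, switching to
        a reduced power level once the consumption budget (Wh) is reached."""
        consumed_wh, power = 0.0, full_power
        for name, _prio, work in sorted(apps, key=lambda a: -a[1]):
            for _ in range(int(work)):
                consumed_wh += power / 3600.0     # Wh consumed this tick
                if consumed_wh >= budget_wh:
                    power = reduced_power         # conservation action
        return consumed_wh

    apps = [("solver", 10, 120), ("io-stage", 5, 60)]
    print(f"total consumption: {run_with_budget(apps, budget_wh=5.0):.2f} Wh")
    ```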

  13. Transient Simulation of the Multi-SERTTA Experiment with MAMMOTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortensi, Javier; Baker, Benjamin; Wang, Yaqi

This work details the MAMMOTH reactor physics simulations of the Static Environment Rodlet Transient Test Apparatus (SERTTA) conducted at Idaho National Laboratory in FY-2017. TREAT static-environment experiment vehicles are being developed to enable transient testing of Pressurized Water Reactor (PWR) type fuel specimens, including fuel concepts with enhanced accident tolerance (Accident Tolerant Fuels, ATF). The MAMMOTH simulations include point reactor kinetics as well as spatial dynamics for a temperature-limited transient. The strongly coupled multi-physics solutions of the neutron flux and temperature fields are second-order accurate in both the spatial and temporal domains. MAMMOTH produces pellet stack powers that are within 1.5% of the Monte Carlo reference solutions. Some discrepancies between the MCNP model used in the design of the flux collars and the Serpent/MAMMOTH models lead to higher power and energy deposition values in Multi-SERTTA unit 1. The TREAT core results compare well with the safety case computed with point reactor kinetics in RELAP5-3D. The reactor period is 44 msec, which corresponds to a reactivity insertion of 2.685% Δk/k. The peak core power in the spatial dynamics simulation is 431 MW, which the point kinetics model over-predicts by 12%. The pulse width at half the maximum power is 0.177 sec. Subtle transient effects are apparent at the beginning of the insertion in the experimental samples due to the control rod removal. Additional differences due to transient effects are observed in the sample powers and enthalpy. The time dependence of the power coupling factor (PCF) is calculated for the various fuel stacks of the Multi-SERTTA vehicle. Sample temperatures in excess of 3100 K, the melting point of UO2, are computed with the adiabatic heat transfer model. The planned shaped transient might introduce additional effects that cannot be predicted with PRK models. Future modeling will focus on the shaped transient by improving the control rod models in MAMMOTH and adding the BISON thermo-elastic models and thermal-fluids heat transfer.
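
    For orientation, the point reactor kinetics (PRK) model that the record contrasts with spatial dynamics reduces the core to a power amplitude and delayed-neutron precursors. A one-delayed-group sketch is below; the kinetics parameters and step reactivity are illustrative, not TREAT values.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # One-delayed-group point reactor kinetics. Parameters are illustrative.
    BETA, LAM, GEN = 0.007, 0.08, 5e-4   # delayed fraction, decay (1/s), Lambda (s)

    def prk(t, y, rho):
        p, c = y                          # power and precursor concentration
        dp = (rho - BETA) / GEN * p + LAM * c
        dc = BETA / GEN * p - LAM * c
        return [dp, dc]

    rho = 0.009                           # step insertion above beta: prompt supercritical
    y0 = [1.0, BETA / (GEN * LAM)]        # steady state at unit power
    sol = solve_ivp(prk, (0.0, 0.5), y0, args=(rho,), max_step=1e-4)
    print(f"power after 0.5 s: {sol.y[0, -1]:.3e} (relative units)")
    ```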

  14. Terrain modeling for microwave landing system

    NASA Technical Reports Server (NTRS)

    Poulose, M. M.

    1991-01-01

A powerful analytical approach for evaluating terrain effects on a microwave landing system (MLS) is presented. The approach combines a multiplate model with an exhaustive ray-tracing technique and an accurate formulation for estimating the electromagnetic fields due to the antenna array in the presence of terrain. Both uniform theory of diffraction (UTD) and impedance UTD techniques have been employed to evaluate these fields. Innovative techniques are introduced at each stage to make the model versatile enough to handle the most general terrain contours and to keep the computational requirements to a minimum. The model is applied to several terrain geometries, and the results are discussed.

  15. RAVEN: a GUI and an Artificial Intelligence Engine in a Dynamic PRA Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. Rabiti; D. Mandelli; A. Alfonsi

Increases in computational power and pressure for more accurate simulations and estimations of accident scenario consequences are driving the need for Dynamic Probabilistic Risk Assessment (PRA) [1] of very complex models. While more sophisticated algorithms and computational power address the back end of this challenge, the front end is still handled by engineers who need to extract meaningful information from large amounts of data and build these complex models. Compounding this problem are the difficulty of knowledge transfer and retention and the increasing speed of software development. The above-described issues would have negatively impacted deployment of the new high-fidelity plant simulator RELAP-7 (Reactor Excursion and Leak Analysis Program) at Idaho National Laboratory. Therefore, RAVEN, which was initially focused on being the plant controller for RELAP-7, will help mitigate future RELAP-7 software engineering risks. To accomplish this task, the Reactor Analysis and Virtual Control Environment (RAVEN) has been designed to provide an easy-to-use Graphical User Interface (GUI) for building plant models and to leverage artificial intelligence algorithms in order to reduce computational time, improve results, and help the user identify the behavioral patterns of Nuclear Power Plants (NPPs). In this paper we present the GUI implementation and its current capability status. We also introduce the support vector machine algorithms and show our evaluation of their potential for increasing the accuracy and reducing the computational costs of PRA analysis. In this evaluation we refer to preliminary studies performed under the Risk Informed Safety Margins Characterization (RISMC) project of the Light Water Reactors Sustainability (LWRS) campaign [3]. RISMC simulation needs and algorithm testing are currently used as guidance to prioritize RAVEN developments relevant to PRA.
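
    A sketch of the general idea behind using support vector machines to cut PRA cost: train a classifier on a limited set of expensive simulator runs, then query the cheap surrogate to map the limit surface. The synthetic data, the linear limit surface, and the scikit-learn hyperparameters below are all illustrative, not RAVEN's implementation.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)

    # Toy stand-in for expensive plant simulations: each sample is a pair of
    # uncertain parameters; the label marks whether a safety limit is exceeded.
    X = rng.uniform(0.0, 1.0, size=(200, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)   # hypothetical limit surface

    # Train an SVM surrogate on a handful of runs, then query it cheaply in
    # place of the simulator when sampling accident scenarios.
    clf = SVC(kernel="rbf", C=10.0).fit(X[:50], y[:50])
    accuracy = clf.score(X[50:], y[50:])
    print(f"surrogate accuracy on held-out runs: {accuracy:.2f}")
    ```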

  16. Computer aided design of monolithic microwave and millimeter wave integrated circuits and subsystems

    NASA Astrophysics Data System (ADS)

    Ku, Walter H.

    1989-05-01

The objectives of this research are to develop analytical and computer-aided design techniques for monolithic microwave and millimeter wave integrated circuits (MMIC and MIMIC) and subsystems, and to design and fabricate these ICs. Emphasis was placed on heterojunction-based devices, especially the High Electron Mobility Transistor (HEMT), for both low-noise and medium-power microwave and millimeter wave applications. Circuits to be considered include monolithic low-noise amplifiers, power amplifiers, and distributed and feedback amplifiers. Interactive computer-aided design programs were developed, which include large-signal models of InP MISFETs and InGaAs HEMTs. Further, a new unconstrained optimization algorithm, POSM, was developed and implemented in the general Analysis and Design program for Integrated Circuit (ADIC) to assist in the design of large-signal nonlinear circuits.

  17. Eastern Wind Data Set | Grid Modernization | NREL

    Science.gov Websites

Wind power output for each grid cell was computed by combining modeled wind speed data at each site with a composite turbine power curve. Adjustments were made for model biases, wake losses, and wind gusts, and the power conversion was also updated to better reflect future wind turbine technology.

  18. Bionic Running for Unilateral Transtibial Military Amputees

    DTIC Science & Technology

    2010-01-01

A first-of-its-kind motor-powered, single-board-computer-controlled running prosthesis for unilateral transtibial military amputees has been designed, built, and demonstrated. The extracted record also cites Bellman, R., 2010, "An Active Ankle-Foot Prosthesis With Biomechanical Energy Regeneration," Transactions of the ASME Journal, and Lefeber, D., 2008, "A Biomechanical Transtibial Prosthesis Powered by Pleated Pneumatic Artificial Muscles," Model Identification and Control, 4, 394-405.

  19. Automation in the Space Station module power management and distribution Breadboard

    NASA Technical Reports Server (NTRS)

    Walls, Bryan; Lollar, Louis F.

    1990-01-01

    The Space Station Module Power Management and Distribution (SSM/PMAD) Breadboard, located at NASA's Marshall Space Flight Center (MSFC) in Huntsville, Alabama, models the power distribution within a Space Station Freedom Habitation or Laboratory module. Originally designed for 20 kHz ac power, the system is now being converted to high voltage dc power with power levels on a par with those expected for a space station module. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level process consists of fast, simple (from a computing standpoint) switchgear, capable of quickly safing the system. The next level consists of local load center processors called Lowest Level Processors (LLP's). These LLP's execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. The level above the LLP's contains a Communication and Algorithmic Controller (CAC) which coordinates communications with the highest level. Finally, at this highest level, three cooperating Artificial Intelligence (AI) systems manage load prioritization, load scheduling, load shedding, and fault recovery and management. The system provides an excellent venue for developing and examining advanced automation techniques. The current system and the plans for its future are examined.
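
    A toy rendering of the LLP load-shedding logic described above: loads are honored in priority order against the scheduled power allocation, and whatever exceeds it is shed. The load names, priorities, and power figures are invented for illustration; this is not the SSM/PMAD software.

    ```python
    def shed_loads(loads, available_kw):
        """loads: list of (name, priority, kw) tuples. Keeps loads in
        descending priority order until the scheduled power is exhausted,
        and sheds the rest."""
        kept, shed, used = [], [], 0.0
        for name, _prio, kw in sorted(loads, key=lambda l: -l[1]):
            if used + kw <= available_kw:
                kept.append(name)
                used += kw
            else:
                shed.append(name)
        return kept, shed

    loads = [("life-support", 10, 3.0), ("experiment-A", 4, 2.5), ("lighting", 6, 1.0)]
    print(shed_loads(loads, available_kw=4.5))  # sheds the low-priority overrun
    ```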

  20. Project Photofly: New 3D Modeling Online Web Service (Case Studies and Assessments)

    NASA Astrophysics Data System (ADS)

    Abate, D.; Furini, G.; Migliori, S.; Pierattini, S.

    2011-09-01

During summer 2010, Autodesk released a still-ongoing project called Project Photofly, freely downloadable from the AutodeskLab web site until August 1, 2011. Based on computer-vision and photogrammetric principles and exploiting the power of cloud computing, Project Photofly is a web service able to convert collections of photographs into 3D models. The aim of our research was to evaluate Project Photofly, through different case studies, for the 3D modeling of cultural heritage monuments and objects, mostly to identify the goals and objects for which it is suitable. The automatic approach is the main focus of the analysis.
