NASA Technical Reports Server (NTRS)
Denning, Peter J.
1990-01-01
Strong artificial intelligence claims that conscious thought can arise in computers containing the right algorithms even though none of the programs or components of those computers understand what is going on. As proof, it asserts that brains are finite webs of neurons, each with a definite function governed by the laws of physics; this web has a set of equations that can be solved (or simulated) by a sufficiently powerful computer. Strong AI claims the Turing test as a criterion of success. A recent debate in Scientific American concludes that the Turing test is not sufficient, but leaves intact the underlying premise that thought is a computable process. The recent book by Roger Penrose, however, offers a sharp challenge, arguing that the laws of quantum physics may govern mental processes and that these laws may not be computable. In every area of mathematics and physics, Penrose finds evidence of nonalgorithmic human activity and concludes that mental processes are inherently more powerful than computational processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpov, A. S.
2013-01-15
A computer procedure for simulating magnetization-controlled dc shunt reactors is described, which enables the electromagnetic transients in electric power systems to be calculated. It is shown that, by taking technically simple measures in the control system, one can obtain high-speed reactors sufficient for many purposes, and dispense with the use of high-power devices for compensating higher harmonic components.
Computer Problem-Solving Coaches for Introductory Physics: Design and Usability Studies
ERIC Educational Resources Information Center
Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew
2016-01-01
The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how…
DET/MPS - The GSFC Energy Balance Programs
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1994-01-01
Direct Energy Transfer (DET) and MultiMission Spacecraft Modular Power System (MPS) computer programs perform mathematical modeling and simulation to aid in design and analysis of DET and MPS spacecraft power system performance in order to determine energy balance of subsystem. DET spacecraft power system feeds output of solar photovoltaic array and nickel cadmium batteries directly to spacecraft bus. In MPS system, Standard Power Regulator Unit (SPRU) utilized to operate array at array's peak power point. DET and MPS perform minute-by-minute simulation of performance of power system. Results of simulation focus mainly on output of solar array and characteristics of batteries. Although both packages limited in terms of orbital mechanics, they have sufficient capability to calculate data on eclipses and performance of arrays for circular or near-circular orbits. DET and MPS written in FORTRAN-77 with some VAX FORTRAN-type extensions. Both available in three versions: GSC-13374, for DEC VAX-series computers running VMS; GSC-13443, for UNIX-based computers; and GSC-13444, for Apple Macintosh computers.
System and Method for High-Speed Data Recording
NASA Technical Reports Server (NTRS)
Taveniku, Mikael B. (Inventor)
2017-01-01
A system and method for high speed data recording includes a control computer and a disk pack unit. The disk pack is provided within a shell that provides handling and protection for the disk packs. The disk pack unit provides cooling of the disks and connection for power and disk signaling. A standard connection is provided between the control computer and the disk pack unit. The disk pack units are self-sufficient and able to connect to any computer. Multiple disk packs are connected simultaneously to the system, so that one disk pack can be active while one or more disk packs are inactive. To guard against power surges, the power to each disk pack is controlled programmatically for the group of disks in a disk pack.
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Paper for the IEEE Visualization Conference
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.
A note on the self-similar solutions to the spontaneous fragmentation equation
NASA Astrophysics Data System (ADS)
Breschi, Giancarlo; Fontelos, Marco A.
2017-05-01
We provide a method to compute self-similar solutions for various fragmentation equations and use it to compute their asymptotic behaviours. Our procedure is applied to specific cases: (i) the case of mitosis, where fragmentation results in two identical fragments, (ii) fragmentation limited to the formation of sufficiently large fragments, and (iii) processes with a fragmentation kernel exhibiting power-law behaviour.
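For orientation, the spontaneous fragmentation equation and the self-similar ansatz for a homogeneous fragmentation rate can be written as follows (standard textbook form; the notation is assumed here rather than taken from the paper):

```latex
\partial_t f(x,t) = -a(x)\,f(x,t) + \int_x^{\infty} a(y)\,b(x\,|\,y)\,f(y,t)\,dy,
\qquad
f(x,t) = t^{2/\alpha}\,\phi\!\big(x\,t^{1/\alpha}\big) \quad \text{for } a(x) = x^{\alpha},
```

where case (i), mitosis, corresponds to the daughter distribution $b(x|y) = 2\,\delta(x - y/2)$, and the ansatz conserves the total mass $\int_0^{\infty} x\,f(x,t)\,dx$.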
NASA Technical Reports Server (NTRS)
Kazanas, Demosthenes; Fukumura, K.
2009-01-01
We present detailed computations of photon orbits emitted by flares at the ISCO of accretion disks around rotating black holes. We show that for sufficiently large spin parameter, i.e. $a > 0.94 M$, following a flare at ISCO, a sufficient number of photons arrive at an observer after multiple orbits around the black hole to produce a "photon echo" of constant lag, i.e. independent of the relative phase between the black hole and the observer, of $\Delta T \simeq 14 M$. This constant time delay, then, leads to the presence of a QPO in the source power spectrum at a frequency $\nu \simeq 1/\Delta T$.
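A quick worked example of what the constant lag implies observationally: the code below converts $\Delta T \simeq 14 M$ into a QPO frequency for an assumed black hole mass (the masses are illustrative, not from the abstract):

```python
# Illustrative conversion of the constant lag Delta T ~= 14 M (geometric units)
# into a QPO frequency for a black hole of given mass.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

def qpo_frequency_hz(mass_solar, lag_in_M=14.0):
    """Frequency 1/Delta T for a lag expressed in units of GM/c^3."""
    t_g = G * mass_solar * M_SUN / c**3   # geometric time unit GM/c^3 in seconds
    return 1.0 / (lag_in_M * t_g)

print(qpo_frequency_hz(10.0))    # ~1.4e3 Hz for a 10 M_sun black hole
print(qpo_frequency_hz(4e6))     # ~3.6e-3 Hz for a Sgr A*-like mass
```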
Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus
2016-05-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
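As a sketch of what per-kernel power monitoring enables, the snippet below integrates sampled board power over a kernel's start/stop window to obtain energy per kernel; the sampling rate, idle power, and kernel log are hypothetical stand-ins for the paper's instrumentation:

```python
import numpy as np

# Toy per-kernel energy accounting from time-stamped power samples.
t = np.arange(0.0, 2.0, 0.001)                       # 1 kHz power samples
power_w = 5.0 + 3.0 * (t > 0.5) * (t < 1.2)          # board idles at 5 W; kernel adds 3 W

kernels = {"integrate_forces": (0.5, 1.2)}           # hypothetical name -> (start, stop) in s
for name, (t0, t1) in kernels.items():
    mask = (t >= t0) & (t < t1)
    energy_j = np.trapz(power_w[mask], t[mask])      # integrate P dt over the window
    print(f"{name}: {energy_j:.2f} J over {t1 - t0:.2f} s")
```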
Experience of public procurement of Open Compute servers
NASA Astrophysics Data System (ADS)
Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony
2015-12-01
The Open Compute Project (OCP, http://www.opencompute.org/) was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal to develop servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large scale installation. One objective is to evaluate if the OCP market is sufficiently mature and broad to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).
The Challenge of Computer Furniture.
ERIC Educational Resources Information Center
Dolan, Thomas G.
2003-01-01
Explains that classrooms and school furniture were built for a different era and often do not have sufficient power for technology, discussing what is needed to support modern technology in education. One solution involves modular cabling and furniture that is capable of being rearranged. Currently, there are no comprehensive standards from which…
Advanced reliability modeling of fault-tolerant computer-based systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1982-01-01
Two methodologies for the reliability assessment of fault tolerant digital computer based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is based on the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.
Using SRAM Based FPGAs for Power-Aware High Performance Wireless Sensor Networks
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today’s applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved by either more powerful microcontrollers, though more power consumption or, in general, any solution capable of accelerating task execution. At this point, the use of hardware based, and in particular FPGA solutions, might appear as a candidate technology, since though power use is higher compared with lower power devices, execution time is reduced, so energy could be reduced overall. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high performance high capacity state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, as well as a careful power-aware management system, to show that energy savings for certain higher-end applications can be achieved. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to proof that better energy efficiency compared to processor based solutions can be achieved, for instance, when encryption is imposed by the application requirements. PMID:22736971
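The abstract's central energy argument is just E = P × t: a device drawing more power can still consume less energy if it finishes much sooner. A toy comparison with hypothetical numbers (not measurements from the paper):

```python
# Illustration of the energy trade-off: a higher-power FPGA can win on energy
# if it finishes the task much faster. Numbers are made up for illustration.
def energy_mj(power_mw, time_ms):
    return power_mw * time_ms / 1000.0   # energy in mJ

mcu_energy  = energy_mj(power_mw=30.0,  time_ms=500.0)   # slow, low-power MCU
fpga_energy = energy_mj(power_mw=400.0, time_ms=10.0)    # fast, power-hungry FPGA

print(mcu_energy, fpga_energy)   # 15.0 mJ vs 4.0 mJ: the FPGA wins on energy
```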
Operations research investigations of satellite power stations
NASA Technical Reports Server (NTRS)
Cole, J. W.; Ballard, J. L.
1976-01-01
A systems model reflecting the design concepts of Satellite Power Stations (SPS) was developed. The model is of sufficient scope to include the interrelationships of the following major design parameters: the transportation to and between orbits; assembly of the SPS; and maintenance of the SPS. The systems model is composed of a set of equations that are nonlinear with respect to the system parameters and decision variables. The model determines a figure of merit from which alternative concepts concerning transportation, assembly, and maintenance of satellite power stations are studied. A hybrid optimization model was developed to optimize the system's decision variables. The optimization model consists of a random search procedure and the optimal-steepest descent method. A FORTRAN computer program was developed to enable the user to optimize nonlinear functions using the model. Specifically, the computer program was used to optimize Satellite Power Station system components.
Markov chain algorithms: a template for building future robust low-power systems
Deka, Biplab; Birklykke, Alex A.; Duwe, Henry; Mansinghka, Vikash K.; Kumar, Rakesh
2014-01-01
Although computational systems are looking towards post CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs) as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications—Boolean satisfiability, sorting, low-density parity-check decoding and clustering—how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors with high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems. PMID:24842030
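To make the idea concrete, here is a minimal WalkSAT-style Markov-chain SAT solver with injected transition errors, in the spirit of the paper's fault-injection experiments; the clause set, error rate, and move heuristic are illustrative choices, not the authors' implementation:

```python
import random

def walksat(clauses, n_vars, error_rate=0.05, max_steps=100_000, p_random=0.5):
    """Markov-chain SAT search that tolerates random transition errors."""
    assign = [random.choice([False, True]) for _ in range(n_vars)]
    lit = lambda l: assign[abs(l) - 1] if l > 0 else not assign[abs(l) - 1]
    for _ in range(max_steps):
        unsat = [c for c in clauses if not any(lit(l) for l in c)]
        if not unsat:
            return assign                      # satisfying assignment found
        clause = random.choice(unsat)
        if random.random() < error_rate:       # injected transition error:
            v = random.randrange(n_vars)       # flip an arbitrary variable
        elif random.random() < p_random:
            v = abs(random.choice(clause)) - 1 # random-walk move
        else:                                  # greedy move: flip the variable
            def broken(i):                     # minimizing unsatisfied clauses
                assign[i] = not assign[i]
                n = sum(not any(lit(l) for l in c) for c in clauses)
                assign[i] = not assign[i]
                return n
            v = min((abs(l) - 1 for l in clause), key=broken)
        assign[v] = not assign[v]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```

Because the chain only needs to reach a satisfying state eventually, occasional wrong transitions slow it down rather than break it, which is exactly the robustness property the paper exploits.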
Are X-rays the key to integrated computational materials engineering?
Ice, Gene E.
2015-11-01
The ultimate dream of materials science is to predict materials behavior from composition and processing history. Owing to the growing power of computers, this long-time dream has recently found expression through worldwide excitement in a number of computation-based thrusts: integrated computational materials engineering, materials by design, computational materials design, three-dimensional materials physics and mesoscale physics. However, real materials have important crystallographic structures at multiple length scales, which evolve during processing and in service. Moreover, real materials properties can depend on the extreme tails in their structural and chemical distributions. This makes it critical to map structural distributions with sufficient resolution to resolve small structures and with sufficient statistics to capture the tails of distributions. For two-dimensional materials, there are high-resolution nondestructive probes of surface and near-surface structures with atomic or near-atomic resolution that can provide detailed structural, chemical and functional distributions over important length scales. However, there are no nondestructive three-dimensional probes with atomic resolution over the multiple length scales needed to understand most materials.
Computational Study of Low-Temperature Catalytic C-C Bond Activation of Alkanes for Portable Power
2013-06-05
inhibiting the reaction. We found that fluorinated phosphines are sufficiently π-accepting to satisfy this role. In our next step, we wanted to determine...of butane by Sen's catalyst, Chepaikin et al. [5] proposed that C-H cleavage occurs first. But the resulting catalyst fragment "X" is so electrophilic
Capacity Adequacy and Revenue Sufficiency in Electricity Markets With Wind Power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levin, Todd; Botterud, Audun
2015-05-01
We present a computationally efficient mixed-integer program (MIP) that determines optimal generator expansion decisions, as well as periodic unit commitment and dispatch. The model is applied to analyze the impact of increasing wind power capacity on the optimal generation mix and the profitability of thermal generators. In a case study, we find that increasing wind penetration reduces energy prices while the prices for operating reserves increase. Moreover, scarcity pricing for operating reserves through reserve shortfall penalties significantly impacts the prices and profitability of thermal generators. Without scarcity pricing, no thermal units are profitable; however, scarcity pricing can ensure profitability for peaking units at high wind penetration levels. Capacity payments can also ensure profitability, but the payments required for baseload units to break even increase with the amount of wind power. The results indicate that baseload units are most likely to experience revenue sufficiency problems when wind penetration increases and new baseload units are only developed when natural gas prices are high and wind penetration is low.
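A minimal sketch of this class of model, written with the PuLP library: a toy expansion MIP that chooses which generators to build to serve load, with a penalty on reserve shortfall as a crude stand-in for scarcity pricing. All units, costs, and loads are hypothetical:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

# Toy generation-expansion MIP: binary build decisions plus dispatch per period.
units = {"baseload": {"capex": 900, "varcost": 20, "cap": 100},
         "peaker":   {"capex": 300, "varcost": 80, "cap": 60}}
periods = {"offpeak": {"load": 80, "hours": 6000},
           "peak":    {"load": 140, "hours": 2760}}
reserve_margin, shortfall_penalty = 1.1, 5000

prob = LpProblem("toy_expansion", LpMinimize)
build = {u: LpVariable(f"build_{u}", cat="Binary") for u in units}
gen = {(u, p): LpVariable(f"gen_{u}_{p}", lowBound=0) for u in units for p in periods}
short = {p: LpVariable(f"short_{p}", lowBound=0) for p in periods}

# objective: capital cost + variable generation cost + reserve shortfall penalty
prob += (lpSum(units[u]["capex"] * build[u] for u in units)
         + lpSum(units[u]["varcost"] * gen[u, p] * periods[p]["hours"] / 1000
                 for u in units for p in periods)
         + lpSum(shortfall_penalty * short[p] for p in periods))

for p in periods:
    prob += lpSum(gen[u, p] for u in units) == periods[p]["load"]   # energy balance
    prob += (lpSum(units[u]["cap"] * build[u] for u in units) + short[p]
             >= reserve_margin * periods[p]["load"])                # reserve adequacy
for u in units:
    for p in periods:
        prob += gen[u, p] <= units[u]["cap"] * build[u]             # capacity limit

prob.solve()
print({u: build[u].value() for u in units}, {p: short[p].value() for p in periods})
```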
Mining protein-protein interaction networks: denoising effects
NASA Astrophysics Data System (ADS)
Marras, Elisabetta; Capobianco, Enrico
2009-01-01
A typical instrument in complex network studies is the analysis of statistical distributions. They are usually computed for measures that characterize network topology, and are aimed at capturing both structural and dynamic aspects. Protein-protein interaction networks (PPIN) have also been studied through several such measures. In general, a power law is expected to characterize scale-free networks. However, the mixing of the original noise cover with outlying information and other system-dependent fluctuations makes the empirical detection of the power law a difficult task. As a result the uncertainty level increases when looking at the observed sample; in particular, one may wonder whether the computed features are sufficient to explain the interactome. We then address noise problems by implementing both decomposition and denoising techniques that reduce the impact of factors known to affect the accuracy of power law detection.
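For a concrete sense of the detection problem, the sketch below fits a power-law exponent by maximum likelihood (the standard continuous-data estimator) to a clean synthetic sample and to the same sample with additive noise; the data are synthetic, not an interactome:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_power_law(n, alpha, x_min):
    # inverse-CDF sampling: x = x_min * (1 - u)^(-1/(alpha - 1))
    return x_min * (1 - rng.random(n)) ** (-1.0 / (alpha - 1.0))

def alpha_mle(x, x_min):
    # continuous power-law MLE: alpha = 1 + n / sum(log(x / x_min))
    x = x[x >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

clean = sample_power_law(5000, alpha=2.5, x_min=1.0)
noisy = clean + rng.exponential(0.5, size=clean.size)   # additive "noise cover"

print(alpha_mle(clean, 1.0))   # close to the true 2.5
print(alpha_mle(noisy, 1.0))   # biased: the noise obscures the power law
```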
Deep Learning in Medical Imaging: General Overview
Lee, June-Goo; Jun, Sanghoon; Cho, Young-Won; Lee, Hyunna; Kim, Guk Bae; Seo, Joon Beom; Kim, Namkug
2017-01-01
The artificial neural network (ANN)–a machine learning technique inspired by the human neuronal synapse system–was introduced in the 1950s. However, the ANN was previously limited in its ability to solve actual problems, due to the vanishing gradient and overfitting problems with training of deep architecture, lack of computing power, and primarily the absence of sufficient data to train the computer system. Interest in this concept has lately resurfaced, due to the availability of big data, enhanced computing power with the current graphics processing units, and novel algorithms to train the deep neural network. Recent studies on this technology suggest its potential to perform better than humans in some visual and auditory recognition tasks, which may portend its applications in medicine and healthcare, especially in medical imaging, in the foreseeable future. This review article offers perspectives on the history, development, and applications of deep learning technology, particularly regarding its applications in medical imaging. PMID:28670152
Precision Parameter Estimation and Machine Learning
NASA Astrophysics Data System (ADS)
Wandelt, Benjamin D.
2008-12-01
I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution or χ²-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.
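A minimal sketch of the precompute-then-learn pattern, with a stand-in for the costly likelihood (not PICo or RICO): the expensive χ² is tabulated over a parameter grid in parallel, and a serial Metropolis chain then runs against the cheap interpolated surrogate:

```python
import numpy as np
from multiprocessing import Pool

def expensive_chi2(theta):
    # stand-in for a likelihood evaluation that really takes minutes
    return float((theta - 0.3) ** 2 / 0.01)

grid = np.linspace(0.0, 1.0, 201)

if __name__ == "__main__":
    with Pool() as pool:                       # trivially parallel precomputation
        table = np.array(pool.map(expensive_chi2, grid))

    def chi2_surrogate(theta):                 # cheap lookup: linear interpolation
        return np.interp(theta, grid, table)

    # serial Metropolis sampling against the surrogate, posterior ~ exp(-chi2/2)
    rng, theta, chain = np.random.default_rng(1), 0.5, []
    for _ in range(20_000):
        prop = theta + 0.05 * rng.standard_normal()
        if 0.0 <= prop <= 1.0 and rng.random() < np.exp(
                0.5 * (chi2_surrogate(theta) - chi2_surrogate(prop))):
            theta = prop
        chain.append(theta)
    print(np.mean(chain[5000:]), np.std(chain[5000:]))   # ~0.3 +/- 0.1
```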
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, George S.; Brown, William Michael
2007-09-01
Techniques for high throughput determinations of interactomes, together with high-resolution protein colocalization maps within organelles and through membranes, will soon create a vast resource. With these data, biological descriptions, akin to the high dimensional phase spaces familiar to physicists, will become possible. These descriptions will capture sufficient information to make possible realistic, system-level models of cells. The descriptions and the computational models they enable will require powerful computing techniques. This report is offered as a call to the computational biology community to begin thinking at this scale and as a challenge to develop the required algorithms and codes to make use of the new data.
Voice/Natural Language Interfacing for Robotic Control.
1987-11-01
...until major computing power can be profitably allocated to the speech recognition process, off-the-shelf units will never have sufficient intelligence to...coordinate transformation for a location, and opening or closing the gripper's toggles. External to world operations, each joint may be rotated
The gross energy balance of solar active regions
NASA Technical Reports Server (NTRS)
Evans, K. D.; Pye, J. P.; Hutcheon, R. J.; Gerassimenko, M.; Krieger, A. S.; Davis, J. M.; Vesecky, J. F.
1977-01-01
Parker's (1974) model in which sunspots denote regions of increased heat transport from the convection zone is briefly described. The amount of excess mechanically transported power supposed to be delivered to the atmosphere is estimated for a typical active region, and the total radiative power output of the active-region atmosphere is computed. It is found that only a very small fraction (about 0.001) of the sunspot 'missing flux' can be accounted for by radiative emission from the atmosphere above a spot group in the manner suggested by Parker. The power-loss mechanism associated with mass loss to the solar wind is briefly considered and shown not to be sufficient to account for the sunspot missing flux.
Bootstrapping in a language of thought: a formal model of numerical concept learning.
Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D
2012-05-01
In acquiring number words, children exhibit a qualitative leap in which they transition from understanding a few number words, to possessing a rich system of interrelated numerical concepts. We present a computational framework for understanding this inductive leap as the consequence of statistical inference over a sufficiently powerful representational system. We provide an implemented model that is powerful enough to learn number word meanings and other related conceptual systems from naturalistic data. The model shows that bootstrapping can be made computationally and philosophically well-founded as a theory of number learning. Our approach demonstrates how learners may combine core cognitive operations to build sophisticated representations during the course of development, and how this process explains observed developmental patterns in number word learning.
Design and development of a solar powered mobile laboratory
NASA Astrophysics Data System (ADS)
Jiao, L.; Simon, A.; Barrera, H.; Acharya, V.; Repke, W.
2016-08-01
This paper describes the design and development of a solar powered mobile laboratory (SPML) system. The SPML provides a mobile platform that schools, universities, and communities can use to give students and staff access to laboratory environments where dedicated laboratories are not available. The lab includes equipment like 3D printers, computers, and soldering stations. The primary power source of the system is solar PV which allows the laboratory to be operated in places where the grid power is not readily available or not sufficient to power all the equipment. The main system components include PV panels, junction box, battery, charge controller, and inverter. Not only is it used to teach students and staff how to use the lab equipment, but it is also a great tool to educate the public about solar PV technologies.
Computer problem-solving coaches for introductory physics: Design and usability studies
NASA Astrophysics Data System (ADS)
Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew
2016-06-01
The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how effective such coaches might be, they will only be useful if they are attractive to students. We describe the design and testing of a set of web-based computer programs that act as personal coaches to students while they practice solving problems from introductory physics. The coaches are designed to supplement regular human instruction, giving students access to effective forms of practice outside class. We present results from large-scale usability tests of the computer coaches and discuss their implications for future versions of the coaches.
Quantum computing on encrypted data
NASA Astrophysics Data System (ADS)
Fisher, K. A. G.; Broadbent, A.; Shalm, L. K.; Yan, Z.; Lavoie, J.; Prevedel, R.; Jennewein, T.; Resch, K. J.
2014-01-01
The ability to perform computations on encrypted data is a powerful tool for protecting privacy. Recently, protocols to achieve this on classical computing systems have been found. Here, we present an efficient solution to the quantum analogue of this problem that enables arbitrary quantum computations to be carried out on encrypted quantum data. We prove that an untrusted server can implement a universal set of quantum gates on encrypted quantum bits (qubits) without learning any information about the inputs, while the client, knowing the decryption key, can easily decrypt the results of the computation. We experimentally demonstrate, using single photons and linear optics, the encryption and decryption scheme on a set of gates sufficient for arbitrary quantum computations. As our protocol requires few extra resources compared with other schemes it can be easily incorporated into the design of future quantum servers. These results will play a key role in enabling the development of secure distributed quantum systems.
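The classical-simulation sketch below illustrates the Pauli one-time pad that underlies such schemes: the client hides a qubit with secret keys (a, b), the server applies a gate (here a Hadamard) to the ciphertext, and the client updates its keys to decrypt. This is a single-qubit numpy toy under standard textbook rules, not the photonic protocol of the paper:

```python
import numpy as np

# Pauli one-time pad on a single qubit, statevector simulation only.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
pauli = lambda P, k: np.linalg.matrix_power(P, int(k))

rng = np.random.default_rng(7)
a, b = rng.integers(0, 2, size=2)          # secret one-time-pad keys

psi = np.array([0.6, 0.8], dtype=complex)  # client's plaintext qubit
enc = pauli(X, a) @ pauli(Z, b) @ psi      # ciphertext sent to the server

out = H @ enc                              # server computes, learning nothing

# Hadamard conjugates X into Z and vice versa, so the keys update to
# (a', b') = (b, a); the client undoes the pad (up to a global phase).
a2, b2 = b, a
dec = pauli(Z, b2) @ pauli(X, a2) @ out
target = H @ psi
print(np.allclose(dec, target) or np.allclose(dec, -target))   # True
```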
Intelligent redundant actuation system requirements and preliminary system design
NASA Technical Reports Server (NTRS)
Defeo, P.; Geiger, L. J.; Harris, J.
1985-01-01
Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.
Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics
Dowding, Irene; Haufe, Stefan
2018-01-01
Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
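A minimal version of the sufficient-summary-statistic idea is the fixed-effects (inverse-variance) combination familiar from meta-analysis: weight each subject's effect by the inverse of its estimated variance rather than treating all subject means equally. The sketch below uses synthetic data, not the EEG experiment, and omits the paper's treatment of between-subject variance:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects, true_effect = 20, 0.3

trials = rng.integers(30, 300, n_subjects)           # unequal trial counts
means = np.array([rng.normal(true_effect, 1.0 / np.sqrt(n)) for n in trials])
variances = 1.0 / trials                             # estimated var of each subject mean

w = 1.0 / variances                                  # inverse-variance weights
effect = np.sum(w * means) / np.sum(w)               # pooled group-level effect
z = effect / np.sqrt(1.0 / np.sum(w))                # z-statistic against H0: effect = 0

naive_t = np.mean(means) / (np.std(means, ddof=1) / np.sqrt(n_subjects))
print(f"weighted z = {z:.2f}, naive t = {naive_t:.2f}")
```

When within-subject variances differ substantially, the weighted statistic is typically more powerful than the naive group-level t-test.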
NASA Astrophysics Data System (ADS)
Showstack, Randy
Following a recent collision, a fire, a series of computer and power failures, and other mishaps aboard the Russian space station Mir, the U.S. Congress held a hearing on September 18 to question the safety of American astronauts staying aboard the aging spacecraft. "There has been sufficient evidence put before this hearing to raise doubts about the safety of continued American long-term presence on the Mir," said House Science Committee Chairman Rep. James Sensenbrenner (R-Wisc.) at the hearing.
An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Zhou, Ning
With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability are being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode, and thus will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide capabilities needed for better decision support by utilizing high performance computing (HPC) techniques and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.
Parallel Calculations in LS-DYNA
NASA Astrophysics Data System (ADS)
Vartanovich Mkrtychev, Oleg; Aleksandrovich Reshetov, Andrey
2017-11-01
Structural mechanics today exhibits a trend towards numerical solutions of increasingly large and detailed problems, which requires that the capacities of computing systems be enhanced. Such enhancement can be achieved by different means. E.g., in case a computing system is represented by a workstation, its components can be replaced and/or extended (CPU, memory etc.). In essence, such modification eventually entails replacement of the entire workstation, i.e. replacement of certain components necessitates exchange of others (faster CPUs and memory devices require buses with higher throughput etc.). Special consideration must be given to the capabilities of modern video cards. They constitute powerful computing systems capable of running data processing in parallel. Interestingly, the tools originally designed to render high-performance graphics can be applied to solving problems not immediately related to graphics (CUDA, OpenCL, shaders etc.). However, not all software suites utilize video cards' capacities. Another way to increase the capacity of a computing system is to implement a cluster architecture: to add cluster nodes (workstations) and to increase the network communication speed between the nodes. The advantage of this approach is extensibility: a quite powerful system can be obtained by combining not particularly powerful nodes, and separate nodes may possess different capacities. This paper considers the use of a clustered computing system for solving problems of structural mechanics with the LS-DYNA software. To establish a range of dependencies, a mere 2-node cluster proved sufficient.
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
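A toy Monte Carlo illustration of the probabilistic approach described above: propagate uncertain inputs through a drastically simplified power-capability model and report a spread instead of a single deterministic number. The model and distributions are hypothetical, not SPACE:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

array_area = 375.0                              # m^2, fixed design value
efficiency = rng.normal(0.145, 0.005, n)        # uncertain cell efficiency
insolation = rng.normal(1367.0, 10.0, n)        # W/m^2, solar-constant spread
degradation = rng.uniform(0.95, 1.00, n)        # uncertain seasonal degradation

power_kw = array_area * efficiency * insolation * degradation / 1000.0

print(f"mean = {power_kw.mean():.1f} kW, "
      f"5th-95th pct = {np.percentile(power_kw, 5):.1f}"
      f"-{np.percentile(power_kw, 95):.1f} kW")
```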
Quantum Monte Carlo: Faster, More Reliable, And More Accurate
NASA Astrophysics Data System (ADS)
Anderson, Amos Gerald
2010-06-01
The Schrodinger Equation has been available for about 83 years, but today, we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we're held back by lack of sufficient computing power. Consequently, effort is applied to find acceptable approximations to facilitate real time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we've seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing quantity instead of quality of processors, they have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known use of a graphics card for computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, and yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two particle correlation functions, designed with both flexibility and simplicity considerations, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation, by manipulating configuration weights, thus facilitating efficient and robust calculations. Our combination of Generalized Valence Bond wavefunctions, improved correlation functions, and stabilized weighting techniques for calculations run on graphics cards represents a new way of using Quantum Monte Carlo to study arbitrarily sized molecules.
Conroy, M.J.; Samuel, M.D.; White, Joanne C.
1995-01-01
Statistical power (and conversely, Type II error) is often ignored by biologists. Power is important to consider in the design of studies, to ensure that sufficient resources are allocated to address a hypothesis under examination. Determining appropriate sample size when designing experiments or calculating power for a statistical test requires an investigator to consider the importance of making incorrect conclusions about the experimental hypothesis and the biological importance of the alternative hypothesis (or the biological effect size researchers are attempting to measure). Poorly designed studies frequently provide results that are at best equivocal, and do little to advance science or assist in decision making. Completed studies that fail to reject Ho should consider power and the related probability of a Type II error in the interpretation of results, particularly when implicit or explicit acceptance of Ho is used to support a biological hypothesis or management decision. Investigators must consider the biological question they wish to answer (Tacha et al. 1982) and assess power on the basis of biologically significant differences (Taylor and Gerrodette 1993). Power calculations are somewhat subjective, because the author must specify either f or the minimum difference that is biologically important. Biologists may have different ideas about what values are appropriate. While determining biological significance is of central importance in power analysis, it is also an issue of importance in wildlife science. Procedures, references, and computer software to compute power are accessible; therefore, authors should consider power. We welcome comments or suggestions on this subject.
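Acting on this advice is straightforward with standard tools; for example, power and required sample size for a two-sample t-test can be computed as below, where Cohen's d = 0.5 stands in for a biologically significant difference (an assumed value, chosen for illustration):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# power achieved with a given design
power = analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05)
print(f"power with n=30 per group: {power:.2f}")          # ~0.47: underpowered

# sample size needed to reach a target power
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"n per group for 80% power: {n_needed:.0f}")       # ~64
```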
Characterizing and Optimizing the Performance of the MAESTRO 49-Core Processor
2014-03-27
process large volumes of data, it is necessary during testing to vary the dimensions of the inbound data matrix to determine what effect this has on the...needed that can process the extra data these systems seek to collect. However, the space environment presents a number of threats, such as ambient or...induced faults, and that also have sufficient computational power to handle the large flow of data they encounter. This research investigates one
Laser Boron Fusion Reactor With Picosecond Petawatt Block Ignition
NASA Astrophysics Data System (ADS)
Hora, Heinrich; Eliezer, Shalom; Wang, Jiaxiang; Korn, Georg; Nissim, Noaz; Xu, Yan-Xia; Lalousis, Paraskevas; Kirchhoff, Gotz J.; Miley, George H.
2018-05-01
For developing a laser boron fusion reactor driven by picosecond laser pulses of more than 30 petawatts power, advances are reported on computations of plasma block generation by the dielectric explosion during the interaction. Further results concern the direct-drive ignition mechanism by a single laser pulse without the problems of spherical irradiation. For the sufficiently large stopping lengths of the generated alpha particles in the plasma, results from other projects can be used.
A Model of Human Cognitive Behavior in Writing Code for Computer Programs. Volume 1
1975-05-01
nearly all programming languages, each line of code actually involves a great many decisions - basic statement types, variable and expression choices...labels, etc. - and any heuristic which evaluates code on the basis of a single decision is not likely to have sufficient power. Only the use of plans...recalculated in the following line because it was needed again. The second reason is that there are some decisions about the structure of a program
El Silencio: a rural community of learners and media creators.
Urrea, Claudia
2010-01-01
A one-to-one learning environment, where each participating student and the teacher use a laptop computer, provides an invaluable opportunity for rethinking learning and studying the ways in which children can program computers and learn to think about their own thinking styles and become epistemologists. This article presents a study done in a rural school in Costa Rica in which students used computers to create media. Three important components of the work are described: (1) student-owned technology that can accompany students as they interact at home and in the broader community, (2) activities that are designed with sufficient scope to encourage the appropriation of powerful ideas, and (3) teacher engagement in activity design with simultaneous support from a knowledge network of local and international colleagues and mentors.
NASA Technical Reports Server (NTRS)
El-Genk, Mohamed S.; Morley, Nicholas J.
1991-01-01
Multiyear civilian manned missions to explore the surface of Mars are thought by NASA to be possible early in the next century. Expeditions to Mars, as well as permanent bases, are envisioned to require enhanced piloted vehicles to conduct science and exploration activities. Piloted rovers, with 30 kWe user net power (for drilling, sampling and sample analysis, onboard computers and instrumentation, vehicle thermal management, and astronaut life support systems) in addition to mobility, are being considered. The rover design, for this study, included a four car train type vehicle complete with a hybrid solar photovoltaic/regenerative fuel cell auxiliary power system (APS). This system was designed to power the primary control vehicle. The APS supplies life support power for four astronauts and a limited degree of mobility, allowing the primary control vehicle to limp back to either a permanent base or an ascent vehicle. The results showed that the APS described above, with a mass of 667 kg, was sufficient to provide life support power and a top speed of five km/h for 6 hours per day. It was also seen that the factors that had the largest effect on the APS mass were the life support power, the number of astronauts, and the PV cell efficiency. The topics covered include: (1) power system options; (2) rover layout and design; (3) parametric analysis of total mass and power requirements for a manned Mars rover; (4) radiation shield design; and (5) energy conversion systems.
Applications of high power lasers. [using reflection holograms for machining and surface treatment
NASA Technical Reports Server (NTRS)
Angus, J. C.
1979-01-01
The use of computer generated, reflection holograms in conjunction with high power lasers for precision machining of metals and ceramics was investigated. The reflection holograms which were developed were made to work at both optical (He-Ne, 6328 Å) and infrared (CO2, 10.6 μm) wavelengths; they meet the primary practical requirement of ruggedness and are relatively economical and simple to fabricate. The technology is sufficiently advanced now so that reflection holography could indeed be used as a practical manufacturing device in certain applications requiring low power densities. However, the present holograms are energy inefficient and much of the laser power is lost in the zero order spot and higher diffraction orders. Improvements of laser machining over conventional methods are discussed and additional applications are listed. Possible uses in the electronics industry include drilling holes in printed circuit boards, making soldered connections, and resistor trimming.
Experimental power density distribution benchmark in the TRIGA Mark II reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snoj, L.; Stancar, Z.; Radulovic, V.
2012-07-01
In order to improve the power calibration process and to benchmark the existing computational model of the TRIGA Mark II reactor at the Jožef Stefan Institute (JSI), a bilateral project was started as part of the agreement between the French Commissariat à l'énergie atomique et aux énergies alternatives (CEA) and the Ministry of higher education, science and technology of Slovenia. One of the objectives of the project was to analyze and improve the power calibration process of the JSI TRIGA reactor (procedural improvement and uncertainty reduction) by using absolutely calibrated CEA fission chambers (FCs). This is one of the few available power density distribution benchmarks for testing not only the fission rate distribution but also the absolute values of the fission rates. Our preliminary calculations indicate that the total experimental uncertainty of the measured reaction rate is sufficiently low that the experiments could be considered as benchmark experiments.
Using a Cray Y-MP as an array processor for a RISC Workstation
NASA Technical Reports Server (NTRS)
Lamaster, Hugh; Rogallo, Sarah J.
1992-01-01
As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980's, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate to use in a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can have a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment is described which demonstrates that matrix multiplication can be executed remotely on a large system to speed the execution over that experienced on a workstation.
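A back-of-envelope version of the amortization argument, with hypothetical round-number rates: remote matrix multiplication pays off once the arithmetic saved exceeds the RPC latency and data transfer overhead.

```python
# When does remote C = A @ B beat local execution? All rates are assumed
# round numbers for illustration, not measurements from the paper.
def remote_wins(n, local_mflops=10.0, remote_mflops=300.0,
                rpc_latency_s=0.05, bandwidth_mb_s=1.0):
    flops = 2.0 * n**3 / 1e6                   # Mflop for an n x n matrix multiply
    transfer_mb = 3 * 8 * n * n / 1e6          # send A and B, receive C (doubles)
    t_local = flops / local_mflops
    t_remote = (flops / remote_mflops + rpc_latency_s
                + transfer_mb / bandwidth_mb_s)
    return t_local > t_remote

for n in (50, 100, 200, 400, 800):
    print(n, remote_wins(n))    # remote execution starts winning as n grows
```

With these rates the crossover falls between n = 100 and n = 200: compute grows as n³ while transfer grows only as n², so large enough matrices always amortize the overhead.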
Load Balancing Unstructured Adaptive Grids for CFD Problems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid
1996-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
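The accept/reject logic reads naturally as a small decision function; the imbalance tolerance, remapping cost, and gain model below are placeholders, not the paper's measured costs:

```python
# Repartition after mesh adaption only when imbalance warrants it, and accept
# the new mapping only when the expected gain outweighs the remapping cost.
def rebalance(loads, imbalance_tol=1.15, remap_cost=50.0, horizon_steps=200):
    avg = sum(loads) / len(loads)
    imbalance = max(loads) / avg
    if imbalance < imbalance_tol:
        return "keep current partitions"
    # time saved per step if perfectly balanced: max load drops to the average
    gain = (max(loads) - avg) * horizon_steps
    if gain > remap_cost:
        return f"repartition (imbalance {imbalance:.2f}, gain {gain:.0f} > cost)"
    return "imbalanced, but remapping cost not compensated"

print(rebalance([10.0, 11.0, 10.5, 9.8]))       # balanced: keep
print(rebalance([10.0, 25.0, 10.5, 9.8]))       # repartition
```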
Time-Resolved Temperature Determinations from Raman Scattering of TiO2 Coatings During Pulsed Laser Irradiation
Adar
1988-07-01
optical coatings. [1] In single and multilayer anatase TiO2 coatings, sufficiently intense pulsed laser irradiation at 532 nm led to observation of...temperatures of pulsed-laser-irradiated anatase coatings have been computed from Stokes/anti-Stokes band intensity ratios at zero time delay as a function of...
Design and control of a macro-micro robot for precise force applications
NASA Technical Reports Server (NTRS)
Wang, Yulun; Mangaser, Amante; Laby, Keith; Jordan, Steve; Wilson, Jeff
1993-01-01
Creating a robot which can delicately interact with its environment has been the goal of much research. Primarily two difficulties have made this goal hard to attain. Control strategies that enable precise force manipulation are difficult to execute in real time because such algorithms have been too computationally complex for available controllers. Also, a robot mechanism which can quickly and precisely execute a force command is difficult to design. Actuation joints must be sufficiently stiff, frictionless, and lightweight so that desired torques can be accurately applied. This paper describes a robotic system which is capable of delicate manipulations. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8 degree of freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load balanced for maximum execution speed on the multiprocessor system. Delicate force tasks, such as polishing, finishing, cleaning, and deburring, are the target applications of the robot.
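For reference, the target dynamics that impedance control imposes on the tool tip, in the standard textbook form (the paper's exact formulation and gains may differ; $e = x - x_d$ is the tip position error and $F_{\mathrm{ext}}$ the contact force):

```latex
M_d\,\ddot{e} + D_d\,\dot{e} + K_d\,e = F_{\mathrm{ext}}
```

Choosing the virtual inertia $M_d$, damping $D_d$, and stiffness $K_d$ sets how compliantly the tip yields to contact forces during tasks such as polishing or deburring.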
Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter
2015-01-01
Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438
3-d finite element model development for biomechanics: a software demonstration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollerbach, K.; Hollister, A.M.; Ashby, E.
1997-03-01
Finite element analysis is becoming an increasingly important part of biomechanics and orthopedic research, as computational resources become more powerful and data handling algorithms become more sophisticated. Until recently, tools with sufficient power did not exist or were not accessible to adequately model complicated, three-dimensional, nonlinear biomechanical systems. In the past, finite element analyses in biomechanics have often been limited to two-dimensional approaches, linear analyses, or simulations of single tissue types. Today, we have the resources to model fully three-dimensional, nonlinear, multi-tissue, and even multi-joint systems. The authors will present the process of developing these kinds of finite element models, using human hand and knee examples, and will demonstrate their software tools.
Rearranging Pionless Effective Field Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin Savage; Silas Beane
2001-11-19
We point out a redundancy in the operator structure of the pionless effective field theory which dramatically simplifies computations. This redundancy is best exploited by using dibaryon fields as fundamental degrees of freedom. In turn, this suggests a new power counting scheme which sums range corrections to all orders. We explore this method with a few simple observables: the deuteron charge form factor, n p -> d gamma, and Compton scattering from the deuteron. Higher dimension operators involving electroweak gauge fields are not renormalized by the s-wave strong interactions, and therefore do not scale with inverse powers of the renormalization scale. Thus, naive dimensional analysis of these operators is sufficient to estimate their contribution to a given process.
An Integrative Account of Constraints on Cross-Situational Learning
Yurovsky, Daniel; Frank, Michael C.
2015-01-01
Word-object co-occurrence statistics are a powerful information source for vocabulary learning, but there is considerable debate about how learners actually use them. While some theories hold that learners accumulate graded, statistical evidence about multiple referents for each word, others suggest that they track only a single candidate referent. In two large-scale experiments, we show that neither account is sufficient: Cross-situational learning involves elements of both. Further, the empirical data are captured by a computational model that formalizes how memory and attention interact with co-occurrence tracking. Together, the data and model unify opposing positions in a complex debate and underscore the value of understanding the interaction between computational and algorithmic levels of explanation. PMID:26302052
NASA Astrophysics Data System (ADS)
Moores, Brad A.; Sletten, Lucas R.; Viennot, Jeremie; Lehnert, K. W.
Man-made systems of interacting qubits are a promising and powerful way of exploring many-body spin physics beyond classical computation. Although transmon qubits are perhaps the most advanced quantum computing technology, building a system of such qubits designed to emulate a system of many interacting spins is hindered by the mismatch of scales between the transmons and the electromagnetic modes that couple them. We propose a strategy to overcome this mismatch by using surface acoustic waves, which couple to qubits piezoelectrically and have micron wavelengths at GHz frequencies. In this talk, we will present characterizations of transmon qubits fabricated on a piezoelectric material, and show that their coherence properties are sufficient to explore acoustically mediated qubit interactions.
Precision calculations of the cosmic shear power spectrum projection
NASA Astrophysics Data System (ADS)
Kilbinger, Martin; Heymans, Catherine; Asgari, Marika; Joudaki, Shahab; Schneider, Peter; Simon, Patrick; Van Waerbeke, Ludovic; Harnois-Déraps, Joachim; Hildebrandt, Hendrik; Köhlinger, Fabian; Kuijken, Konrad; Viola, Massimo
2017-12-01
We compute the spherical-sky weak-lensing power spectrum of the shear and convergence. We discuss various approximations, such as flat-sky, and first- and second-order Limber equations for the projection. We find that the impact of adopting these approximations is negligible when constraining cosmological parameters from current weak-lensing surveys. This is demonstrated using data from the Canada-France-Hawaii Telescope Lensing Survey. We find that the reported tension with Planck cosmic microwave background temperature anisotropy results cannot be alleviated. For future large-scale surveys with unprecedented precision, we show that the spherical second-order Limber approximation will provide sufficient accuracy. In this case, the cosmic-shear power spectrum is shown to be in agreement with the full projection at the sub-percent level for ℓ > 3, with the corresponding errors an order of magnitude below cosmic variance for all ℓ. When computing the two-point shear correlation function, we show that the flat-sky fast Hankel transformation results in errors below two percent compared to the full spherical transformation. In the spirit of reproducible research, our numerical implementation of all approximations and the full projection are publicly available within the package NICAEA at http://www.cosmostat.org/software/nicaea.
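For readers unfamiliar with the projection being approximated, the following sketch shows the form of the first-order (extended) Limber integral, C_ell = int dchi W(chi)^2 chi^-2 P(k=(ell+1/2)/chi, chi). The constant kernel and power-law spectrum are toy stand-ins, not the NICAEA implementation.

```python
import numpy as np

def limber_cl(ell, chi, weight, power):
    """First-order (extended) Limber approximation of a projected power
    spectrum. 'weight' and 'power' are placeholders for a real lensing
    kernel W(chi) and matter power spectrum P(k, chi); only the projection
    integral itself is sketched here."""
    k = (ell + 0.5) / chi
    integrand = weight(chi) ** 2 / chi ** 2 * power(k, chi)
    return np.trapz(integrand, chi)

# Toy example: constant kernel and a power-law spectrum (assumptions).
chi = np.linspace(100.0, 3000.0, 500)   # comoving distance grid [Mpc/h]
cl = limber_cl(100, chi, lambda x: 1e-3, lambda k, x: k ** -2.0)
print(cl)
```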
Global simulation of the Czochralski silicon crystal growth in ANSYS FLUENT
NASA Astrophysics Data System (ADS)
Kirpo, Maksims
2013-05-01
Silicon crystals for high-efficiency solar cells are produced mainly by the Czochralski (CZ) crystal growth method. Computer simulations of the CZ process have established themselves as a basic tool for optimization of the growth process, allowing production costs to be reduced while keeping the quality of the crystalline material high. The author shows the application of the general Computational Fluid Dynamics (CFD) code ANSYS FLUENT to the solution of a static two-dimensional (2D) axisymmetric global model of a small industrial furnace for growing silicon crystals with a diameter of 100 mm. The presented numerical model is self-sufficient and incorporates the most important physical phenomena of the CZ growth process, including latent heat generation during crystallization, crystal-melt interface deflection, turbulent heat and mass transport, and oxygen transport. The demonstrated approach allows the heater power to be found for a specified crystal pulling rate, although the obtained power values are smaller than those found in the literature for the studied furnace. Nevertheless, the described approach is successfully verified with respect to the heater power by applying it to numerical simulations of real CZ pullers by "Bosch Solar Energy AG".
High Frequency QPOs due to Black Hole Spin
NASA Technical Reports Server (NTRS)
Kazanas, Demos; Fukumura, K.
2009-01-01
We present detailed computations of the orbits of photons emitted by flares at the innermost stable circular orbit (ISCO) of accretion disks around rotating black holes. We show that for a sufficiently large spin parameter, i.e. a > 0.94 M, a sufficient number of photons from each flare arrive at an observer after multiple orbits around the black hole to produce a "photon echo" of constant lag, i.e. independent of the relative phase between the black hole and the observer, of T ≈ 14 M. This constant time delay then leads to a power spectrum with a QPO at a frequency ν ≈ 1/(14 M), even for a totally random ensemble of such flares. Observation of such a QPO would provide incontrovertible evidence for the high spin of the black hole and a very accurate, independent measurement of its mass.
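The quoted lag T ≈ 14 M is in geometric units; converting it to a physical QPO frequency is a one-line calculation, ν = c³/(14 GM). The 10-solar-mass example below is ours, for illustration only.

```python
# Convert the constant echo lag T ~ 14 M (geometric units) into a physical
# QPO frequency, nu = c^3 / (14 G M). The 10 M_sun mass is an assumption.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def qpo_frequency(mass_solar, lag_in_M=14.0):
    t_g = G * (mass_solar * M_sun) / c ** 3   # light-crossing time GM/c^3 [s]
    return 1.0 / (lag_in_M * t_g)             # [Hz]

print(f"{qpo_frequency(10.0):.0f} Hz")        # ~1.4 kHz for a 10 M_sun hole
```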
Design of on-board parallel computer on nano-satellite
NASA Astrophysics Data System (ADS)
You, Zheng; Tian, Hexiang; Yu, Shijie; Meng, Li
2007-11-01
This paper presents a scheme for an on-board parallel computer system designed for a nano-satellite. Based on the requirements that the nano-satellite have small volume, low weight, low power consumption, and intelligence, this scheme abandons the traditional single-computer and dual-computer systems in an endeavor to improve dependability, capability, and intelligence simultaneously. Following an integrated design approach, it employs a shared-memory parallel computer system as the main structure; connects the telemetry, attitude control, and payload systems by an intelligent bus; designs management that can handle static tasks and dynamic task scheduling, and protect and recover on-site status, in light of the parallel algorithms; and establishes mechanisms for fault diagnosis, restoration, and system reconfiguration. The result is an on-board parallel computer system with high dependability, capability, and intelligence; flexible management of hardware resources; an excellent software system; and high extensibility, fully in keeping with the concept and trend of integrated electronic design.
Information infrastructure for emergency medical services.
Orthner, Helmuth; Mishra, Ninad; Terndrup, Thomas; Acker, Joseph; Grimes, Gary; Gemmill, Jill; Battles, Marcie
2005-01-01
The pre-hospital emergency medical and public safety information environment is nearing a threshold of significant change. The change is driven in part by several emerging technologies of sufficient utility to be widely adopted, such as secure, high-speed wireless communication in local and wide area networks (wLAN, 3G), Geographic Information Systems (GIS), Global Positioning Systems (GPS), and powerful handheld computing and communication services. We propose a conceptual model to enable improved clinical decision making in the pre-hospital environment using these change agents.
Combining dynamical decoupling with fault-tolerant quantum computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, Hui Khoon; Preskill, John; Lidar, Daniel A.
2011-07-15
We study how dynamical decoupling (DD) pulse sequences can improve the reliability of quantum computers. We prove upper bounds on the accuracy of DD-protected quantum gates and derive sufficient conditions for DD-protected gates to outperform unprotected gates. Under suitable conditions, fault-tolerant quantum circuits constructed from DD-protected gates can tolerate stronger noise and have a lower overhead cost than fault-tolerant circuits constructed from unprotected gates. Our accuracy estimates depend on the dynamics of the bath that couples to the quantum computer and can be expressed either in terms of the operator norm of the bath's Hamiltonian or in terms of the power spectrum of bath correlations; we explain in particular how the performance of recursively generated concatenated pulse sequences can be analyzed from either viewpoint. Our results apply to Hamiltonian noise models with limited spatial correlations.
A grid-enabled web service for low-resolution crystal structure refinement.
O'Donovan, Daniel J; Stokes-Rees, Ian; Nam, Yunsun; Blacklow, Stephen C; Schröder, Gunnar F; Brunger, Axel T; Sliz, Piotr
2012-03-01
Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.
Majdecka, Dominika; Draminska, Sylwia; Janusek, Dariusz; Krysinski, Paweł; Bilewicz, Renata
2018-04-15
In this work, we propose an integrated self-powered sensing system, driven by a hybrid biofuel cell (HBFC) with carbon paper discs coated with multiwalled carbon nanotubes. The sensing system has a biocathode made from laccase or bilirubin oxidase, and the anode is made from a zinc plate. The system includes a dedicated custom-built electronic control unit for the detection of oxygen and catechol analytes, which are central to medical and environmental applications. Both the HBFC and the sensors operate in a mediatorless direct electron transfer mode. The measured characteristics of the HBFC under externally applied resistance included the power-time dependencies under flow cell conditions; the sensors' performance was evaluated by cyclic voltammetry and chronoamperometry. The HBFC is integrated with the analytical devices and operates in a pulse mode for long-run monitoring experiments. The HBFC generated sufficient power for wireless data transmission to a local computer. Copyright © 2017 Elsevier B.V. All rights reserved.
Vascular surgical data registries for small computers.
Kaufman, J L; Rosenberg, N
1984-08-01
Recent designs for computer-based vascular surgical registries and clinical data bases have employed large centralized systems with formal programming and mass storage. Small computers, of the types created for office use or for word processing, now contain sufficient speed and memory storage capacity to allow construction of decentralized office-based registries. Using a standardized dictionary of terms and a method of data organization adapted to word processing, we have created a new vascular surgery data registry, "VASREG." Data files are organized without programming, and a limited number of powerful logical statements in English are used for sorting. The capacity is 25,000 records with current inexpensive memory technology. VASREG is adaptable to computers made by a variety of manufacturers, and interface programs are available for conversion of the word-processor-formatted registry data into forms suitable for analysis by programs written in a standard programming language. This is a low-cost clinical data registry available to any physician. With a standardized dictionary, preparation of regional and national statistical summaries may be facilitated.
A new tool called DISSECT for analysing large genomic data sets using a Big Data approach
Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert
2015-01-01
Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is new and freely available software that is able to exploit the distributed-memory parallel computational architectures of compute clusters to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010
NASA Astrophysics Data System (ADS)
Kobayashi, Kenya; Sudo, Masaki; Omura, Ichiro
2018-04-01
Field-plate trench MOSFETs (FP-MOSFETs), with the features of ultralow on-resistance and very low gate-drain charge, are currently the mainstream of high-performance applications, and their advancement is continuing as low-voltage silicon power devices. However, owing to their structure, their output capacitance (Coss), which leads to the main power loss, remains a problem, especially in megahertz switching. In this study, we propose a structure-based capacitance model of FP-MOSFETs for calculating power loss easily under various conditions. Appropriate equations were modeled for the Coss curve as three divided components. The output charge (Qoss) and stored energy (Eoss) calculated using the model corresponded well to technology computer-aided design (TCAD) simulations, and we validated the accuracy of the model quantitatively. In the power loss analysis of FP-MOSFETs, turn-off loss was sufficiently suppressed; however, the Qoss loss increased mainly with switching frequency. This analysis reveals that Qoss may become a significant issue in next-generation high-efficiency FP-MOSFETs.
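The frequency dependence of the Qoss-related loss follows directly from integrals of the Coss(V) curve: Qoss(V) = int C dv and Eoss(V) = int v C dv. The sketch below uses a generic depletion-like Coss curve as a stand-in for the paper's three-component model.

```python
import numpy as np

# Output charge and stored energy from a C_oss(V) curve (illustrative fit,
# not the paper's model): Q_oss = int C dv, E_oss = int v*C dv.
v = np.linspace(0.0, 30.0, 301)               # drain voltage grid [V]
c_oss = 2e-9 / np.sqrt(1.0 + v / 0.7)         # toy depletion-like curve [F]

q_oss = np.trapz(c_oss, v)                    # [C]
e_oss = np.trapz(v * c_oss, v)                # [J]
print(f"Qoss = {q_oss*1e9:.1f} nC, Eoss = {e_oss*1e6:.2f} uJ")
# Hard-switching loss scales with frequency: P ~ E_oss * f_sw (assumption).
```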
A simple algorithm to compute the peak power output of GaAs/Ge solar cells on the Martian surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glueck, P.R.; Bahrami, K.A.
1995-12-31
The Jet Propulsion Laboratory's (JPL's) Mars Pathfinder Project will deploy a robotic "microrover" on the surface of Mars in the summer of 1997. This vehicle will derive primary power from a GaAs/Ge solar array during the day and will "sleep" at night. This strategy requires that the rover be able to (1) determine when it is necessary to save the contents of volatile memory late in the afternoon and (2) determine when sufficient power is available to resume operations in the morning. An algorithm was developed that estimates the peak power point of the solar array from the solar array short-circuit current and temperature telemetry, and provides functional redundancy for both measurements using the open-circuit voltage telemetry. The algorithm minimizes vehicle processing and memory utilization by using linear equations instead of look-up tables to estimate peak power with very little loss in accuracy. This paper describes the method used to obtain the algorithm and presents the detailed algorithm design.
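A minimal sketch of this kind of linear peak-power estimate is shown below; the coefficients and the linear forms are hypothetical placeholders, not the flight algorithm's values.

```python
# Hypothetical linear peak-power estimate from short-circuit current and
# temperature telemetry; a0, a1, b are illustrative fit coefficients.
def peak_power(i_sc, temp_c, a0=14.5, a1=-0.045, b=0.9):
    v_mp = a0 + a1 * temp_c    # peak-power voltage falls with temperature
    i_mp = b * i_sc            # peak-power current tracks short-circuit current
    return v_mp * i_mp         # [W]

print(f"{peak_power(i_sc=0.9, temp_c=-40.0):.1f} W")
```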
Design of a dataway processor for a parallel image signal processing system
NASA Astrophysics Data System (ADS)
Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu
1995-04-01
Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called a 'dataway processor' designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates at 8-bit parallel in a full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.
Implementing a High-Assurance Smart-Card OS
NASA Astrophysics Data System (ADS)
Karger, Paul A.; Toll, David C.; Palmer, Elaine R.; McIntosh, Suzanne K.; Weber, Samuel; Edwards, Jonathan W.
Building a high-assurance, secure operating system for memory constrained systems, such as smart cards, introduces many challenges. The increasing power of smart cards has made their use feasible in applications such as electronic passports, military and public sector identification cards, and cell-phone based financial and entertainment applications. Such applications require a secure environment, which can only be provided with sufficient hardware and a secure operating system. We argue that smart cards pose additional security challenges when compared to traditional computer platforms. We discuss our design for a secure smart card operating system, named Caernarvon, and show that it addresses these challenges, which include secure application download, protection of cryptographic functions from malicious applications, resolution of covert channels, and assurance of both security and data integrity in the face of arbitrary power losses.
A State of the Art Survey of Fraud Detection Technology
NASA Astrophysics Data System (ADS)
Flegel, Ulrich; Vayssière, Julien; Bitz, Gunter
With the introduction of IT to conduct business, we accepted the loss of a human control step. For this reason, the introduction of new IT systems was accompanied by the development of the authorization concept. But since, in reality, there is no such thing as 100 per cent security, auditors are commissioned to examine all transactions for misconduct. Since the data exists in digital form already, it makes sense to use computer-based processes to analyse it. Such processes allow the auditor to carry out extensive checks within an acceptable timeframe and with reasonable effort. Once the algorithm has been defined, it only takes sufficient computing power to evaluate larger quantities of data. This contribution presents the state of the art for IT-based data analysis processes that can be used to identify fraudulent activities.
High-frequency CAD-based scattering model: SERMAT
NASA Astrophysics Data System (ADS)
Goupil, D.; Boutillier, M.
1991-09-01
Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer-aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have long proven their efficiency on simple objects. Difficult geometric problems occur when objects with very complex shapes have to be computed; only a specific geometric code can solve these problems. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects of large size compared to the wavelength; and (2) the implementation of these techniques in a software package (SERMAT) allows fast and sufficiently precise RCS calculations to meet industry requirements in the domain of stealth.
31 CFR 29.342 - Computed annuity exceeds the statutory maximum.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) In cases in which the total computed annuity exceeds the statutory maximum: (1) Federal Benefit... sufficient service as of June 30, 1997, to reach the statutory maximum benefit, but has sufficient service at...
Modelling of mitigation of the power divertor loading for the EU DEMO through Ar injection
NASA Astrophysics Data System (ADS)
Subba, Fabio; Aho-Mantila, Leena; Coster, David; Maddaluno, Giorgio; Nallo, Giuseppe F.; Sieglin, Bernard; Wenninger, Ronald; Zanino, Roberto
2018-03-01
In this paper we present a computational study on the divertor heat load mitigation through impurity injection for the EU DEMO. The study is performed by means of the SOLPS5.1 code. The power crossing the separatrix is considered fixed and corresponding to H-mode operation, whereas the machine operating condition is defined by the outboard mid-plane upstream electron density and the impurity level. The selected impurity for this study is Ar, based on its high radiation efficiency at SOL characteristic temperatures. We consider a conventional vertical target geometry for the EU DEMO and monitor target conditions for different operational points, considering as acceptability criteria the target electron temperature (≤5 eV to provide sufficiently low W sputtering rate) and the peak heat flux (below 5-10 MW m-2 to guarantee safe steady-state cooling conditions). Our simulations suggest that, neglecting the radiated power deposition on the plate, it is possible to satisfy the desired constraints. However, this requires an upstream density of the order of at least 50% of the Greenwald limit and a sufficiently high argon fraction. Furthermore, if the radiated power deposition is taken into account, the peak heat flux on the outer plate could not be reduced below 15 MW m-2 in these simulations. As these simulations do not take into account neutron loading, they strongly indicate that the vertical target divertor solution with a radiative front distributed along the divertor leg has a very marginal operational space in an EU DEMO sized reactor.
Underwater Stirling engine design with modified one-dimensional model
NASA Astrophysics Data System (ADS)
Li, Daijin; Qin, Kan; Luo, Kai
2015-09-01
Stirling engines are regarded as an efficient and promising power system for underwater devices. Currently, many studies use one-dimensional models to evaluate the thermodynamic performance of Stirling engines, but some aspects, such as mechanical loss and auxiliary power, still cannot be captured with proper mathematical models. In this paper, a four-cylinder double-acting Stirling engine for Unmanned Underwater Vehicles (UUVs) is discussed, and a one-dimensional model incorporating empirical equations for mechanical loss and auxiliary power obtained from experiments is derived with reference to the Stirling engine computer model of the National Aeronautics and Space Administration (NASA). The P-40 Stirling engine, for which sufficient test results are available from NASA, is used to validate the accuracy of this one-dimensional model. The maximum error of the predicted output power is less than 18% relative to the test results, and the maximum error of the input power is no more than 9%. Finally, a Stirling engine for UUVs is designed with the Schmidt analysis method and the modified one-dimensional model; the results indicate that the designed engine is capable of delivering the desired output power.
Power and Efficiency Optimized in Traveling-Wave Tubes Over a Broad Frequency Bandwidth
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.
2001-01-01
A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWT's are critical components in deep space probes, communication satellites, and high-power radar systems. Power conversion efficiency is of paramount importance for TWT's employed in deep space probes and communication satellites. A previous effort was very successful in increasing efficiency and power at a single frequency (ref. 1). Such an algorithm is sufficient for narrow bandwidth designs, but for optimal designs in applications that require high radiofrequency power over a wide bandwidth, such as high-density communications or high-resolution radar, the variation of the circuit response with respect to frequency must be considered. This work at the NASA Glenn Research Center is the first to develop techniques for optimizing TWT efficiency and output power over a broad frequency bandwidth (ref. 2). The techniques are based on simulated annealing, which has the advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 3). Two new broadband simulated annealing algorithms were developed that optimize (1) minimum saturated power efficiency over a frequency bandwidth and (2) simultaneous bandwidth and minimum power efficiency over the frequency band with constant input power. The algorithms were incorporated into the NASA coupled-cavity TWT computer model (ref. 4) and used to design optimal phase velocity tapers using the 59- to 64-GHz Hughes 961HA coupled-cavity TWT as a baseline model. In comparison to the baseline design, the computational results of the first broad-band design algorithm show an improvement of 73.9 percent in minimum saturated efficiency (see the top graph). The second broadband design algorithm (see the bottom graph) improves minimum radiofrequency efficiency with constant input power drive by a factor of 2.7 at the high band edge (64 GHz) and increases simultaneous bandwidth by 500 MHz.
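The broadband optimization rests on a standard simulated-annealing loop; a generic sketch follows, with a toy objective standing in for the coupled-cavity TWT model (the taper variables, step sizes, and cooling schedule are all assumptions).

```python
import math, random

def anneal(x, objective, steps=5000, t0=1.0, cooling=0.999):
    """Generic simulated annealing maximizing 'objective' (illustrative)."""
    best, f_best = list(x), objective(x)
    cur, f_cur, t = list(x), f_best, t0
    for _ in range(steps):
        cand = [xi + random.gauss(0.0, 0.01) for xi in cur]   # small perturbation
        f_cand = objective(cand)
        # Accept uphill moves always; downhill moves with Boltzmann probability
        if f_cand > f_cur or random.random() < math.exp((f_cand - f_cur) / t):
            cur, f_cur = cand, f_cand
            if f_cur > f_best:
                best, f_best = list(cur), f_cur
        t *= cooling
    return best, f_best

# Toy objective: the minimum "efficiency" over three in-band frequencies,
# standing in for the minimum saturated efficiency across the band.
taper, eff = anneal([0.9, 0.9, 0.9],
                    lambda x: min(1.0 - (xi - 0.8) ** 2 for xi in x))
print(eff)
```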
Acoustic Power Transmission Through a Ducted Fan
NASA Technical Reports Server (NTRS)
Envia, Ed
2016-01-01
For high-speed ducted fans, when the rotor flowfield is shock-free, the main contribution to the inlet radiated acoustic power comes from the portion of the rotor-stator interaction sound field that is transmitted upstream through the rotor. As such, inclusion of the acoustic transmission is an essential ingredient in the prediction of fan inlet noise when the fan tip relative speed is subsonic. This paper describes a linearized Euler based approach to computing the acoustic transmission of fan tones through the rotor. The approach is embodied in a code called LINFLUX, which was applied to a candidate subsonic fan called the Advanced Ducted Propulsor (ADP). The results from this study suggest that it is possible to make such predictions with sufficient fidelity to provide an indication of the acoustic transmission trends with fan tip speed.
Guzik, Przemyslaw; Malik, Marek
Mobile electrocardiographs consist of three components: a mobile device (e.g. a smartphone), an electrocardiographic device or accessory, and a mobile application. Mobile platforms are small computers with sufficient computational power, good quality display, suitable data storage, and several possibilities of data transmission. Electrocardiographic electrodes and sensors for mobile use utilize unconventional materials, e.g. rubber, e-textile, and inkjet-printed nanoparticle electrodes. Mobile devices can be handheld, worn as vests or T-shirts, or attached to patient's skin as biopatches. Mobile electrocardiographic devices and accessories may additionally record other signals including respiratory rate, activity level, and geolocation. Large-scale clinical studies that utilize electrocardiography are easier to conduct using mobile technologies and the collected data are suitable for "big data" processing. This is expected to reveal phenomena so far inaccessible by standard electrocardiographic techniques. Copyright © 2016 Elsevier Inc. All rights reserved.
Energy-efficient hierarchical processing in the network of wireless intelligent sensors (WISE)
NASA Astrophysics Data System (ADS)
Raskovic, Dejan
Sensor network nodes have benefited from technological advances in the field of wireless communication, processing, and power sources. However, the processing power of microcontrollers is often not sufficient to perform sophisticated processing, while the power requirements of digital signal processing boards or handheld computers are usually too demanding for prolonged system use. We are matching the intrinsic hierarchical nature of many digital signal-processing applications with the natural hierarchy in distributed wireless networks, and building the hierarchical system of wireless intelligent sensors. Our goal is to build a system that will exploit the hierarchical organization to optimize the power consumption and extend battery life for the given time and memory constraints, while providing real-time processing of sensor signals. In addition, we are designing our system to be able to adapt to the current state of the environment, by dynamically changing the algorithm through procedure replacement. This dissertation presents the analysis of hierarchical environment and methods for energy profiling used to evaluate different system design strategies, and to optimize time-effective and energy-efficient processing.
Any Ontological Model of the Single Qubit Stabilizer Formalism must be Contextual
NASA Astrophysics Data System (ADS)
Lillystone, Piers; Wallman, Joel J.
Quantum computers allow us to easily solve some problems classical computers find hard. Non-classical improvements in computational power should be due to some non-classical property of quantum theory. Contextuality, a more general notion of non-locality, is a necessary, but not sufficient, resource for quantum speed-up. Proofs of contextuality can be constructed for the classically simulable stabilizer formalism. Previous proofs of stabilizer contextuality are known for 2 or more qubits, for example the Mermin-Peres magic square. In the work presented we extend these results and prove that any ontological model of the single qubit stabilizer theory must be contextual, as defined by R. Spekkens, and give a relation between our result and the Mermin-Peres square. By demonstrating that contextuality is present in the qubit stabilizer formalism we provide further insight into the contextuality present in quantum theory. Understanding the contextuality of classical sub-theories will allow us to better identify the physical properties of quantum theory required for computational speed up. This research was supported by CIFAR, the Government of Ontario, and the Government of Canada through NSERC and Industry Canada.
A Vision System For A Mars Rover
NASA Astrophysics Data System (ADS)
Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.
1987-01-01
A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.
Biophysics and systems biology.
Noble, Denis
2010-03-13
Biophysics at the systems level, as distinct from molecular biophysics, acquired its most famous paradigm in the work of Hodgkin and Huxley, who integrated their equations for the nerve impulse in 1952. Their approach has since been extended to other organs of the body, notably including the heart. The modern field of computational biology has expanded rapidly during the first decade of the twenty-first century and, through its contribution to what is now called systems biology, it is set to revise many of the fundamental principles of biology, including the relations between genotypes and phenotypes. Evolutionary theory, in particular, will require re-assessment. To succeed in this, computational and systems biology will need to develop the theoretical framework required to deal with multilevel interactions. While computational power is necessary, and is forthcoming, it is not sufficient. We will also require mathematical insight, perhaps of a nature we have not yet identified. This article is therefore also a challenge to mathematicians to develop such insights.
Biophysics and systems biology
Noble, Denis
2010-01-01
Biophysics at the systems level, as distinct from molecular biophysics, acquired its most famous paradigm in the work of Hodgkin and Huxley, who integrated their equations for the nerve impulse in 1952. Their approach has since been extended to other organs of the body, notably including the heart. The modern field of computational biology has expanded rapidly during the first decade of the twenty-first century and, through its contribution to what is now called systems biology, it is set to revise many of the fundamental principles of biology, including the relations between genotypes and phenotypes. Evolutionary theory, in particular, will require re-assessment. To succeed in this, computational and systems biology will need to develop the theoretical framework required to deal with multilevel interactions. While computational power is necessary, and is forthcoming, it is not sufficient. We will also require mathematical insight, perhaps of a nature we have not yet identified. This article is therefore also a challenge to mathematicians to develop such insights. PMID:20123750
A vision system for a Mars rover
NASA Technical Reports Server (NTRS)
Wilcox, Brian H.; Gennery, Donald B.; Mishkin, Andrew H.; Cooper, Brian K.; Lawton, Teri B.; Lay, N. Keith; Katzmann, Steven P.
1988-01-01
A Mars rover must be able to sense its local environment with sufficient resolution and accuracy to avoid local obstacles and hazards while moving a significant distance each day. Power efficiency and reliability are extremely important considerations, making stereo correlation an attractive method of range sensing compared to laser scanning, if the computational load and correspondence errors can be handled. Techniques for treatment of these problems, including the use of more than two cameras to reduce correspondence errors and possibly to limit the computational burden of stereo processing, have been tested at JPL. Once a reliable range map is obtained, it must be transformed to a plan view and compared to a stored terrain database, in order to refine the estimated position of the rover and to improve the database. The slope and roughness of each terrain region are computed, which form the basis for a traversability map allowing local path planning. Ongoing research and field testing of such a system is described.
Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.
2010-01-01
Epistasis or gene-gene interaction is a fundamental component of the genetic architecture of complex traits such as disease susceptibility. Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free method to detect epistasis when there are no significant marginal genetic effects. However, in many studies of complex disease, other covariates like age of onset and smoking status could have a strong main effect and may potentially interfere with MDR's ability to achieve its goal. In this paper, we present a simple and computationally efficient sampling method to adjust for covariate effects in MDR. We use simulation to show that after adjustment, MDR has sufficient power to detect true gene-gene interactions. We also compare our method with the state-of-art technique in covariate adjustment. The results suggest that our proposed method performs similarly, but is more computationally efficient. We then apply this new method to an analysis of a population-based bladder cancer study in New Hampshire. PMID:20924193
PC as Physics Computer for LHC?
NASA Astrophysics Data System (ADS)
Jarp, Sverre; Simmins, Antony; Tang, Hong; Yaari, R.
In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we will argue that the same phenomenon might happen again. A project, active since March this year in the Physics Data Processing group of CERN's CN division, is described where ordinary desktop PCs running Windows (NT and 3.11) have been used to create an environment for running large LHC batch jobs (initially the DICE simulation job of Atlas). The problems encountered in porting both the CERN library and the specific Atlas codes are described, together with some encouraging benchmark results when compared with existing RISC workstations in use by the Atlas collaboration. The issues of establishing the batch environment (batch monitor, staging software, etc.) are also covered. Finally, a quick extrapolation of the commodity computing power available in the future is touched upon to indicate what kind of cost envelope could be sufficient for the simulation farms required by the LHC experiments.
Exploring the Feasibility of a DNA Computer: Design of an ALU Using Sticker-Based DNA Model.
Sarkar, Mayukh; Ghosal, Prasun; Mohanty, Saraju P
2017-09-01
Since its inception, DNA computing has advanced to offer an extremely powerful, energy-efficient emerging technology for solving hard computational problems with its inherent massive parallelism and extremely high data density. It would be much more powerful and general purpose when combined with the well-known algorithmic solutions that exist for conventional computing architectures, via a suitable ALU. Thus, a specifically designed DNA Arithmetic and Logic Unit (ALU) that can address operations suitable for both domains can bridge the gap between the two. An ALU must be able to perform all possible logic operations, including NOT, OR, AND, XOR, NOR, NAND, and XNOR; compare and shift operations; and integer and floating point arithmetic operations (addition, subtraction, multiplication, and division). In this paper, the design of an ALU is proposed using the sticker-based DNA model, with an experimental feasibility analysis. The novelties of this paper are manifold. First, the integer arithmetic operations performed here use 2's complement arithmetic, and the floating point operations follow the IEEE 754 floating point format, closely resembling a conventional ALU. Also, the output of each operation can be reused for any subsequent operation, so any algorithm or program logic that users can think of can be implemented directly on the DNA computer without modification. Second, once the basic operations of the sticker model are automated, the implementations proposed in this paper become highly suitable for designing a fully automated ALU. Third, the proposed approaches are easy to implement. Finally, these approaches can work on sufficiently large binary numbers.
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1994-01-01
The DET/MPS programs model and simulate the Direct Energy Transfer and Multimission Spacecraft Modular Power System in order to aid both in design and in analysis of orbital energy balance. Typically, the DET power system has the solar array connected directly to the spacecraft bus, and the central building block of MPS is the Standard Power Regulator Unit. DET/MPS allows a minute-by-minute simulation of the power system's performance as it responds to various orbital parameters, focusing its output on solar array output and battery characteristics. While this package is limited in terms of orbital mechanics, it is sufficient to calculate eclipse and solar array data for circular or non-circular orbits. DET/MPS can be adjusted to run single or sequential orbits up to about one week of simulated time. These programs have been used on a variety of Goddard Space Flight Center spacecraft projects. DET/MPS is written in FORTRAN 77 with some VAX-type extensions. Any FORTRAN 77 compiler that includes VAX extensions should be able to compile and run the program with little or no modification. The compiler must at least support free-form (or tab-delineated) source format and 'do while ... end do' control structures. DET/MPS is available for three platforms: GSC-13374, for DEC VAX series computers running VMS, is available in DEC VAX Backup format on a 9-track 1600 BPI tape (standard distribution) or TK50 tape cartridge; GSC-13443, for UNIX-based computers, is available on a .25 inch streaming magnetic tape cartridge in UNIX tar format; and GSC-13444, for Macintosh computers running A/UX with either the NKR FORTRAN or AbSoft MacFORTRAN II compilers, is available on a 3.5 inch 800K Macintosh format diskette. Source code and test data are supplied. The UNIX version of DET requires 90K of main memory for execution. DET/MPS was developed in 1990. A/UX and Macintosh are registered trademarks of Apple Computer, Inc. VMS, DEC VAX and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories.
NASA Technical Reports Server (NTRS)
Gulden, L. E.; Rosero, E.; Yang, Z.-L.; Rodell, Matthew; Jackson, C. S.; Niu, G.-Y.; Yeh, P. J.-F.; Famiglietti, J. S.
2007-01-01
Land surface models (LSMs) are computer programs, similar to weather and climate prediction models, which simulate the storage and movement of water (including soil moisture, snow, evaporation, and runoff) after it falls to the ground as precipitation. It is not currently possible to measure all of the variables of interest everywhere on Earth with sufficient accuracy. Hence LSMs have been developed to integrate the available information, including satellite observations, using powerful computers, in order to track water storage and redistribution. The maps are used to improve weather forecasts, support water resources and agricultural applications, and study the Earth's water cycle and climate variability. Recently, the models have begun to simulate groundwater storage. In this paper, we compare several possible approaches, and examine the pitfalls associated with trying to estimate aquifer parameters (such as porosity) that are required by the models. We find that explicit representation of groundwater, as opposed to the addition of deeper soil layers, considerably decreases the sensitivity of modeled terrestrial water storage to aquifer parameter choices. We also show that approximate knowledge of parameter values is not sufficient to guarantee realistic model performance: because interaction among parameters is significant, they must be prescribed as a harmonious set.
A Big Data Approach to Analyzing Market Volatility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Bethel, E. Wes; Gu, Ming
2013-06-05
Understanding the microstructure of the financial market requires the processing of a vast amount of data related to individual trades, and sometimes even multiple levels of quotes. Analyzing such a large volume of data requires tremendous computing power that is not easily available to financial academics and regulators. Fortunately, publicly funded High Performance Computing (HPC) power is widely available at the National Laboratories in the US. In this paper we demonstrate that the HPC resource and the techniques for data-intensive sciences can be used to greatly accelerate the computation of an early warning indicator called Volume-Synchronized Probability of Informed Trading (VPIN). The test data used in this study contains five and a half years' worth of trading data for about 100 of the most liquid futures contracts, includes about 3 billion trades, and takes 140 GB as text files. By using (1) a more efficient file format for storing the trading records, (2) more effective data structures and algorithms, and (3) parallelizing the computations, we are able to explore 16,000 different ways of computing VPIN in less than 20 hours on a 32-core IBM DataPlex machine. Our test demonstrates that a modest computer is sufficient to monitor a vast number of trading activities in real time - an ability that could be valuable to regulators. Our test results also confirm that VPIN is a strong predictor of liquidity-induced volatility. With appropriate parameter choices, the false positive rates are about 7% averaged over all the futures contracts in the test data set. More specifically, when VPIN values rise above a threshold (CDF > 0.99), the volatility in the subsequent time windows is higher than the average in 93% of the cases.
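For orientation, a minimal sketch of the basic VPIN calculation (volume bucketing followed by a rolling average of order-flow imbalance) is given below; the bucket boundaries and buy/sell classification are simplified assumptions relative to the 16,000 variants explored in the paper.

```python
import numpy as np

def vpin(buy_vol, sell_vol, bucket_size, window=50):
    """Sketch of the basic VPIN metric: split the trade stream into roughly
    equal-volume buckets, take the |buy - sell| order-flow imbalance per
    bucket, and average over a rolling window. Buckets split at trade
    boundaries here, a simplification of the paper's variants."""
    buy, sell = np.asarray(buy_vol), np.asarray(sell_vol)
    total = buy + sell
    cuts = np.searchsorted(np.cumsum(total),
                           bucket_size * np.arange(1, total.sum() // bucket_size))
    imb = [abs(b.sum() - s.sum()) / max(t.sum(), 1)
           for b, s, t in zip(np.split(buy, cuts), np.split(sell, cuts),
                              np.split(total, cuts))]
    return np.convolve(imb, np.ones(window) / window, mode="valid")

# Toy usage with synthetic pre-classified volumes (an assumption).
rng = np.random.default_rng(0)
buys, sells = rng.integers(0, 100, 10000), rng.integers(0, 100, 10000)
print(vpin(buys, sells, bucket_size=5000)[:3])
```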
Active Flash: Out-of-core Data Analytics on Flash Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S
2012-01-01
Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.
Ultrasound power deposition model for the chest wall.
Moros, E G; Fan, X; Straube, W L
1999-10-01
An ultrasound power deposition model for the chest wall was developed based on secondary-source and plane-wave theories. The anatomic model consisted of a muscle-ribs-lung volume, accounted for wave reflection and refraction at muscle-rib and muscle-lung interfaces, and computed power deposition due to the propagation of both reflected and transmitted waves. Lung tissue was assumed to be air-equivalent. The parts of the theory and numerical program dealing with reflection were experimentally evaluated by comparing simulations with acoustic field measurements using several pertinent reflecting materials. Satisfactory agreement was found. A series of simulations were performed to study the influence of angle of incidence of the beam, frequency, and thickness of muscle tissue overlying the ribs on power deposition distributions that may be expected during superficial ultrasound (US) hyperthermia of chest wall recurrences. Both reflection at major interfaces and attenuation in bone were the determining factors affecting power deposition, the dominance of one vs. the other depending on the angle of incidence of the beam. Sufficient energy is reflected by these interfaces to suggest that improvements in thermal doses to overlying tissues are possible with adequate manipulation of the sound field (advances in ultrasonic heating devices) and prospective treatment planning.
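The dominance of reflection at the muscle-lung interface can be seen from normal-incidence plane-wave reflection coefficients; the impedance values below are textbook approximations, not the paper's parameters.

```python
# Normal-incidence plane-wave reflection at a tissue interface:
# pressure coefficient r = (Z2 - Z1)/(Z2 + Z1), power fraction r^2.
def intensity_reflection(z1, z2):
    r = (z2 - z1) / (z2 + z1)
    return r * r

Z = {"muscle": 1.7e6, "bone": 7.8e6, "air": 4.0e2}   # acoustic impedance [rayl]
print(f"muscle->bone: {intensity_reflection(Z['muscle'], Z['bone']):.2f}")
print(f"muscle->lung(air): {intensity_reflection(Z['muscle'], Z['air']):.4f}")
# Near-total reflection at the muscle-lung boundary supports treating
# lung tissue as air-equivalent in the model.
```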
Upper Body-Based Power Wheelchair Control Interface for Individuals With Tetraplegia.
Thorp, Elias B; Abdollahi, Farnaz; Chen, David; Farshchiansadegh, Ali; Lee, Mei-Hua; Pedersen, Jessica P; Pierella, Camilla; Roth, Elliot J; Seanez Gonzalez, Ismael; Mussa-Ivaldi, Ferdinando A
2016-02-01
Many power wheelchair control interfaces are not sufficient for individuals with severely limited upper limb mobility. The majority of controllers that do not rely on coordinated arm and hand movements provide users a limited vocabulary of commands and often do not take advantage of the user's residual motion. We developed a body-machine interface (BMI) that leverages the flexibility and customizability of redundant control by using high-dimensional changes in shoulder kinematics to generate proportional control commands for a power wheelchair. In this study, three individuals with cervical spinal cord injuries were able to control a power wheelchair safely and accurately using only small shoulder movements. With the BMI, participants were able to achieve their desired trajectories and, after five sessions of driving, achieved smoothness similar to that with their current joystick. All participants were twice as slow using the BMI but improved with practice. Importantly, users were able to transfer training from controlling a computer to driving a power wheelchair, and employed similar strategies when controlling both devices. Overall, this work suggests that the BMI can be an effective wheelchair control interface for individuals with high-level spinal cord injuries who have limited arm and hand control.
QRS detection based ECG quality assessment.
Hayn, Dieter; Jammerbund, Bernhard; Schreier, Günter
2012-09-01
Although immediate feedback concerning ECG signal quality during recording is useful, up to now little literature describing quality measures has been available. We have implemented and evaluated four ECG quality measures. The empty lead criterion (A), spike detection criterion (B), and lead crossing point criterion (C) were calculated from basic signal properties. Measure D quantified the robustness of QRS detection when applied to the signal. An advanced Matlab-based algorithm combining all four measures and a simplified algorithm for Android platforms, excluding measure D, were developed. Both algorithms were evaluated by taking part in the Computing in Cardiology Challenge 2011. Each measure's accuracy and computing time were evaluated separately. During the challenge, the advanced algorithm correctly classified 93.3% of the ECGs in the training set and 91.6% in the test set. Scores for the simplified algorithm were 0.834 in event 2 and 0.873 in event 3. Computing time for measure D was almost five times higher than for the other measures. Required accuracy levels depend on the application and are related to computing time. While our simplified algorithm may be accurate enough for real-time feedback during ECG self-recordings, QRS detection based measures can further increase performance if sufficient computing power is available.
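As a hedged illustration of the "basic signal properties" measures A and B, the sketch below implements an empty-lead check and a spike check; the thresholds are invented for the example, since the paper's actual parameters are not given in the abstract.

```python
import numpy as np

def empty_lead(sig, flat_uV=10.0):
    """Criterion A (illustrative): a lead is 'empty' if its amplitude range is tiny."""
    return (np.max(sig) - np.min(sig)) < flat_uV

def has_spikes(sig, k=8.0):
    """Criterion B (illustrative): flag sample-to-sample jumps far beyond the typical step."""
    steps = np.abs(np.diff(sig))
    return np.any(steps > k * (np.median(steps) + 1e-12))

rng = np.random.default_rng(1)
good = 500.0 * np.sin(np.linspace(0, 20, 5000)) + rng.normal(0, 20, 5000)
print(empty_lead(good), has_spikes(good))   # False False
```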
Flight test trajectory control analysis
NASA Technical Reports Server (NTRS)
Walker, R.; Gupta, N.
1983-01-01
Recent extensions to optimal control theory applied to meaningful linear models with sufficiently flexible software tools provide powerful techniques for designing flight test trajectory controllers (FTTCs). This report describes the principal steps for systematic development of flight trajectory controllers, which can be summarized as planning, modeling, designing, and validating a trajectory controller. The techniques have been kept as general as possible and should apply to a wide range of problems where quantities must be computed and displayed to a pilot to improve pilot effectiveness and to reduce workload and fatigue. To illustrate the approach, a detailed trajectory guidance law is developed and demonstrated for the F-15 aircraft flying the zoom-and-pushover maneuver.
Goodness-of-fit tests for open capture-recapture models
Pollock, K.H.; Hines, J.E.; Nichols, J.D.
1985-01-01
General goodness-of-fit tests for the Jolly-Seber model are proposed. These tests are based on conditional arguments using minimal sufficient statistics. The tests are shown to be of simple hypergeometric form so that a series of independent contingency table chi-square tests can be performed. The relationship of these tests to other proposed tests is discussed. This is followed by a simulation study of the power of the tests to detect departures from the assumptions of the Jolly-Seber model. Some meadow vole capture-recapture data are used to illustrate the testing procedure which has been implemented in a computer program available from the authors.
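Because the proposed tests reduce to a series of independent contingency-table chi-square tests, each component can be run with standard tools. A minimal sketch with invented counts (not the meadow vole data) follows; under the Jolly-Seber assumptions the two rows should be homogeneous.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Releases of marked animals from two groups, cross-classified by whether
# they were ever recaptured. Counts are invented for illustration.
table = np.array([[30, 70],    # group 1: recaptured / never seen again
                  [22, 78]])   # group 2
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```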
PLAYGROUND: preparing students for the cyber battleground
NASA Astrophysics Data System (ADS)
Nielson, Seth James
2016-12-01
Attempting to educate practitioners of computer security can be difficult if for no other reason than the breadth of knowledge required today. The security profession includes widely diverse subfields including cryptography, network architectures, programming, programming languages, design, coding practices, software testing, pattern recognition, economic analysis, and even human psychology. While an individual may choose to specialize in one of these more narrow elements, there is a pressing need for practitioners that have a solid understanding of the unifying principles of the whole. We created the Playground network simulation tool and used it in the instruction of a network security course to graduate students. This tool was created for three specific purposes. First, it provides simulation sufficiently powerful to permit rigorous study of desired principles while simultaneously reducing or eliminating unnecessary and distracting complexities. Second, it permitted the students to rapidly prototype a suite of security protocols and mechanisms. Finally, with equal rapidity, the students were able to develop attacks against the protocols that they themselves had created. Based on our own observations and student reviews, we believe that these three features combine to create a powerful pedagogical tool that provides students with a significant amount of breadth and intense emotional connection to computer security in a single semester.
NASA Astrophysics Data System (ADS)
Laakso, Ilkka
2009-06-01
This paper presents finite-difference time-domain (FDTD) calculations of specific absorption rate (SAR) values in the head under plane-wave exposure from 1 to 10 GHz using a resolution of 0.5 mm in adult male and female voxel models. Temperature rise due to the power absorption is calculated by the bioheat equation using a multigrid method solver. The computational accuracy is investigated by repeating the calculations with resolutions of 1 mm and 2 mm and comparing the results. Cubically averaged 10 g SAR in the eyes and brain and eye-averaged SAR are calculated and compared to the corresponding temperature rise as well as the recommended limits for exposure. The results suggest that 2 mm resolution should only be used for frequencies smaller than 2.5 GHz, and 1 mm resolution only under 5 GHz. Morphological differences in models seemed to be an important cause of variation: differences in results between the two different models were usually larger than the computational error due to the grid resolution, and larger than the difference between the results for open and closed eyes. Limiting the incident plane-wave power density to smaller than 100 W m-2 was sufficient for ensuring that the temperature rise in the eyes and brain were less than 1 °C in the whole frequency range.
Aeronautical audio broadcasting via satellite
NASA Technical Reports Server (NTRS)
Tzeng, Forrest F.
1993-01-01
A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. RF bandwidth at 25 kHz per channel, and a decoded bit error rate at 10(exp -6) with E(sub b)/N(sub o) at 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.
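A worst-case link budget of this kind boils down to comparing achieved and required Eb/No. The sketch below uses the abstract's 3.75 dB requirement and 20.5 kbit/s rate; every other term (EIRP, path loss, G/T, miscellaneous losses) is a placeholder value for illustration, as the paper's actual figures are not given here.

```python
import math

def ebno_dB(eirp_dBW, path_loss_dB, gt_dBK, rate_bps, other_losses_dB=0.0):
    """Achieved Eb/No in dB from standard link-budget terms."""
    k_dBWHzK = -228.6  # Boltzmann's constant in dBW/(Hz K)
    return (eirp_dBW - path_loss_dB + gt_dBK - k_dBWHzK
            - 10 * math.log10(rate_bps) - other_losses_dB)

required = 3.75   # dB, for a decoded BER of 1e-6 (from the abstract)
achieved = ebno_dB(eirp_dBW=23.0, path_loss_dB=188.5, gt_dBK=-13.0,
                   rate_bps=20500, other_losses_dB=2.0)
print(f"margin = {achieved - required:.1f} dB")   # positive margin closes the link
```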
NASA Technical Reports Server (NTRS)
Wallace, J. W.; Lovelady, R. W.; Ferguson, R. L.
1981-01-01
A prototype water quality monitoring system is described which offers almost continuous in situ monitoring. The two-man portable system features: (1) a microprocessor controlled central processing unit which allows preprogrammed sampling schedules and reprogramming in situ; (2) a subsurface unit for multiple depth capability and security from vandalism; (3) an acoustic data link for communications between the subsurface unit and the surface control unit; (4) eight water quality parameter sensors; (5) a nonvolatile magnetic bubble memory which prevents data loss in the event of power interruption; (6) a rechargeable power supply sufficient for 2 weeks of unattended operation; (7) a water sampler which can collect samples for laboratory analysis; (8) data output in direct engineering units on printed tape or through a computer compatible link; (9) internal electronic calibration eliminating external sensor adjustment; and (10) acoustic location and recovery systems. Data obtained in Saginaw Bay, Lake Huron are tabulated.
Effects of optical layer impairments on 2.5 Gb/s optical CDMA transmission.
Feng, H; Mendez, A; Heritage, J; Lennon, W
2000-07-03
We conducted a computer simulation study to assess the effects of optical layer impairments on optical CDMA (O-CDMA) transmission of 8 asynchronous users at 2.5 Gb/s per user over a 214-km link. It was found that, with group velocity dispersion compensation, two other residual effects, namely the nonzero chromatic dispersion slope of the single mode fiber (which causes skew) and the non-uniform EDFA gain (which causes the interference power level to exceed the signal power level of some codes), degrade the signal to multi-access interference (MAI) ratio. In contrast, four-wave mixing and modulation due to the Kerr and Raman contributions to the fiber nonlinear refractive index are less important. Current wavelength-division multiplexing (WDM) technologies, including dispersion management, EDFA gain flattening, and third-order dispersion compensation, are sufficient to overcome the impairments to the O-CDMA transmission system that we considered.
Hyper-X Mach 7 Scramjet Design, Ground Test and Flight Results
NASA Technical Reports Server (NTRS)
Ferlemann, Shelly M.; McClinton, Charles R.; Rock, Ken E.; Voland, Randy T.
2005-01-01
The successful Mach 7 flight test of the Hyper-X (X-43) research vehicle has provided the major, essential demonstration of the capability of the airframe integrated scramjet engine. This flight was a crucial first step toward realizing the potential for airbreathing hypersonic propulsion for application to space launch vehicles. However, it is not sufficient to have just achieved a successful flight. The more useful knowledge gained from the flight is how well the prediction methods matched the actual test results in order to have confidence that these methods can be applied to the design of other scramjet engines and powered vehicles. The propulsion predictions for the Mach 7 flight test were calculated using the computer code, SRGULL, with input from computational fluid dynamics (CFD) and wind tunnel tests. This paper will discuss the evolution of the Mach 7 Hyper-X engine, ground wind tunnel experiments, propulsion prediction methodology, flight results and validation of design methods.
Multinode reconfigurable pipeline computer
NASA Technical Reports Server (NTRS)
Nosenchuck, Daniel M. (Inventor); Littman, Michael G. (Inventor)
1989-01-01
A multinode parallel-processing computer is made up of a plurality of interconnected, large capacity nodes, each including a reconfigurable pipeline of functional units such as Integer Arithmetic Logic Processors, Floating Point Arithmetic Processors, and Special Purpose Processors. The reconfigurable pipeline of each node is connected to a multiplane memory by a Memory-ALU switch NETwork (MASNET). The reconfigurable pipeline includes three (3) basic substructures formed from functional units which have been found to be sufficient to perform the bulk of all calculations. The MASNET controls the flow of signals from the memory planes to the reconfigurable pipeline and vice versa. The nodes are connectable together by an internode data router (hyperspace router) so as to form a hypercube configuration. The capability of the nodes to conditionally configure the pipeline at each tick of the clock, without requiring a pipeline flush, permits many powerful algorithms to be implemented directly.
Preston, Nick; Weightman, Andrew; Gallagher, Justin; Levesley, Martin; Mon-Williams, Mark; Clarke, Mike; O'Connor, Rory J
2016-10-01
To evaluate the potential benefits of computer-assisted arm rehabilitation gaming technology on the arm function of children with spastic cerebral palsy, a single-blind randomized controlled trial design was used. Power calculations indicated that 58 children would be required to demonstrate a clinically important difference. The intervention was home-based; recruitment took place in regional spasticity clinics. A total of 15 children with cerebral palsy aged five to 12 years were recruited, eight to the device group. Both study groups received 'usual follow-up treatment' following spasticity treatment with botulinum toxin; the intervention group also received a rehabilitation gaming device. ABILHAND-kids and the Canadian Occupational Performance Measure were administered by blinded assessors at baseline, six and 12 weeks. An analysis of covariance showed no group differences in mean ABILHAND-kids scores between time points. A non-parametric analysis of variance on Canadian Occupational Performance Measure scores showed a statistically significant improvement across time points (χ²(2,15) = 6.778, p = 0.031), but this improvement did not reach the minimal clinically important difference. Mean daily device use was seven minutes. Recruitment did not reach target owing to unanticipated staff shortages in clinical services. Feedback from children and their families indicated that the games were not engaging enough to promote the level of use likely to result in functional benefits. This study suggests that computer-assisted arm rehabilitation gaming does not benefit arm function, but a Type II error cannot be ruled out.
A Study on Cost Allocation in Nuclear Power Coupled with Desalination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, ManKi; Kim, SeungSu; Moon, KeeHwan
As for a single-purpose desalination plant, there is no particular difficulty in computing the unit cost of the water, which is obtained by dividing the annual total costs by the output of fresh water. When it comes to a dual-purpose plant, cost allocation is needed between the two products. No cost allocation is needed in some cases where two alternatives producing the same water and electricity output are to be compared; in these cases, consideration of the total cost is sufficient. This study assumes MED (Multi-Effect Distillation) technology is adopted when nuclear power is coupled with desalination. The total production cost of the two commodities in a dual-purpose plant can easily be obtained by using costing methods, if the necessary raw data are available. However, it is not easy to calculate a separate cost for each product, because high-pressure steam plant costs cannot be allocated to one product or the other without adopting arbitrary methods. An investigation of the power credit method is carried out, focusing on the allocation of the combined benefits of dual production of electricity and water. The illustrative calculation is taken from the Preliminary Economic Feasibility Study of Nuclear Desalination in Madura Island, Indonesia. The study is being performed by BATAN (National Nuclear Energy Agency) and KAERI (Korea Atomic Energy Research Institute), with the support of the IAEA (International Atomic Energy Agency), and was started in 2002 in order to perform a preliminary economic feasibility assessment of providing the Madurese with sufficient power and potable water for the public and to support industrialization and tourism in the Madura Region. The SMART reactor coupled with MED is considered as an option to produce electricity and potable water. This study indicates that correct recognition of the combined benefits attributable to dual production is important in carrying out the economics of desalination coupled with nuclear power. (authors)
Targeted On-Demand Team Performance App Development
2016-10-01
...from three sites; 6) Preliminary analysis indicates a larger-than-estimated effect size, and the study is sufficiently powered for generalizable outcomes. Remaining work will conduct statistical analyses and examine any resulting qualitative data for trends or connections to statistical outcomes. On schedule.
Power throttling of collections of computing elements
Bellofatto, Ralph E [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Crumley, Paul G [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Gooding,; Thomas, M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Megerian, Mark G [Rochester, MN; Ohmacht, Martin [Yorktown Heights, NY; Reed, Don D [Mantorville, MN; Swetz, Richard A [Mahopac, NY; Takken, Todd [Brewster, NY
2011-08-16
An apparatus and method for controlling power usage in computers includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computers. A plurality of sensors communicate with the computers for ascertaining their power usage, and a system control device communicates with the computers for controlling their power usage.
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system places cameras along a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security cameras. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of an event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems without compromising performance, by using many small, low-cost cameras with overlapping fields of view. The result is significantly increased viewing coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
Autonomous self-powered structural health monitoring system
NASA Astrophysics Data System (ADS)
Qing, Xinlin P.; Anton, Steven R.; Zhang, David; Kumar, Amrita; Inman, Daniel J.; Ooi, Teng K.
2010-03-01
Structural health monitoring technology is perceived as a revolutionary method of determining the integrity of structures involving the use of multidisciplinary fields including sensors, materials, system integration, signal processing and interpretation. The core of the technology is the development of self-sufficient systems for the continuous monitoring, inspection and damage detection of structures with minimal labor involvement. A major drawback of the existing technology for real-time structural health monitoring is the requirement for external electrical power input. For some applications, such as missiles or combat vehicles in the field, this factor can drastically limit the use of the technology. Having an on-board electrical power source that is independent of the vehicle power system can greatly enhance the SHM system and make it a completely self-contained system. In this paper, using the SMART layer technology as a basis, an Autonomous Self-powered (ASP) Structural Health Monitoring (SHM) system has been developed to solve the major challenge facing the transition of SHM systems into field applications. The architecture of the self-powered SHM system was first designed. There are four major components included in the SHM system: SMART Layer with sensor network, low power consumption diagnostic hardware, rechargeable battery with energy harvesting device, and host computer with supporting software. A prototype of the integrated self-powered active SHM system was built for performance and functionality testing. Results from the evaluation tests demonstrated that a fully charged battery system is capable of powering the SHM system for active scanning up to 10 hours.
NASA Astrophysics Data System (ADS)
Dai, Quanqi; Harne, Ryan L.
2017-04-01
Effective development of vibration energy harvesters is required to convert ambient kinetic energy into useful electrical energy as power supply for sensors, for example in structural health monitoring applications. Energy harvesting structures exhibiting bistable nonlinearities have previously been shown to generate large alternating current (AC) power when excited so as to undergo snap-through responses between stable equilibria. Yet, most microelectronics in sensors require rectified voltages and hence direct current (DC) power. While researchers have studied DC power generation from bistable energy harvesters subjected to harmonic excitations, there remain important questions as to the promise of such harvester platforms when the excitations are more realistic and include both harmonic and random components. To close this knowledge gap, this research computationally and experimentally studies the DC power delivery from bistable energy harvesters subjected to such realistic excitation combinations as those found in practice. Based on the results, it is found that the ability of bistable energy harvesters to generate peak DC power is significantly reduced by introducing a sufficient amount of stochastic excitation into an otherwise harmonic input. On the other hand, the elimination of a low amplitude, coexistent response regime by way of the additive noise promotes power delivery if the device was not originally excited to snap-through. The outcomes of this research indicate the necessity for comprehensive studies of the sensitivities of DC power generation from bistable energy harvesters to practical excitation scenarios prior to their optimal deployment in applications.
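A common computational model for such a device is a bistable Duffing-type oscillator driven by a harmonic force plus additive noise; whether the response snaps through between the wells (the condition for large power output noted above) can be checked by counting well crossings. A minimal sketch with illustrative parameters, not the paper's, follows.

```python
import numpy as np

# Euler-Maruyama integration of a bistable Duffing oscillator,
#   x'' = -2*zeta*x' + x - x**3 + A*cos(w*t) + sigma*xi(t),
# with illustrative parameters. Well crossings indicate snap-through.
dt, n = 1e-3, 200_000
zeta, A, w, sigma = 0.05, 0.4, 0.8, 0.2
rng = np.random.default_rng(2)
x, v = 1.0, 0.0                       # start at the x = +1 equilibrium
xs = np.empty(n)
for i in range(n):
    noise = sigma * rng.standard_normal() / np.sqrt(dt)   # white-noise forcing
    a = -2*zeta*v + x - x**3 + A*np.cos(w*i*dt) + noise
    v += a * dt
    x += v * dt
    xs[i] = x
crossings = int(np.sum(np.diff(np.sign(xs)) != 0))
print("well crossings:", crossings)
```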
Towards a real-time wide area motion imagery system
NASA Astrophysics Data System (ADS)
Young, R. I.; Foulkes, S. B.
2015-10-01
It is becoming increasingly important in both the defence and security domains to conduct persistent wide area surveillance (PWAS) of large populations of targets. Wide Area Motion Imagery (WAMI) is a key technique for achieving this wide area surveillance. The recent development of multi-million-pixel sensors has provided wide-field-of-view imagers with sufficient resolution to detect and track objects of interest across these extended areas. WAMI sensors simultaneously provide high spatial and temporal resolutions, giving extreme pixel counts over large geographical areas. The high temporal resolution is required to enable effective tracking of targets. The provision of wide area coverage with high frame rates generates data deluge issues; these are especially profound if the sensor is mounted on an airborne platform, with finite data-link bandwidth and processing power that is constrained by size, weight and power (SWAP) limitations. These issues manifest themselves either as bottlenecks in the transmission of the imagery off-board or as latency in the time taken to analyse the data due to limited computational processing power.
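The scale of the data-deluge problem is easy to see from back-of-envelope arithmetic; the sensor figures below are illustrative, not from the paper.

```python
# Raw bits per second for a notional WAMI sensor (illustrative numbers).
pixels     = 400e6   # 400-megapixel mosaic
frame_rate = 2       # frames per second
bit_depth  = 12      # bits per pixel
raw_bps = pixels * frame_rate * bit_depth
print(f"raw sensor output ~ {raw_bps / 1e9:.1f} Gbit/s")   # ~9.6 Gbit/s
```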
Interactive Spectral Analysis and Computation (ISAAC)
NASA Technical Reports Server (NTRS)
Lytle, D. M.
1992-01-01
Isaac is a task in the NSO external package for IRAF. A descendant of a FORTRAN program written to analyze data from a Fourier transform spectrometer, the current implementation has been generalized sufficiently to make it useful for general spectral analysis and other one dimensional data analysis tasks. The user interface for Isaac is implemented as an interpreted mini-language containing a powerful, programmable vector calculator. Built-in commands provide much of the functionality needed to produce accurate line lists from input spectra. These built-in functions include automated spectral line finding, least squares fitting of Voigt profiles to spectral lines including equality constraints, various filters including an optimal filter construction tool, continuum fitting, and various I/O functions.
Prediction of dry ice mass for firefighting robot actuation
NASA Astrophysics Data System (ADS)
Ajala, M. T.; Khan, Md R.; Shafie, A. A.; Salami, MJE; Mohamad Nor, M. I.
2017-11-01
The limited performance of electrically actuated firefighting robots in high-temperature fire environments has led to research on alternative propulsion systems for the mobility of firefighting robots in such environments. Capitalizing on the limitations of these electric actuators, we suggested a gas-actuated propulsion system in our earlier study. The propulsion system is made up of a pneumatic motor as the actuator (for the robot) and carbon dioxide gas (self-generated from dry ice) as the power source. To satisfy the consumption requirement (9 cfm) of the motor for efficient actuation of the robot in the fire environment, the volume of carbon dioxide gas, as well as the corresponding mass of dry ice that will produce the required volume for powering and actuation of the robot, must be determined. This article, therefore, presents a computational analysis to predict the volumetric requirement and the dry ice mass sufficient to power a carbon dioxide gas propelled autonomous firefighting robot in a high-temperature environment. The governing equation of the sublimation of dry ice to carbon dioxide is established. An operating time of 2105.53 s and operating pressures ranging from 137.9 kPa to 482.65 kPa were achieved following the consumption rate of the motor. Thus, 8.85 m³ is computed as the volume requirement of the CAFFR, while the corresponding dry ice mass for the CAFFR actuation ranges from 21.67 kg to 75.83 kg depending on the operating pressure.
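The reported mass range can be reproduced to good approximation with the ideal-gas law; the only added assumption in this sketch is an ambient temperature near 300 K, which the abstract does not state.

```python
# Dry-ice mass from the ideal-gas law, m = M * P * V / (R * T).
R, M_CO2, T = 8.314, 0.04401, 300.0   # J/(mol K), kg/mol, assumed temperature in K
V = 8.85                              # m^3, required CO2 volume (from the abstract)
for P in (137.9e3, 482.65e3):         # Pa, the stated operating-pressure range
    m = M_CO2 * P * V / (R * T)
    print(f"P = {P/1e3:.1f} kPa -> m = {m:.1f} kg")
# ~21.5 kg and ~75.4 kg, close to the 21.67-75.83 kg range reported above.
```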
Enzyme-like replication de novo in a microcontroller environment.
Tangen, Uwe
2010-01-01
The desire to start evolution from scratch inside a computer memory is as old as computing. Here we demonstrate how viable computer programs can be established de novo in a Precambrian environment without supplying any specific instantiation, just starting with random bit sequences. These programs are not self-replicators, but act much more like catalysts. The microcontrollers used in the end are the result of a long series of simplifications. The objective of this simplification process was to produce universal machines with a human-readable interface, allowing software and/or hardware evolution to be studied. The power of the instruction set can be modified by introducing a secondary structure-folding mechanism, which is a state machine, allowing nontrivial replication to emerge with an instruction width of only a few bits. This state-machine approach not only attenuates the problems of brittleness and encoding functionality (too few bits available for coding, and too many instructions needed); it also enables the study of hardware evolution as such. Furthermore, the instruction set is sufficiently powerful to permit external signals to be processed. This information-theoretic approach forms one vertex of a triangle alongside artificial cell research and experimental research on the creation of life. Hopefully this work helps develop an understanding of how information—in a similar sense to the account of functional information described by Hazen et al.—is created by evolution and how this information interacts with or is embedded in its physico-chemical environment.
Efficient exploration of cosmology dependence in the EFT of LSS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cataneo, Matteo; Foreman, Simon; Senatore, Leonardo, E-mail: matteoc@dark-cosmology.dk, E-mail: sfore@stanford.edu, E-mail: senatore@stanford.edu
The most effective use of data from current and upcoming large scale structure (LSS) and CMB observations requires the ability to predict the clustering of LSS with very high precision. The Effective Field Theory of Large Scale Structure (EFTofLSS) provides an instrument for performing analytical computations of LSS observables with the required precision in the mildly nonlinear regime. In this paper, we develop efficient implementations of these computations that allow for an exploration of their dependence on cosmological parameters. They are based on two ideas. First, once an observable has been computed with high precision for a reference cosmology, for a new cosmology the same can be easily obtained with comparable precision just by adding the difference in that observable, evaluated with much less precision. Second, most cosmologies of interest are sufficiently close to the Planck best-fit cosmology that observables can be obtained from a Taylor expansion around the reference cosmology. These ideas are implemented for the matter power spectrum at two loops and are released as public codes. When applied to cosmologies that are within 3σ of the Planck best-fit model, the first method evaluates the power spectrum in a few minutes on a laptop, with results that have 1% or better precision, while with the Taylor expansion the same quantity is instantly generated with similar precision. The ideas and codes we present may easily be extended for other applications or higher-precision results.
Profiling an application for power consumption during execution on a compute node
Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E
2013-09-17
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
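The determination step is described only abstractly; one way to picture it is as weighting a hardware power profile by the application's usage of each operation class. The sketch below is purely illustrative and is not the patent's method; all names and numbers are invented.

```python
# Hypothetical sketch: combine an application's operation mix with a hardware
# power-consumption profile to estimate a per-application power profile.
hardware_profile = {"flops": 0.9, "mem_access": 1.4, "network": 2.1}    # watts while busy
app_op_seconds   = {"flops": 120.0, "mem_access": 45.0, "network": 30.0}  # seconds per class

profile = {op: hardware_profile[op] * t for op, t in app_op_seconds.items()}  # joules
print(profile, "total J:", sum(profile.values()))
```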
Code of Federal Regulations, 2010 CFR
2010-10-01
..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING EMERGENCY LIGHTING AND POWER SYSTEMS... dedicated emergency power source with sufficient capacity to supply those services that are necessary for... power source, except: (1) A load required by this part to be powered from the emergency power source; (2...
Code of Federal Regulations, 2011 CFR
2011-10-01
..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING EMERGENCY LIGHTING AND POWER SYSTEMS... dedicated emergency power source with sufficient capacity to supply those services that are necessary for... power source, except: (1) A load required by this part to be powered from the emergency power source; (2...
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes, and can thus be regarded as an accurate engineering approximation suitable for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes.
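A single application of the method is straightforward to sketch: color white Gaussian noise to a target spectrum, then apply the memoryless inverse-transform mapping. The low-pass spectrum and exponential target below are assumed examples, not the paper's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 2**16
white = rng.standard_normal(n)

# Step 1: impose a target power spectrum (an assumed low-pass shape here).
f = np.fft.rfftfreq(n, d=1.0)
H = 1.0 / np.sqrt(1.0 + (f / 0.01) ** 2)           # amplitude filter
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored /= colored.std()                            # back to unit variance

# Step 2: memoryless transform to the desired marginal (unit exponential).
u = stats.norm.cdf(colored)
samples = stats.expon.ppf(u)
print(samples.mean(), samples.var())                # both ~1 for a unit exponential
```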
An Analysis on the Detection of Biological Contaminants Aboard Aircraft
Hwang, Grace M.; DiCarlo, Anthony A.; Lin, Gene C.
2011-01-01
The spread of infectious disease via commercial airliner travel is a significant and realistic threat. To shed some light on the feasibility of detecting airborne pathogens, a sensor integration study has been conducted and computational investigations of contaminant transport in an aircraft cabin have been performed. Our study took into consideration sensor sensitivity as well as the time-to-answer, size, weight and the power of best available commercial off-the-shelf (COTS) devices. We conducted computational fluid dynamics simulations to investigate three types of scenarios: (1) nominal breathing (up to 20 breaths per minute) and coughing (20 times per hour); (2) nominal breathing and sneezing (4 times per hour); and (3) nominal breathing only. Each scenario was implemented with one or seven infectious passengers expelling air and sneezes or coughs at the stated frequencies. Scenario 2 was implemented with two additional cases in which one infectious passenger expelled 20 and 50 sneezes per hour, respectively. All computations were based on 90 minutes of sampling using specifications from a COTS aerosol collector and biosensor. Only biosensors that could provide an answer in under 20 minutes without any manual preparation steps were included. The principal finding was that the steady-state bacteria concentrations in aircraft would be high enough to be detected in the case where seven infectious passengers are exhaling under scenarios 1 and 2 and where one infectious passenger is actively exhaling in scenario 2. Breathing alone failed to generate sufficient bacterial particles for detection, and none of the scenarios generated sufficient viral particles for detection to be feasible. These results suggest that more sensitive sensors than the COTS devices currently available and/or sampling of individual passengers would be needed for the detection of bacteria and viruses in aircraft. PMID:21264266
Square Kilometre Array Science Data Processing
NASA Astrophysics Data System (ADS)
Nikolic, Bojan; SDP Consortium, SKA
2014-04-01
The Square Kilometre Array (SKA) is planned to be, by a large factor, the largest and most sensitive radio telescope ever constructed. The first phase of the telescope (SKA1), now in the design phase, will in itself represent a major leap in capabilities compared to current facilities. These advances are to a large extent being made possible by advances in available computer processing power, so that larger numbers of smaller, simpler and cheaper receptors can be used. As a result of greater reliance and demands on computing, ICT is becoming an ever more integral part of the telescope. The Science Data Processor (SDP) is the part of the SKA system responsible for imaging, calibration, pulsar timing, confirmation of pulsar candidates, derivation of some further data products, archiving and providing the data to the users. It will accept visibilities at data rates of several TB/s and require processing power for imaging in the range of 100 petaFLOPS to ~1 exaFLOPS, putting SKA1 into the regime of exascale radio astronomy. In my talk I will present the overall SKA system requirements and how they drive these high data throughput and processing requirements. Some of the key challenges for the design of the SDP are:
- Identifying sufficient parallelism to utilise the very large numbers of separate compute cores that will be required to provide exascale computing throughput
- Managing efficiently the high internal data flow rates
- A conceptual architecture and software engineering approach that will allow adaptation of the algorithms as we learn about the telescope and the atmosphere during the commissioning and operational phases
- System management that will deal gracefully with (inevitably frequent) failures of individual units of the processing system
I will also present possible initial architectures for the SDP system that attempt to address these and other challenges.
NASA Technical Reports Server (NTRS)
Dayton, J. A., Jr.; Kosmahl, H. G.; Ramins, P.; Stankiewicz, N.
1979-01-01
Experimental and analytical results are compared for two high performance, octave bandwidth TWT's that use depressed collectors (MDC's) to improve the efficiency. The computations were carried out with advanced, multidimensional computer programs that are described here in detail. These programs model the electron beam as a series of either disks or rings of charge and follow their multidimensional trajectories from the RF input of the ideal TWT, through the slow wave structure, through the magnetic refocusing system, to their points of impact in the depressed collector. Traveling wave tube performance, collector efficiency, and collector current distribution were computed and the results compared with measurements for a number of TWT-MDC systems. Power conservation and correct accounting of TWT and collector losses were observed. For the TWT's operating at saturation, very good agreement was obtained between the computed and measured collector efficiencies. For a TWT operating 3 and 6 dB below saturation, excellent agreement between computed and measured collector efficiencies was obtained in some cases but only fair agreement in others. However, deviations can largely be explained by small differences in the computed and actual spent beam energy distributions. The analytical tools used here appear to be sufficiently refined to design efficient collectors for this class of TWT. However, for maximum efficiency, some experimental optimization (e.g., collector voltages and aperture sizes) will most likely be required.
Profiling an application for power consumption during execution on a plurality of compute nodes
Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.
2012-08-21
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
An Ultra-Wideband Cross-Correlation Radiometer for Mesoscopic Experiments
NASA Astrophysics Data System (ADS)
Toonen, Ryan; Haselby, Cyrus; Qin, Hua; Eriksson, Mark; Blick, Robert
2007-03-01
We have designed, built and tested a cross-correlation radiometer for detecting statistical order in the quantum fluctuations of mesoscopic experiments at sub-Kelvin temperatures. Our system utilizes a fully analog front-end--operating over the X- and Ku-bands (8 to 18 GHz)--for computing the cross-correlation function. Digital signal processing techniques are used to provide robustness against instrumentation drifts and offsets. The economized version of our instrument can measure, with sufficient correlation efficiency, noise signals having power levels as low as 10 fW. We show that, if desired, we can improve this performance by including cryogenic preamplifiers which boost the signal-to-noise ratio near the signal source. By adding a few extra components, we can measure both the real and imaginary parts of the cross-correlation function--improving the overall signal-to-noise ratio by a factor of sqrt[2]. We demonstrate the utility of our cross-correlator with noise power measurements from a quantum point contact.
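The principle of the instrument, that cross-correlating two receiver chains suppresses their uncorrelated amplifier noise while retaining the common signal, can be demonstrated with a toy numerical model; all values below are invented for illustration.

```python
import numpy as np

# Two receiver chains see the same weak source but independent amplifier
# noise; averaging x*y recovers the source power as the noise averages away.
rng = np.random.default_rng(4)
n = 1_000_000
source = rng.standard_normal(n) * np.sqrt(0.01)   # weak common signal, power 0.01
x = source + rng.standard_normal(n)               # chain 1: unit noise power
y = source + rng.standard_normal(n)               # chain 2: independent unit noise
print(f"cross-corr estimate of source power: {np.mean(x * y):.4f}")  # ~0.01
```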
A post-processing algorithm for time domain pitch trackers
NASA Astrophysics Data System (ADS)
Specker, P.
1983-01-01
This paper describes a powerful post-processing algorithm for time-domain pitch trackers. On two successive passes, the post-processing algorithm eliminates errors produced during a first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80 msec window. During the third pass (based on artificial intelligence techniques), remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm produced a decrease in the error rate from 21% obtained with the original time domain pitch tracker to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency micro-structure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
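The second pass amounts to local-median outlier rejection over a sliding window. A minimal sketch follows, with the window length and rejection threshold as illustrative stand-ins for the 80 msec window described above.

```python
import numpy as np

def reject_outliers(pitch, win=9, tol=0.2):
    """Mark pitch values far from the local median as outliers (NaN)."""
    pitch = np.asarray(pitch, dtype=float)
    out = pitch.copy()
    for i in range(len(pitch)):
        lo, hi = max(0, i - win // 2), min(len(pitch), i + win // 2 + 1)
        med = np.median(pitch[lo:hi])
        if abs(pitch[i] - med) > tol * med:
            out[i] = np.nan          # left for reconstruction in the third pass
    return out

track = [120, 121, 60, 122, 123, 240, 124, 125]   # halving/doubling errors
print(reject_outliers(track))                      # 60 and 240 flagged
```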
Evaluation of Ten Methods for Initializing a Land Surface Model
NASA Technical Reports Server (NTRS)
Rodell, M.; Houser, P. R.; Berg, A. A.; Famiglietti, J. S.
2005-01-01
Land surface models (LSMs) are computer programs, similar to weather and climate prediction models, which simulate the stocks and fluxes of water (including soil moisture, snow, evaporation, and runoff) and energy (including the temperature of and sensible heat released from the soil) after they arrive on the land surface as precipitation and sunlight. It is not currently possible to measure all of the variables of interest everywhere on Earth with sufficient accuracy and space-time resolution. Hence LSMs have been developed to integrate the available observations with our understanding of the physical processes involved, using powerful computers, in order to map these stocks and fluxes as they change in time. The maps are used to improve weather forecasts, support water resources and agricultural applications, and study the Earth's water cycle and climate variability. NASA's Global Land Data Assimilation System (GLDAS) project facilitates testing of several different LSMs with a variety of input datasets (e.g., precipitation, plant type).
A Software Development Platform for Wearable Medical Applications.
Zhang, Ruikai; Lin, Wei
2015-10-01
Wearable medical devices have become a leading trend in the healthcare industry. Microcontrollers are computers on a chip with sufficient processing power and are the preferred embedded computing units in those devices. We have developed a software platform specifically for the design of wearable medical applications with a small code footprint on the microcontrollers. It is supported by the open-source real-time operating system FreeRTOS and supplemented with a set of standard APIs for the architecture-specific hardware interfaces on the microcontrollers for data acquisition and wireless communication. We modified the tick counter routine in FreeRTOS to include a real-time soft clock. When combined with the multitasking features in FreeRTOS, the platform offers quick development of wearable applications and easy porting of the application code to different microprocessors. Test results have demonstrated that application software developed using this platform is highly efficient in CPU usage while maintaining a small code footprint to accommodate the limited memory space in microcontrollers.
Reducing power consumption during execution of an application on a plurality of compute nodes
Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.
2013-09-10
Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.
Matching of energetic, mechanic and control characteristics of positioning actuator
NASA Astrophysics Data System (ADS)
Nosova, N. Y.; Misyurin, S. Yu.; Kreinin, G. V.
2017-12-01
The problem of the preliminary choice of parameters for the power channel of an automated drive is discussed. The drive of a mechatronic complex divides into two main units: power and control. The first determines the energy capabilities and, as a rule, the overall dimensions of the complex. Sufficient capacity in the power unit is a necessary condition for successful solution of control tasks without excessive complication of the control system structure. Preliminary selection of parameters is carried out based on the condition of providing the necessary drive power. The proposed approach is based on: a study of a sufficiently developed but not excessive dynamic model of the power block with the help of a conditional test control system; a transition to a normalized model with the formation of similarity criteria; and construction of the synthesis procedure.
Artificial life and the Chinese room argument.
Anderson, David; Copeland, B Jack
2002-01-01
"Strong artificial life" refers to the thesis that a sufficiently sophisticated computer simulation of a life form is a life form in its own right. Can John Searle's Chinese room argument [12]-originally intended by him to show that the thesis he dubs "strong AI" is false-be deployed against strong ALife? We have often encountered the suggestion that it can be (even in print; see Harnad [8]). We do our best to transfer the argument from the domain of AI to that of ALife. We do so in order to show once and for all that the Chinese room argument proves nothing about ALife. There may indeed be powerful philosophical objections to the thesis of strong ALife, but the Chinese room argument is not among them.
NASA Technical Reports Server (NTRS)
1987-01-01
The United States and other countries face the problem of waste disposal in an economical, environmentally safe manner. A widely applied solution adopted by Americans is "waste to energy": incinerating the refuse and using the steam produced by trash burning to drive an electricity-producing generator. NASA's computer program PRESTO II (Performance of Regenerative Superheated Steam Turbine Cycles) provides power engineering companies, including Blount Energy Resources Corporation of Alabama, with the ability to model such features as process steam extraction, induction and feedwater heating by external sources, peaking, and high back pressure. Expansion line efficiency, exhaust loss, leakage, mechanical losses, and generator losses are used to calculate the cycle heat rate. The computed generator output is sufficiently precise that it can be used to verify performance quoted in turbine generator suppliers' proposals.
Kim, Dae-Hyeong; Lu, Nanshu; Ma, Rui; Kim, Yun-Soung; Kim, Rak-Hwan; Wang, Shuodao; Wu, Jian; Won, Sang Min; Tao, Hu; Islam, Ahmad; Yu, Ki Jun; Kim, Tae-il; Chowdhury, Raeed; Ying, Ming; Xu, Lizhi; Li, Ming; Chung, Hyun-Joong; Keum, Hohyun; McCormick, Martin; Liu, Ping; Zhang, Yong-Wei; Omenetto, Fiorenzo G; Huang, Yonggang; Coleman, Todd; Rogers, John A
2011-08-12
We report classes of electronic systems that achieve thicknesses, effective elastic moduli, bending stiffnesses, and areal mass densities matched to the epidermis. Unlike traditional wafer-based technologies, laminating such devices onto the skin leads to conformal contact and adequate adhesion based on van der Waals interactions alone, in a manner that is mechanically invisible to the user. We describe systems incorporating electrophysiological, temperature, and strain sensors, as well as transistors, light-emitting diodes, photodetectors, radio frequency inductors, capacitors, oscillators, and rectifying diodes. Solar cells and wireless coils provide options for power supply. We used this type of technology to measure electrical activity produced by the heart, brain, and skeletal muscles and show that the resulting data contain sufficient information for an unusual type of computer game controller.
Recurrence Density Enhanced Complex Networks for Nonlinear Time Series Analysis
NASA Astrophysics Data System (ADS)
Costa, Diego G. De B.; Reis, Barbara M. Da F.; Zou, Yong; Quiles, Marcos G.; Macau, Elbert E. N.
We introduce a new method, entitled Recurrence Density Enhanced Complex Network (RDE-CN), to properly analyze nonlinear time series. Our method first transforms a recurrence plot into a figure of a reduced number of points, yet preserving the main and fundamental recurrence properties of the original plot. This resulting figure is then reinterpreted as a complex network, which is further characterized by network statistical measures. We illustrate the computational power of the RDE-CN approach with time series from both the logistic map and experimental fluid flows, which show that our method distinguishes different dynamics as well as traditional recurrence analysis does. Therefore, the proposed methodology characterizes the recurrence matrix adequately, while using a reduced set of points from the original recurrence plots.
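The pipeline's final steps, thresholding a recurrence matrix and reading it as a network adjacency matrix, can be sketched for the logistic map as follows; the recurrence threshold is illustrative, and the density-enhancing condensation step that defines RDE-CN itself is omitted.

```python
import numpy as np

# Build a recurrence matrix from a logistic-map series, reinterpret it as an
# adjacency matrix, and report the mean degree of the resulting network.
r, n = 4.0, 500
x = np.empty(n); x[0] = 0.4
for i in range(1, n):
    x[i] = r * x[i - 1] * (1.0 - x[i - 1])

dist = np.abs(x[:, None] - x[None, :])      # pairwise distances (1-D state)
eps = 0.05                                  # illustrative recurrence threshold
A = (dist < eps).astype(int)
np.fill_diagonal(A, 0)                      # drop self-loops (the identity line)
print("mean degree:", A.sum(axis=1).mean())
```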
Conformal piezoelectric energy harvesting and storage from motions of the heart, lung, and diaphragm
Dagdeviren, Canan; Yang, Byung Duk; Su, Yewang; Tran, Phat L.; Joe, Pauline; Anderson, Eric; Xia, Jing; Doraiswamy, Vijay; Dehdashti, Behrooz; Feng, Xue; Lu, Bingwei; Poston, Robert; Khalpey, Zain; Ghaffari, Roozbeh; Huang, Yonggang; Slepian, Marvin J.; Rogers, John A.
2014-01-01
Here, we report advanced materials and devices that enable high-efficiency mechanical-to-electrical energy conversion from the natural contractile and relaxation motions of the heart, lung, and diaphragm, demonstrated in several different animal models, each of which has organs with sizes that approach human scales. A cointegrated collection of such energy-harvesting elements with rectifiers and microbatteries provides an entire flexible system, capable of viable integration with the beating heart via medical sutures and operation with efficiencies of ∼2%. Additional experiments, computational models, and results in multilayer configurations capture the key behaviors, illuminate essential design aspects, and offer sufficient power outputs for operation of pacemakers, with or without battery assist. PMID:24449853
Estimating Bias Error Distributions
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Finley, Tom D.
2001-01-01
This paper formulates a general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two) are available. A new perspective is that the bias error distribution can be found as the solution of an intrinsic functional equation in a domain. Based on this theory, scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as its bias error distribution can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. The methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
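As illustrative notation only (the paper's intrinsic functional equation is not reproduced here), the power-series assumption the abstract mentions can be written as:

```latex
% bias of a reading m(x) modeled as a truncated power series in the domain,
% with coefficients constrained by the functional equation and by the two
% standard values x_0, x_1 at which the bias is known exactly
b(x) \;=\; m(x) - x \;\approx\; \sum_{k=0}^{K} a_k\, x^k,
\qquad b(x_i) = m(x_i) - x_i \quad (i = 0, 1).
```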
Anomalous Diffraction in Crystallographic Phase Evaluation
Hendrickson, Wayne A.
2014-01-01
X-ray diffraction patterns from crystals of biological macromolecules contain sufficient information to define atomic structures, but atomic positions are inextricable without having electron-density images. Diffraction measurements provide amplitudes, but the computation of electron density also requires phases for the diffracted waves. The resonance phenomenon known as anomalous scattering offers a powerful solution to this phase problem. Exploiting scattering resonances from diverse elements, the methods of multiwavelength anomalous diffraction (MAD) and single-wavelength anomalous diffraction (SAD) now predominate for de novo determinations of atomic-level biological structures. This review describes the physical underpinnings of anomalous diffraction methods, the evolution of these methods to their current maturity, the elements, procedures and instrumentation used for effective implementation, and the realm of applications. PMID:24726017
Passive dendrites enable single neurons to compute linearly non-separable functions.
Cazé, Romain Daniel; Humphries, Mark; Gutkin, Boris
2013-01-01
Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. Taken together our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non-separable functions.
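A minimal Python sketch (a toy model, not the authors' biophysical one) shows how purely saturating, sub-linear dendritic summation already suffices for a linearly non-separable function; the function chosen here, (x1 OR x2) AND (x3 OR x4), is a standard example of such a function.

```python
from itertools import product

# Toy two-layer neuron: two saturating (sub-linear) dendritic sub-units feeding
# a somatic threshold. Computes (x1 OR x2) AND (x3 OR x4), which is linearly
# non-separable (no single weight vector and threshold can realize it).
def saturating_subunit(drive, ceiling=1.0):
    return min(drive, ceiling)            # purely sub-linear summation

def neuron(x):
    d1 = saturating_subunit(x[0] + x[1])  # sub-unit pooling inputs 1 and 2
    d2 = saturating_subunit(x[2] + x[3])  # sub-unit pooling inputs 3 and 4
    return int(d1 + d2 >= 2.0)            # somatic threshold: both sub-units saturated

for x in product((0, 1), repeat=4):
    target = int((x[0] or x[1]) and (x[2] or x[3]))
    assert neuron(x) == target            # exhaustive check over all 16 inputs
print("saturating dendrites implement (x1 OR x2) AND (x3 OR x4)")
```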
Reducing power consumption during execution of an application on a plurality of compute nodes
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-06-05
Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.
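A hedged Python sketch of the general idea (hypothetical API, not the patented implementation): the application marks portions of itself with directives, and the node runtime lowers component power while executing those portions and restores it afterwards.

```python
from contextlib import contextmanager

# Hypothetical node runtime honoring application power-consumption directives.
class ComputeNode:
    def __init__(self):
        self.power = {"cpu": "full", "memory": "full", "network": "full"}

    @contextmanager
    def directive(self, **levels):
        saved = dict(self.power)
        self.power.update(levels)   # reduce power to the named components
        try:
            yield
        finally:
            self.power = saved      # restore when the annotated portion ends

def compute_bound_phase():
    return sum(i * i for i in range(10_000))  # placeholder for real work

node = ComputeNode()
with node.directive(network="idle"):   # directive: this phase does no communication
    compute_bound_phase()
print(node.power)                      # full power restored after the phase
```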
5 CFR 838.805 - OPM computation of formulas in computing the designated base.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false OPM computation of formulas in computing... computing the designated base. (a) A court order awarding a former spouse survivor annuity is not a court...) To provide sufficient instructions and information for OPM to compute the amount of a former spouse...
NASA Technical Reports Server (NTRS)
Martin, Ken E.; Esztergalyos, J.
1992-01-01
The Bonneville Power Administration (BPA) uses IRIG-B transmitted over microwave as its primary system time dissemination. Problems with accuracy and reliability have led to ongoing research into better methods. BPA has also developed and deployed a unique fault locator which uses precise clocks synchronized by a pulse over microwaves. It automatically transmits the data to a central computer for analysis. A proposed system could combine fault location timing and time dissemination into a Global Positioning System (GPS) timing receiver and close the verification loop through a master station at the Dittmer Control Center. Such a system would have many advantages, including lower cost, higher reliability, and wider industry support. Test results indicate the GPS has sufficient accuracy and reliability for this and other current timing requirements including synchronous phase angle measurements. A phasor measurement system which provides phase angle has recently been tested with excellent results. Phase angle is a key parameter in power system control applications including dynamic braking, DC modulation, remedial action schemes, and system state estimation. Further research is required to determine the applications which can most effectively use real-time phase angle measurements and the best method to apply them.
Lasers with intra-cavity phase elements
NASA Astrophysics Data System (ADS)
Gulses, A. Alkan; Kurtz, Russell; Islas, Gabriel; Anisimov, Igor
2018-02-01
Conventional laser resonators yield multimodal output, especially at high powers and short cavity lengths. Since high-order modes exhibit large divergence, it is desirable to suppress them to improve laser quality. Traditionally, such modal discrimination can be achieved by simple apertures that impose absorptive loss on large-diameter modes while allowing the lower orders, such as the fundamental Gaussian, to pass through. However, aperture-based modal discrimination may not be sufficient for short-cavity lasers, resulting in multimodal operation as well as power loss and overheating in the absorptive part of the aperture. In research aimed at improving laser mode control with minimal energy loss, systematic experiments were performed using phase-only elements. These consisted of an intra-cavity step function and a diffractive out-coupler made from a computer-generated hologram. The platform was a 15-cm-long solid-state laser employing a neodymium-doped yttrium orthovanadate crystal rod and producing 1064 nm multimodal laser output. The intra-cavity phase elements (PEs) proved highly effective in obtaining beams with reduced M-squared values and increased output powers, yielding improved values of radiance. The utilization of more sophisticated diffractive elements is promising for more difficult laser systems.
High performance 3D adaptive filtering for DSP based portable medical imaging systems
NASA Astrophysics Data System (ADS)
Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark
2015-03-01
Portable medical imaging devices have proven valuable for emergency medical services in both field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size, and cost, portable imaging devices must still deliver high-quality images. 3D adaptive filtering is one of the most advanced techniques for noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSPs) have been developed that attain high processing performance while maintaining low levels of power dissipation, enabling the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of 512x256x128 voxels sampled at 10 MVoxels/s by a 3D ultrasound probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.
Computation of shear-induced collective-diffusivity in emulsions
NASA Astrophysics Data System (ADS)
Malipeddi, Abhilash Reddy; Sarkar, Kausik
2017-11-01
The shear-induced collective diffusivity of drops in an emulsion is calculated through simulation. A front-tracking finite difference method is used to integrate the Navier-Stokes equations. When a cloud of drops is subjected to shear flow, after a certain time the width of the cloud increases with the 1/3 power of time. This scaling of drop-cloud width with time is characteristic of (sub-)diffusion arising from irreversible two-drop interactions, and the collective diffusivity is calculated from this relationship. A feature of the procedure adopted here is its modest computational requirement: a few drops (~70) sheared for a short time (~70 strain) is found to be sufficient to obtain a good estimate. As far as we know, collective diffusivity has not previously been calculated for drops through simulation. The computed values match experimental measurements reported in the literature. The diffusivity in emulsions is calculated for a range of capillary (Ca) and Reynolds (Re) numbers. It is found to be a unimodal function of Ca, similar to self-diffusivity. A sub-linear increase of the diffusivity with Re is seen for Re < 5. This work has been limited to a viscosity-matched case.
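The post-processing step the abstract describes (extracting a diffusivity from the w ~ t^(1/3) growth of the cloud width) can be sketched in Python on synthetic data; the scaling prefactor and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Synthetic cloud-width data obeying an assumed scaling w^3 = 12 * D * t,
# then a log-log fit to recover the 1/3 exponent and the diffusivity D.
rng = np.random.default_rng(0)
D_true = 0.05                                  # hypothetical diffusivity
t = np.linspace(10.0, 70.0, 200)               # dimensionless time (strain)
w = (12.0 * D_true * t) ** (1.0 / 3.0)
w += rng.normal(0.0, 0.01, t.size)             # measurement noise

slope, intercept = np.polyfit(np.log(t), np.log(w), 1)
D_est = np.exp(3.0 * intercept) / 12.0         # invert intercept = (1/3) ln(12 D)
print(f"fitted exponent {slope:.3f} (expect ~0.333), D ~ {D_est:.4f}")
```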
Launch of the I13-2 data beamline at the Diamond Light Source synchrotron
NASA Astrophysics Data System (ADS)
Bodey, A. J.; Rau, C.
2017-06-01
Users of the Diamond-Manchester Imaging Branchline I13-2 commonly spend many months analysing the large volumes of tomographic data generated in a single beamtime. This is due to the difficulties inherent in performing complicated, computationally-expensive analyses on large datasets with workstations of limited computing power. To improve productivity, a ‘data beamline’ was launched in January 2016. Users are scheduled for visits to the data beamline in the same way as for regular beamlines, with bookings made via the User Administration System and provision of financial support for travel and subsistence. Two high-performance graphics workstations were acquired, with sufficient RAM to enable simultaneous analysis of several tomographic volumes. Users are given high priority on Diamond’s central computing cluster for the duration of their visit, and if necessary, archived data are restored to a high-performance disk array. Within the first six months of operation, thirteen user visits were made, lasting an average of 4.5 days each. The I13-2 data beamline was the first to be launched at Diamond Light Source and, to the authors’ knowledge, the first to be formalised in this way at any synchrotron.
A Pythonic Approach for Computational Geosciences and Geo-Data Processing
NASA Astrophysics Data System (ADS)
Morra, G.; Yuen, D. A.; Lee, S. M.
2016-12-01
Computational methods and data analysis play a constantly increasing role in Earth sciences; however, students and professionals must climb a steep learning curve before reaching a level sufficient to run effective models. Furthermore, the recent arrival of powerful new machine learning tools such as Torch and TensorFlow has opened new possibilities but also created a new realm of complications related to the completely different technology employed. We present a series of examples entirely written in Python, a language that combines the simplicity of Matlab with the power and speed of compiled languages such as C, and apply them to a wide range of geological processes such as porous media flow, multiphase fluid dynamics, creeping flow, and many-fault interaction. We also explore ways in which machine learning can be employed in combination with numerical modeling, from immediately interpreting a large number of modeling results to optimizing a set of modeling parameters toward a desired simulation. We show that with Python, undergraduate and graduate students can learn advanced numerical technologies with minimal dedicated effort, which in turn encourages them to develop more numerical tools and quickly progress in their computational abilities. We also show how Python allows combining modeling with machine learning like LEGO pieces, thereby simplifying the transition toward a new kind of scientific geo-modeling. The conclusion is that Python is an ideal tool for creating a geosciences infrastructure that allows users to quickly develop tools, reuse techniques, and encourage collaborative efforts to interpret and integrate geo-data in profound new ways.
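In the classroom spirit of the abstract, here is a self-contained Python example of the kind described: an explicit finite-difference solver for 1D pressure diffusion in a porous medium (all parameters illustrative, not taken from the authors' materials).

```python
import numpy as np

# 1D pressure diffusion in a porous medium, p_t = alpha * p_xx, solved with an
# explicit finite-difference scheme (stable because dt <= 0.5 * dx^2 / alpha).
nx, nt = 101, 2000
L, alpha = 1.0, 1.0e-3          # domain length [m], hydraulic diffusivity [m^2/s]
dx = L / (nx - 1)
dt = 0.4 * dx * dx / alpha      # time step chosen inside the stability limit

p = np.zeros(nx)
p[0] = 1.0                      # fixed-pressure boundary (injection side)
for _ in range(nt):
    p[1:-1] += alpha * dt / dx**2 * (p[2:] - 2.0 * p[1:-1] + p[:-2])

print("pressure at mid-domain after", nt, "steps:", round(p[nx // 2], 4))
```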
Next Generation Seismic Imaging; High Fidelity Algorithms and High-End Computing
NASA Astrophysics Data System (ADS)
Bevc, D.; Ortigosa, F.; Guitton, A.; Kaelin, B.
2007-05-01
The rich oil reserves of the Gulf of Mexico are buried in deep and ultra-deep waters up to 30,000 feet from the surface. Minerals Management Service (MMS), the federal agency in the U.S. Department of the Interior that manages the nation's oil, natural gas and other mineral resources on the outer continental shelf in federal offshore waters, estimates that the Gulf of Mexico holds 37 billion barrels of "undiscovered, conventionally recoverable" oil, which, at $50/barrel, would be worth approximately $1.85 trillion. These reserves are very difficult to find and reach due to the extreme depths. Technological advances in seismic imaging represent an opportunity to overcome this obstacle by providing more accurate models of the subsurface. Among these advances, Reverse Time Migration (RTM) yields the best possible images. RTM is based on the solution of the two-way acoustic wave equation and relies on the velocity model to image turning waves. These turning waves are particularly important for unraveling subsalt reservoirs and delineating salt flanks, a natural trap for oil and gas. Because it relies on an accurate velocity model, RTM opens a new frontier in designing better velocity estimation algorithms. RTM has been widely recognized as the next chapter in seismic exploration, as it can overcome the limitations of current migration methods in imaging the complex geologic structures that exist in the Gulf of Mexico. The chief impediment to large-scale, routine deployment of RTM has been a lack of sufficient computer power: RTM needs thirty times the computing power used in exploration today to be commercially viable and widely usable. Advancing seismic imaging to the next level of precision therefore poses a multi-disciplinary challenge. To overcome these challenges, the Kaleidoscope project, a partnership between Repsol YPF, Barcelona Supercomputing Center, 3DGeo Inc., and IBM, brings together the necessary components of modeling, algorithms, and the computing power of the MareNostrum supercomputer in Barcelona to realize the promise of RTM, incorporate it into daily processing flows, and help solve exploration problems in a highly cost-effective way. Uniquely, the Kaleidoscope Project is simultaneously integrating software (algorithms) and hardware (Cell BE), steps that are traditionally taken sequentially. This integration of software and hardware will accelerate seismic imaging by several orders of magnitude compared to conventional solutions running on standard Linux clusters.
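For reference, RTM rests on the textbook two-way acoustic wave equation, with images commonly formed by cross-correlating forward- and reverse-time wavefields; the form below is the standard formulation, not a description of the Kaleidoscope code:

```latex
% two-way acoustic wave equation (c(x): velocity model, p: pressure wavefield)
\frac{\partial^2 p}{\partial t^2} \;=\; c(\mathbf{x})^2\, \nabla^2 p,
% standard cross-correlation imaging condition: S is the source wavefield
% extrapolated forward in time, R the receiver wavefield extrapolated in
% reverse time
\qquad
I(\mathbf{x}) \;=\; \sum_{t} S(\mathbf{x}, t)\, R(\mathbf{x}, t).
```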
47 CFR 80.1015 - Power supply.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Power supply. 80.1015 Section 80.1015... MARITIME SERVICES Radiotelephone Installations Required by the Bridge-to-Bridge Act § 80.1015 Power supply. (a) There must be readily available for use under normal load conditions, a power supply sufficient...
47 CFR 80.1015 - Power supply.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Power supply. 80.1015 Section 80.1015... MARITIME SERVICES Radiotelephone Installations Required by the Bridge-to-Bridge Act § 80.1015 Power supply. (a) There must be readily available for use under normal load conditions, a power supply sufficient...
Avaricious and Envious: Confessions of a Computer-Literate Educator.
ERIC Educational Resources Information Center
Burniske, R. W.
2001-01-01
How did educators become enslaved to networked computers and mesmerized by iridescent screens? The computer encourages endless acquisitions, some motivated by intellectual avarice, others by petty jealousies incited by colleagues raving about the latest "innovation." How much computer literacy is sufficient? What other literacies must…
Self sufficient wireless transmitter powered by foot-pumped urine operating wearable MFC.
Taghavi, M; Stinchcombe, A; Greenman, J; Mattoli, V; Beccai, L; Mazzolai, B; Melhuish, C; Ieropoulos, I A
2015-12-10
The first self-sufficient system powered by a wearable energy generator based on microbial fuel cell (MFC) technology is introduced. MFCs made from compliant materials were built into the frame of a pair of socks and fed with urine via a manual, gait-driven pump. The simple, single-loop circulatory system of fish was the inspiration for the design of the manual pump. A wireless programmable communication module, engineered to operate within the range of the generated electricity, was employed, which opens a new avenue for research in the utilisation of waste products for powering portable as well as wearable electronics.
Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-01-10
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-04-17
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
47 CFR 80.875 - VHF radiotelephone power supply.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false VHF radiotelephone power supply. 80.875 Section... to Subpart W § 80.875 VHF radiotelephone power supply. (a) There must be readily available for use under normal load conditions a power supply sufficient to simultaneously energize the VHF transmitter at...
47 CFR 80.875 - VHF radiotelephone power supply.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false VHF radiotelephone power supply. 80.875 Section... to Subpart W § 80.875 VHF radiotelephone power supply. (a) There must be readily available for use under normal load conditions a power supply sufficient to simultaneously energize the VHF transmitter at...
Modeling Human-Computer Decision Making with Covariance Structure Analysis.
ERIC Educational Resources Information Center
Coovert, Michael D.; And Others
Arguing that sufficient theory exists about the interplay between human information processing, computer systems, and the demands of various tasks to construct useful theories of human-computer interaction, this study presents a structural model of human-computer interaction and reports the results of various statistical analyses of this model.…
Securing the Data Storage and Processing in Cloud Computing Environment
ERIC Educational Resources Information Center
Owens, Rodney
2013-01-01
Organizations increasingly utilize cloud computing architectures to reduce costs and energy consumption both in the data warehouse and on mobile devices by better utilizing the computing resources available. However, the security and privacy issues with publicly available cloud computing infrastructures have not been studied to a sufficient depth…
A Real-Time Capable Software-Defined Receiver Using GPU for Adaptive Anti-Jam GPS Sensors
Seo, Jiwon; Chen, Yu-Hsuan; De Lorenzo, David S.; Lo, Sherman; Enge, Per; Akos, Dennis; Lee, Jiyun
2011-01-01
Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities. PMID:22164116
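A minimal numpy sketch of narrowband adaptive beamsteering of the kind such a receiver performs (generic MVDR weights on a four-element array; the geometry, arrival angles, and jammer power are hypothetical, and none of this reflects the paper's GPU implementation):

```python
import numpy as np

# Four-element half-wavelength linear array: place a unit-gain beam toward the
# GPS signal and an adaptive null toward a jammer using MVDR weights.
rng = np.random.default_rng(1)
n_elem, n_snap = 4, 4096

def steering(theta_deg):
    k = np.arange(n_elem)
    return np.exp(-1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

a_sig, a_jam = steering(10.0), steering(40.0)     # hypothetical arrival angles
jam = 30.0 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
noise = rng.standard_normal((n_elem, n_snap)) + 1j * rng.standard_normal((n_elem, n_snap))
x = np.outer(a_jam, jam) + noise                  # received snapshots (jammer + noise)

R = x @ x.conj().T / n_snap                       # sample covariance matrix
w = np.linalg.solve(R, a_sig)
w /= a_sig.conj() @ w                             # MVDR: unit gain toward the signal

print("gain toward signal:", abs(w.conj() @ a_sig))  # ~1.0
print("gain toward jammer:", abs(w.conj() @ a_jam))  # strongly suppressed
```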
Multiple Cylinder Free-Piston Stirling Machinery
NASA Astrophysics Data System (ADS)
Berchowitz, David M.; Kwon, Yong-Rak
For piston-cylinder machinery, there is a point in capacity or power at which an advantage accrues from increasing the number of piston-cylinder assemblies. In the case of Stirling machinery, where primary energy is transferred across the casing wall of the machine, this consideration is even more important, due primarily to the difference in scaling between basic power and the required heat transfer: heat transfer becomes progressively more limited as the size of the machine increases. Multiple cylinder machines tend to preserve the surface-area-to-volume ratio at more favorable levels. In addition, the spring effect of the working gas in the so-called alpha configuration is often sufficient to provide a high-frequency resonance point that improves the specific power. There are a number of possible multiple cylinder configurations. The simplest is an opposed pair of piston-displacer machines (beta configuration). A three-cylinder machine requires stepped pistons to obtain the proper volume phase relationships. Four- to six-cylinder configurations are also possible. A small demonstrator inline four-cylinder alpha machine has been built to demonstrate both cooling operation and power generation. Data from this machine verify theoretical expectations and are used to extrapolate the performance of future machines. Vibration levels are discussed, and it is argued that some multiple cylinder machines have no linear component of casing vibration but may have a nutating couple. Example applications are discussed, ranging from general-purpose coolers, computer cooling, and exhaust heat power extraction to some high-power engines.
Universal quantum gates for Single Cooper Pair Box based quantum computing
NASA Technical Reports Server (NTRS)
Echternach, P.; Williams, C. P.; Dultz, S. C.; Braunstein, S.; Dowling, J. P.
2000-01-01
We describe a method for achieving arbitrary 1-qubit gates and controlled-NOT gates within the context of the Single Cooper Pair Box (SCB) approach to quantum computing. Such gates are sufficient to support universal quantum computation.
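For reference, the universal set the abstract invokes is the standard one: arbitrary single-qubit unitaries plus the controlled-NOT, whose matrix in the computational basis is

```latex
% CNOT in the computational basis |00>, |01>, |10>, |11>; together with
% arbitrary single-qubit unitaries it generates any n-qubit unitary to
% arbitrary accuracy (the standard universality result)
\mathrm{CNOT} \;=\;
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{pmatrix}.
```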
Budget-based power consumption for application execution on a plurality of compute nodes
Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E
2013-02-05
Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
Budget-based power consumption for application execution on a plurality of compute nodes
Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D
2012-10-23
Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
7 CFR 1421.401 - DMA responsibilities.
Code of Federal Regulations, 2013 CFR
2013-01-01
... peanut MAL, and LDP program training offered by CCC. (4) Provide sufficient personnel, computer hardware, computer communications systems, and software, as determined necessary by CCC, to administer the peanut MAL...
7 CFR 1421.401 - DMA responsibilities.
Code of Federal Regulations, 2014 CFR
2014-01-01
... peanut MAL, and LDP program training offered by CCC. (4) Provide sufficient personnel, computer hardware, computer communications systems, and software, as determined necessary by CCC, to administer the peanut MAL...
7 CFR 1421.401 - DMA responsibilities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... peanut MAL, and LDP program training offered by CCC. (4) Provide sufficient personnel, computer hardware, computer communications systems, and software, as determined necessary by CCC, to administer the peanut MAL...
Software Assurance Curriculum Project Volume 4: Community College Education
2011-09-01
no previous programming or computer science experience expected) • Precalculus-ready (that is, proficiency sufficient to enter college-level... precalculus course) • English Composition I-ready (that is, proficiency sufficient to enter college-level English I course) Co-Requisite Discrete
33 CFR 385.26 - Project Implementation Reports.
Code of Federal Regulations, 2014 CFR
2014-07-01
... available science; (iii) Comply with all applicable Federal, State, and Tribal laws; (iv) Contain sufficient... boundary of regional computer models or projects whose effects cannot be captured in regional computer...
33 CFR 385.26 - Project Implementation Reports.
Code of Federal Regulations, 2011 CFR
2011-07-01
... available science; (iii) Comply with all applicable Federal, State, and Tribal laws; (iv) Contain sufficient... boundary of regional computer models or projects whose effects cannot be captured in regional computer...
33 CFR 385.26 - Project Implementation Reports.
Code of Federal Regulations, 2012 CFR
2012-07-01
... available science; (iii) Comply with all applicable Federal, State, and Tribal laws; (iv) Contain sufficient... boundary of regional computer models or projects whose effects cannot be captured in regional computer...
33 CFR 385.26 - Project Implementation Reports.
Code of Federal Regulations, 2013 CFR
2013-07-01
... available science; (iii) Comply with all applicable Federal, State, and Tribal laws; (iv) Contain sufficient... boundary of regional computer models or projects whose effects cannot be captured in regional computer...
Geerts, Hugo; Kennis, Ludo
2014-01-01
Clinical development in brain diseases has one of the lowest success rates in the pharmaceutical industry, and many promising rationally designed single-target R&D projects fail in expensive Phase III trials. By contrast, successful older CNS drugs have a rich pharmacology. This article provides arguments suggesting that highly selective single-target drugs are not sufficiently powerful to restore complex neuronal circuit homeostasis. A rationally designed multitarget project can be derisked by dialing in an additional symptomatic treatment effect on top of a disease-modification target. Alternatively, we expand upon a hypothetical workflow example using a humanized computer-based quantitative systems pharmacology platform. The hope is that incorporating rational multipharmacology into drug discovery could lead to more impactful polypharmacy drugs.
Polynomial Monogamy Relations for Entanglement Negativity.
Allen, Grant W; Meyer, David A
2017-02-24
The notion of nonclassical correlations is a powerful contrivance for explaining phenomena exhibited in quantum systems. It is well known, however, that quantum systems are not free to explore arbitrary correlations: the church of the smaller Hilbert space only accepts monogamous congregants. We demonstrate how to characterize the limits of what is quantum mechanically possible with a computable measure, entanglement negativity. We show that negativity only saturates the standard linear monogamy inequality in trivial cases implied by its monotonicity under local operations and classical communication, and derive a necessary and sufficient inequality which, for the first time, is a nonlinear higher degree polynomial. For very large quantum systems, we prove that the negativity can be distributed at least linearly for the tightest constraint and conjecture that it is at most linear.
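For readers unfamiliar with the measure, the computable quantity in question is the standard entanglement negativity; the monogamy question then asks how N over the cut A|B and A|C may coexist with N over A|BC (definition only; the paper's polynomial inequality is not reproduced here):

```latex
% entanglement negativity of a bipartite state rho_AB, with ||.||_1 the trace
% norm and T_B the partial transpose on subsystem B
N(\rho_{AB}) \;=\; \frac{\lVert \rho^{T_B} \rVert_1 - 1}{2}.
```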
Analyzing the Impacts of Increased Wind Power on Generation Revenue Sufficiency: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qin; Wu, Hongyu; Tan, Jin
2016-08-01
The Revenue Sufficiency Guarantee (RSG), as part of make-whole (or uplift) payments in electricity markets, is designed to recover the generation resources' offer-based production costs that are not otherwise covered by their market revenues. Increased penetrations of wind power will bring significant impacts to the RSG payments in the markets. However, literature related to this topic is sparse. This paper first reviews the industrial practices of implementing RSG in major U.S. independent system operators (ISOs) and regional transmission organizations (RTOs) and then develops a general RSG calculation method. Finally, an 18-bus test system is adopted to demonstrate the impacts of increased wind power on RSG payments.
Analyzing the Impacts of Increased Wind Power on Generation Revenue Sufficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qin; Wu, Hongyu; Tan, Jin
2016-11-14
The Revenue Sufficiency Guarantee (RSG), as part of make-whole (or uplift) payments in electricity markets, is designed to recover the generation resources' offer-based production costs that are not otherwise covered by their market revenues. Increased penetrations of wind power will bring significant impacts to the RSG payments in the markets. However, literature related to this topic is sparse. This paper first reviews the industrial practices of implementing RSG in major U.S. independent system operators (ISOs) and regional transmission organizations (RTOs) and then develops a general RSG calculation method. Finally, an 18-bus test system is adopted to demonstrate the impacts of increased wind power on RSG payments.
NASA Astrophysics Data System (ADS)
Ford, Eric B.; Dindar, Saleh; Peters, Jorg
2015-08-01
The realism of astrophysical simulations and statistical analyses of astronomical data is set by the available computational resources, so astronomers and astrophysicists are constantly pushing the limits of computational capabilities. For decades, astronomers benefited from massive improvements in computational power that were driven primarily by increasing clock speeds and required relatively little attention to the details of the computational hardware. For nearly a decade, increases in computational capabilities have come primarily from increasing the degree of parallelism rather than increasing clock speeds. Further increases in computational capabilities will likely be led by many-core architectures such as Graphical Processing Units (GPUs) and Intel Xeon Phi. Successfully harnessing these new architectures requires significantly more understanding of the hardware architecture, cache hierarchy, compiler capabilities, and network characteristics. I will provide an astronomer's overview of the opportunities and challenges provided by modern many-core architectures and elastic cloud computing. The primary goal is to help an astronomical audience understand what types of problems are likely to yield more than an order of magnitude speed-up and which problems are unlikely to parallelize sufficiently efficiently to be worth the development time and/or costs. I will draw on my experience leading a team in developing the Swarm-NG library for parallel integration of large ensembles of small n-body systems on GPUs, as well as several smaller software projects. I will share lessons learned from collaborating with computer scientists, including both technical and soft skills. Finally, I will discuss the challenges of training the next generation of astronomers to be proficient in this new era of high-performance computing, drawing on experience teaching a graduate class on High-Performance Scientific Computing for Astrophysics and organizing a 2014 advanced summer school on Bayesian Computing for Astronomical Data Analysis with the support of the Penn State Center for Astrostatistics and Institute for CyberScience.
NASA Technical Reports Server (NTRS)
Duque, Earl P. N.; Johnson, Wayne; vanDam, C. P.; Chao, David D.; Cortes, Regina; Yee, Karen
1999-01-01
Accurate, reliable, and robust numerical predictions of wind turbine rotor power remain a challenge to the wind energy industry. The literature reports various methods that compare predictions to experiments. The methods vary from Blade Element Momentum theory (BEM) and Vortex Lattice (VL) to variants of Reynolds-averaged Navier-Stokes (RaNS). The BEM and VL methods consistently show discrepancies in predicting rotor power at higher wind speeds, mainly due to inadequacies in inboard stall and stall-delay models. The RaNS methodologies show promise in predicting blade stall, but inaccurate rotor vortex wake convection, boundary layer turbulence modeling, and grid resolution have limited their accuracy. In addition, the inherently unsteady stalled flow conditions become computationally expensive for even the best endowed research labs. Although numerical power predictions have been compared to experiment, the availability of good wind turbine data sufficient for code validation has been limited. This paper draws on experimental data extracted from the IEA Annex XIV download site for the NREL Combined Experiment phase II and phase IV rotor, including data further reduced to steady wind and zero yaw conditions suitable for comparison to "steady wind" rotor power predictions. In summary, the paper will present and discuss the capabilities and limitations of the three numerical methods and make available a database of experimental data suitable to help other numerical methods practitioners validate their own work.
Yu, Jun; Shen, Zhengxiang; Sheng, Pengfeng; Wang, Xiaoqiang; Hailey, Charles J; Wang, Zhanshan
2018-03-01
The nested grazing incidence telescope can achieve a large collecting area in x-ray astronomy with a large number of closely packed, thin conical mirrors. Exploiting surface metrological data, the ray tracing method used to reconstruct the shell surface topography and evaluate the imaging performance is a powerful tool for iterative improvement of the fabrication process. However, current two-dimensional (2D) ray tracing codes, especially when utilized with densely sampled surface shape data, may not provide sufficient reconstruction accuracy and are computationally cumbersome. In particular, the 2D ray tracing currently employed considers coplanar rays and thus simulates only rays in the meridional plane; this captures axial figure errors but leaves other important errors, such as roundness errors, unaccounted for. We introduce a semianalytic, three-dimensional (3D) ray tracing approach for x-ray optics that overcomes these shortcomings and is both computationally fast and accurate. We first introduce the principles and the computational details of this 3D ray tracing method. Then computer simulations of this approach, compared to 2D ray tracing, are demonstrated using an ideal conic Wolter-I telescope for benchmarking. Finally, the present 3D ray tracing is used to evaluate the performance of a prototype x-ray telescope fabricated for the enhanced x-ray timing and polarization mission.
NASA Astrophysics Data System (ADS)
Jaranowski, Piotr; Królak, Andrzej
2000-03-01
We develop the analytic and numerical tools for data analysis of the continuous gravitational-wave signals from spinning neutron stars for ground-based laser interferometric detectors. The statistical data analysis method that we investigate is maximum likelihood detection, which for the case of Gaussian noise reduces to matched filtering. We study in detail the statistical properties of the optimum functional that needs to be calculated in order to detect the gravitational-wave signal and estimate its parameters. We find it particularly useful to divide the parameter space into elementary cells such that the values of the optimal functional are statistically independent in different cells. We derive formulas for false alarm and detection probabilities for both the optimal and the suboptimal filters. We assess the computational requirements needed to perform the signal search. We compare a number of criteria for building sufficiently accurate templates for our data analysis scheme. We verify the validity of our concepts and formulas by means of Monte Carlo simulations. We present algorithms by which one can estimate the parameters of the continuous signals accurately. We find, confirming earlier work of other authors, that given 100 Gflops of computational power, an all-sky search with an observation time of 7 days and a directed search with an observation time of 120 days are possible, whereas an all-sky search over 120 days of observation is computationally prohibitive.
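The matched-filtering backbone of the analysis takes the textbook form below, stated here for orientation rather than as the paper's exact detection statistic; S_h(f) denotes the one-sided noise spectral density:

```latex
% noise-weighted inner product and log-likelihood ratio for a template h in
% data x; the detection statistic follows by maximizing over signal parameters
(x \mid h) \;=\; 4\,\mathrm{Re}\!\int_0^{\infty}
  \frac{\tilde{x}(f)\,\tilde{h}^{*}(f)}{S_h(f)}\, df,
\qquad
\ln \Lambda \;=\; (x \mid h) - \tfrac{1}{2}\,(h \mid h).
```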
Development of the Tensoral Computer Language
NASA Technical Reports Server (NTRS)
Ferziger, Joel; Dresselhaus, Eliot
1996-01-01
The research scientist or engineer wishing to perform large-scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods, and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general: the fundamental objects in Tensoral represent tensor fields and the operators that act on them, the numerical implementation of these tensors and operators is completely and flexibly programmable, and new mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages: Tensoral tensor operations coexist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very high level: tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient: it is a compiled language, and database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.
Description of a MIL-STD-1553B Data Bus Ada Driver for the LeRC EPS Testbed
NASA Technical Reports Server (NTRS)
Mackin, Michael A.
1995-01-01
This document describes the software designed to provide communication between control computers in the NASA Lewis Research Center Electrical Power System Testbed using MIL-STD-1553B. The software drivers are coded in the Ada programming language and were developed on an MS-DOS-based computer workstation. The Electrical Power System (EPS) Testbed is a reduced-scale prototype space station electrical power system. The power system manages and distributes electrical power from the sources (batteries or photovoltaic arrays) to the end-user loads. The primary electrical system operates at 120 volts DC, and the secondary system operates at 28 volts DC. The devices which direct the flow of electrical power are controlled by a network of six control computers. Data and control messages are passed between the computers using the MIL-STD-1553B network. One of the computers, the Power Management Controller (PMC), controls the primary power distribution and another, the Load Management Controller (LMC), controls the secondary power distribution. Each of these computers communicates with two other computers which act as subsidiary controllers. These subsidiary controllers are, in turn, connected to the devices which directly control the flow of electrical power.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-02
... in the United States in a sufficient and reasonably available amount or are not of a satisfactory... sufficient and reasonably available amount or are not of a satisfactory quality; (C) rolling stock or power...
Ignition in tokamaks with modulated source of auxiliary heating
NASA Astrophysics Data System (ADS)
Morozov, D. Kh
2017-12-01
It is shown that ignition may be achieved in tokamaks with a modulated power source. The time-averaged source power may be smaller than the steady-state source power that is sufficient for ignition. Nevertheless, the maximal power must be large enough, because ignition must be achieved within a finite time interval.
Fluidic-thermochromic display device
NASA Technical Reports Server (NTRS)
Grafstein, D.; Hilborn, E. H.
1968-01-01
A fluidic decoder and display device has low power requirements for temperature control of thermochromic materials. An electro-to-fluid converter translates incoming electrical signals into pneumatic signals of sufficient power to operate the fluidic logic elements.
FEM numerical model study of heating in magnetic nanoparticles
NASA Astrophysics Data System (ADS)
Pearce, John A.; Cook, Jason R.; Hoopes, P. Jack; Giustini, Andrew
2011-03-01
Electromagnetic heating of nanoparticles is complicated by the extremely short thermal relaxation time constants and the difficulty of coupling sufficient power into the particles to achieve desired temperatures. Magnetic field heating by the hysteresis loop mechanism at frequencies between about 100 and 300 kHz has proven to be an effective mechanism in magnetic nanoparticles. Experiments at 2.45 GHz show that Fe3O4 magnetite nanoparticle dispersions in the range of 10^12 to 10^13 NP/mL also heat substantially at this frequency. An FEM numerical model study was undertaken to estimate the order of magnitude of volume power density, Qgen (W m^-3), required to achieve significant heating in evenly dispersed and aggregated clusters of nanoparticles. The FEM models were computed using Comsol Multiphysics; consequently the models were confined to continuum formulations and did not include film nano-dimension heat transfer effects at the nanoparticle surface. As an example, the models indicate that for a single 36 nm diameter particle at an equivalent dispersion of 10^13 NP/mL located within one control volume (1.0 x 10^-19 m^3) of a capillary vessel, a power density in the neighborhood of 10^17 W m^-3 is required to achieve a steady-state particle temperature of 52°C; the total power coupled to the particle is 2.44 μW. As a uniformly distributed particle cluster moves farther from the capillary, the required power density decreases markedly. Finally, the tendency of particles in vivo to cluster together at separation distances much smaller than those of the uniform distribution further reduces the required power density.
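The quoted 2.44 μW follows directly from the stated power density and particle size, as the arithmetic below checks:

```latex
% power absorbed by one 36 nm diameter particle (r = 18 nm) at Qgen = 10^17 W/m^3
V_p = \tfrac{4}{3}\pi r^{3}
    = \tfrac{4}{3}\pi\,(18\times 10^{-9}\,\mathrm{m})^{3}
    \approx 2.44\times 10^{-23}\,\mathrm{m}^{3},
\qquad
P = Q_{\mathrm{gen}}\, V_p
  \approx 10^{17}\,\mathrm{W\,m^{-3}} \times 2.44\times 10^{-23}\,\mathrm{m}^{3}
  \approx 2.44\,\mu\mathrm{W}.
```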
Power-law expansion of the Universe from the bosonic Lorentzian type IIB matrix model
NASA Astrophysics Data System (ADS)
Ito, Yuta; Nishimura, Jun; Tsuchiya, Asato
2015-11-01
Recent studies on the Lorentzian version of the type IIB matrix model show that a (3+1)D expanding universe emerges dynamically from the (9+1)D space-time predicted by superstring theory. Here we study a bosonic matrix model obtained by omitting the fermionic matrices. With the adopted simplification and the usage of a large-scale parallel computer, we are able to perform Monte Carlo calculations with matrix size up to N = 512, which is twenty times larger than that used previously for studies of the original model. When the matrix size is larger than some critical value N_c ≃ 110, we find that a (3+1)D expanding universe emerges dynamically with a clear large-N scaling property. Furthermore, the observed increase of the spatial extent with time t at sufficiently late times is consistent with a power-law behavior t^(1/2), which is reminiscent of the expanding behavior of the Friedmann-Robertson-Walker universe in the radiation-dominated era. We discuss possible implications of this result for the original supersymmetric model including fermionic matrices.
Effect of superconducting solenoid model cores on spanwise iron magnet roll control
NASA Technical Reports Server (NTRS)
Britcher, C. P.
1985-01-01
Compared with conventional ferromagnetic fuselage cores, superconducting solenoid cores appear to offer significant reductions in the projected cost of a large wind tunnel magnetic suspension and balance system. The provision of sufficient magnetic roll torque capability has been a long-standing problem with all magnetic suspension and balance systems; and the spanwise iron magnet scheme appears to be the most powerful system available. This scheme utilizes iron cores which are installed in the wings of the model. It was anticipated that the magnetization of these cores, and hence the roll torque generated, would be affected by the powerful external magnetic field of the superconducting solenoid. A preliminary study has been made of the effect of the superconducting solenoid fuselage model core concept on the spanwise iron magnet roll torque generation schemes. Computed data for one representative configuration indicate that reductions in available roll torque occur over a range of applied magnetic field levels. These results indicate that a 30-percent increase in roll electromagnet capacity over that previously determined will be required for a representative 8-foot wind tunnel magnetic suspension and balance system design.
Henderson, Theodore A; Morries, Larry D
2015-01-01
Traumatic brain injury (TBI) is a growing health concern affecting civilians and military personnel. Near-infrared (NIR) light has shown benefits in animal models and human trials for stroke and in animal models for TBI. Diodes emitting low-level NIR often have lacked therapeutic efficacy, perhaps failing to deliver sufficient radiant energy to the necessary depth. In this case report, a patient with moderate TBI documented in anatomical magnetic resonance imaging (MRI) and perfusion single-photon emission computed tomography (SPECT) received 20 NIR treatments in the course of 2 mo using a high-power NIR laser. Symptoms were monitored by clinical examination and a novel patient diary system specifically designed for this patient population. Clinical application of these levels of infrared energy for this patient with TBI yielded highly favorable outcomes with decreased depression, anxiety, headache, and insomnia, whereas cognition and quality of life improved. Neurological function appeared to improve based on changes in the SPECT by quantitative analysis. NIR in the power range of 10-15 W at 810 and 980 nm can safely and effectively treat chronic symptoms of TBI.
Multivariate Welch t-test on distances
Alekseyenko, Alexander V
2016-01-01
Motivation: Permutational non-Euclidean analysis of variance (PERMANOVA) is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that the pairwise distance matrix between observations is sufficient to compute the within- and between-group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean but arbitrary distances can be used. The method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. Results: We develop a solution in the form of a distance-based Welch t-test, TW2, for two-sample, potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology originated. Availability and Implementation: The source code for the methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author. Contact: alekseye@musc.edu PMID:27515741
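For orientation, the pseudo-F that PERMANOVA permutes can be computed directly from the distance matrix, exactly as the abstract describes. The sketch below implements that classical equal-variance statistic (per Anderson's 2001 formulation), not the TW2 statistic itself, whose exact form is given in the paper and repository.

```python
import numpy as np

def pseudo_f(D, labels):
    """PERMANOVA pseudo-F from a pairwise distance matrix D (Anderson 2001)."""
    N = D.shape[0]
    d2 = D**2
    ss_total = d2[np.triu_indices(N, 1)].sum() / N
    ss_within = 0.0
    groups = np.unique(labels)
    for g in groups:
        idx = np.flatnonzero(labels == g)
        ss_within += d2[np.ix_(idx, idx)][np.triu_indices(idx.size, 1)].sum() / idx.size
    a = groups.size
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (N - a))

def permanova_p(D, labels, n_perm=999, seed=0):
    """Permutation p-value: shuffle labels and compare pseudo-F values."""
    rng = np.random.default_rng(seed)
    f_obs = pseudo_f(D, labels)
    hits = sum(pseudo_f(D, rng.permutation(labels)) >= f_obs for _ in range(n_perm))
    return f_obs, (hits + 1) / (n_perm + 1)

# Demo with Euclidean distances on synthetic two-group data:
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1.0, (20, 5)), rng.normal(0.8, 1.0, (15, 5))])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
labels = np.array([0] * 20 + [1] * 15)
print(permanova_p(D, labels))
```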
On Parallelizing Single Dynamic Simulation Using HPC Techniques and APIs of Commercial Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diao, Ruisheng; Jin, Shuangshuang; Howell, Frederic
Time-domain simulations are heavily used in today's planning and operation practices to assess power system transient stability and post-transient voltage/frequency profiles following severe contingencies, in order to comply with industry standards. Because of increased modeling complexity, state-of-the-art commercial packages run several times slower than real time when completing a dynamic simulation of a large-scale model. With the growing stochastic behavior introduced by emerging technologies, the power industry has seen a growing need to perform security assessment in real time. This paper presents a parallel implementation framework to speed up a single dynamic simulation by leveraging the existing stability model library in commercial tools through their application programming interfaces (APIs). Several high-performance computing (HPC) techniques are explored, such as parallelizing the calculation of generator current injection, identifying fast linear solvers for the network solution, and parallelizing data outputs when interacting with the APIs of the commercial package TSAT. The proposed method has been tested on a WECC planning base case with detailed synchronous generator models and exhibits outstanding scalable performance with sufficient accuracy.
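As a schematic of the parallelization pattern, not TSAT's actual API, the per-generator current injections are independent and therefore map cleanly across workers, while the network solution remains a separate serial step. The generator model below is a placeholder (a classical EMF behind a transient impedance), used only to show the structure.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Placeholder generator model (not TSAT's API): injection current from an
# internal EMF behind a transient impedance, I = (E - V_bus) / Z.
def injection(args):
    emf, z, v_bus = args
    return (emf - v_bus) / z

def parallel_injections(emfs, zs, v_bus, workers=4):
    """Map the independent per-generator computations across processes;
    the network solution that follows would remain a serial step."""
    jobs = list(zip(emfs, zs, v_bus))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return np.array(list(pool.map(injection, jobs)))

if __name__ == "__main__":
    n = 1000
    emfs = np.full(n, 1.05 + 0.10j)    # internal EMFs, per-unit
    zs = np.full(n, 0.00 + 0.30j)      # transient impedances
    v_bus = np.full(n, 1.00 + 0.00j)   # terminal bus voltages
    print(parallel_injections(emfs, zs, v_bus)[:3])
```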
Thermoelectric Power Generation System for Future Hybrid Vehicles Using Hot Exhaust Gas
NASA Astrophysics Data System (ADS)
Kim, Sun-Kook; Won, Byeong-Cheol; Rhi, Seok-Ho; Kim, Shi-Ho; Yoo, Jeong-Ho; Jang, Ju-Chan
2011-05-01
The present experimental and computational study investigates a new exhaust gas waste heat recovery system for hybrid vehicles, using thermoelectric modules (TEMs) and heat pipes to produce electric power. It proposes a new thermoelectric generation (TEG) system, working with heat pipes, to produce electricity from a limited hot surface area. The current TEG system is directly connected to the exhaust pipe, and the amount of electricity generated by the TEMs is directly proportional to their heated area. Current exhaust pipes fail to offer a sufficiently large hot surface area for the high-efficiency waste heat recovery required. To overcome this, a new TEG system has been designed with an enlarged hot surface area provided by the addition of ten heat pipes, which act as highly efficient heat transfer devices and can transmit the heat to many TEMs. As designed, this new waste heat recovery system produces a maximum of 350 W when the hot exhaust gas heats the evaporator surface of the heat pipes to 170°C; this promises great possibilities for the application of this technology in future energy-efficient hybrid vehicles.
Spatial Heterogeneities and Onset of Passivation Breakdown at Lithium Anode Interfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, Kevin; Jungjohann, Katherine L.
2017-09-08
Effective passivation of lithium metal surfaces, and prevention of battery-shorting lithium dendrite growth, are critical for implementing lithium metal anodes for batteries with increased power densities. Nanoscale surface heterogeneities can be "hot spots" where anode passivation breaks down. Motivated by the observation of lithium dendrites in pores and grain boundaries in all-solid batteries, we examine lithium metal surfaces covered with Li2O and/or LiF thin films with grain boundaries in them. Electronic structure calculations show that, at >0.25 V computed equilibrium overpotential, Li2O grain boundaries with sufficiently large pores can accommodate Li^0 atoms which aid e^- leakage and passivation breakdown. Strain often accompanies Li insertion; applying an ~1.7% strain already lowers the computed overpotential to 0.1 V. Lithium metal nanostructures as thin as 12 Å are thermodynamically favored inside cracks in Li2O films, becoming "incipient lithium filaments". LiF films are more resistant to lithium metal growth. Finally, the models used herein should in turn inform passivating strategies in all-solid-state batteries.
NASA Astrophysics Data System (ADS)
Shi, X.
2015-12-01
As NSF has indicated, "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and the geosciences. With the exponential growth of geodata, the challenge of scalable and high-performance computing for big data analytics becomes urgent, because many research activities are constrained by software or tools that cannot complete the computation process at all. Heterogeneous geodata integration and analytics obviously magnify the complexity and operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as shown by our prior work, while the potential of such advanced infrastructure remains unexplored in this domain. In this presentation, our prior and on-going initiatives are summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs, and MICs, to accelerate geocomputation in different applications.
"Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2009-01-01
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ching, Wai-Yim
2014-12-31
Advanced materials with applications in extreme conditions such as high temperature, high pressure, and corrosive environments play a critical role in the development of new technologies to significantly improve the performance of different types of power plants. Materials that are currently employed in fossil energy conversion systems are typically Ni-based alloys and stainless steels that have already reached their ultimate performance limits. Incremental improvements are unlikely to meet the more stringent requirements aimed at increased efficiency and reduced risk while addressing environmental concerns and keeping costs low. Computational studies can lead the way in the search for novel materials, or for significant improvements in existing materials, that can meet such requirements. Detailed computational studies with sufficient predictive power can provide an atomistic-level understanding of the key characteristics that lead to desirable properties. This project focuses on the comprehensive study of a new class of materials called MAX phases, or Mn+1AXn (M = a transition metal, A = Al or other group III, IV, and V elements, X = C or N). The MAX phases are layered transition metal carbides or nitrides with a rare combination of metallic and ceramic properties. Due to their unique structural arrangements and special types of bonding, these thermodynamically stable alloys possess some of the most outstanding properties. We used a genomic approach in screening a large number of potential MAX phases and established a database of 665 viable MAX compounds covering their structure and mechanical and electronic properties, and investigated the correlations between them. This database is then used as a tool for materials informatics for further exploration of this class of intermetallic compounds.
Efficient Sample Delay Calculation for 2-D and 3-D Ultrasound Imaging.
Ibrahim, Aya; Hager, Pascal A; Bartolini, Andrea; Angiolini, Federico; Arditi, Marcel; Thiran, Jean-Philippe; Benini, Luca; De Micheli, Giovanni
2017-08-01
Ultrasound imaging is a reference medical diagnostic technique, thanks to its blend of versatility, effectiveness, and moderate cost. The core computation of all ultrasound imaging methods is based on simple formulae, except for those required to calculate acoustic propagation delays with high precision and throughput. Unfortunately, advanced three-dimensional (3-D) systems require the calculation or storage of billions of such delay values per frame, which is a challenge. In 2-D systems, this requirement can be four orders of magnitude lower, but efficient computation is still crucial in view of low-power implementations that can be battery-operated, enabling usage in numerous additional scenarios. In this paper, we explore two smart designs of the delay generation function. To quantify their hardware cost, we implement them on FPGA and study their footprint and performance. We evaluate how these architectures scale to different ultrasound applications, from a low-power 2-D system to a next-generation 3-D machine. When using numerical approximations, we demonstrate the ability to generate delay values with sufficient throughput to support 10 000-channel 3-D imaging at up to 30 fps while using 63% of a Virtex 7 FPGA, requiring 24 MB of external memory accessed at about 32 GB/s bandwidth. Alternatively, with similar FPGA occupation, we show an exact calculation method that reaches 24 fps on 1225-channel 3-D imaging and does not require external memory at all. Both designs can be scaled to use a negligible amount of resources for 2-D imaging in low-power applications and for ultrafast 2-D imaging at hundreds of frames per second.
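The simple formulae in question are path-length-over-sound-speed delays. A minimal sketch of receive focusing delays for a 1225-element array follows; the array geometry, pitch, and sound speed are assumed illustrative values, not the paper's hardware parameters.

```python
import numpy as np

SOUND_SPEED = 1540.0  # m/s, a typical soft-tissue value

def rx_delays(elements, focus, c=SOUND_SPEED):
    """Receive focusing delays: element-to-focus path length over sound speed,
    referenced to the earliest-arriving element. elements: (N, 3) array, m."""
    dist = np.linalg.norm(elements - focus, axis=1)
    tau = dist / c
    return tau - tau.min()

# 1225-channel square array (35 x 35) on an assumed 300-um pitch.
pitch, side = 300e-6, 35
xs = (np.arange(side) - (side - 1) / 2) * pitch
gx, gy = np.meshgrid(xs, xs)
elements = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(side * side)])

tau = rx_delays(elements, focus=np.array([0.0, 0.0, 0.03]))  # focal point 3 cm deep
print(tau.size, tau.max())   # one delay per channel; spread of a few hundred ns
```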
Proposal for grid computing for nuclear applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.
2014-02-12
The use of computer clusters for the computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.
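A minimal illustration of why Monte Carlo workloads spread naturally over grid nodes: independent tasks with private random streams, reduced at the end. The pi estimate below is a stand-in for the physics workload, and a local process pool stands in for grid nodes.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def count_hits(args):
    """One independent Monte Carlo task: count random points inside the unit circle."""
    n, seed = args
    rng = random.Random(seed)                 # private stream per task
    return sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))

if __name__ == "__main__":
    n_per_task, tasks = 1_000_000, 8          # each task could run on a grid node
    with ProcessPoolExecutor() as pool:
        hits = sum(pool.map(count_hits, [(n_per_task, s) for s in range(tasks)]))
    print("pi ~", 4.0 * hits / (n_per_task * tasks))
```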
Cross-flow turbines: physical and numerical model studies towards improved array simulations
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2015-12-01
Cross-flow, or vertical-axis, turbines show potential in marine hydrokinetic (MHK) and wind energy applications. As turbine designs mature, the research focus is shifting from individual devices towards improving turbine array layouts to maximize overall power output, i.e., minimizing wake interference for axial-flow turbines, or taking advantage of constructive wake interaction for cross-flow turbines. Numerical simulations are generally better suited to explore the turbine array design parameter space, as physical model studies of large arrays at large model scale would be expensive. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries, the turbines' interaction with the energy resource needs to be parameterized, or modeled. Most models in use today, e.g., the actuator disk, are not able to predict the unique wake structure generated by cross-flow turbines. Experiments were carried out using a high-resolution turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to its standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free surface effects on submerged MHK devices. An additional sub-model is considered for injecting turbulence model scalar quantities based on actuator line element loading. Results are presented for the simulation of performance and wake dynamics of axial- and cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET grant 1150797.
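The blade-element core of an ALM computes per-element lift and drag from tabulated coefficients and the local relative velocity. The 2-D sketch below is schematic only (the actual implementation is an OpenFOAM extension library), and the flat-plate-like coefficient tables are stand-ins for real foil data.

```python
import numpy as np

def element_force(rho, u_rel, chord, span, alpha_deg, cl_table, cd_table):
    """Blade-element force on one actuator line element (2-D case).
    cl_table/cd_table: callables mapping angle of attack (deg) -> coefficient."""
    q = 0.5 * rho * np.dot(u_rel, u_rel) * chord * span   # dynamic pressure times area
    cl, cd = cl_table(alpha_deg), cd_table(alpha_deg)
    drag_dir = u_rel / np.linalg.norm(u_rel)              # along the relative velocity
    lift_dir = np.array([-drag_dir[1], drag_dir[0]])      # perpendicular to it
    return q * (cl * lift_dir + cd * drag_dir)

# Illustrative thin-airfoil-like tables (stand-ins for measured foil data):
cl = lambda a: 2.0 * np.pi * np.radians(a)
cd = lambda a: 0.01 + 0.05 * np.radians(a)**2

f = element_force(1000.0, np.array([1.0, 0.3]), 0.14, 0.1, 8.0, cl, cd)
print(f)   # force vector (N) this element would inject into the flow solver
```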
46 CFR 58.05-5 - Astern power.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 2 2010-10-01 2010-10-01 false Astern power. 58.05-5 Section 58.05-5 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING MAIN AND AUXILIARY MACHINERY AND RELATED SYSTEMS Main Propulsion Machinery § 58.05-5 Astern power. (a) All vessels shall have sufficient...
46 CFR 58.05-5 - Astern power.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Astern power. 58.05-5 Section 58.05-5 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING MAIN AND AUXILIARY MACHINERY AND RELATED SYSTEMS Main Propulsion Machinery § 58.05-5 Astern power. (a) All vessels shall have sufficient...
46 CFR 58.05-5 - Astern power.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 2 2011-10-01 2011-10-01 false Astern power. 58.05-5 Section 58.05-5 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING MAIN AND AUXILIARY MACHINERY AND RELATED SYSTEMS Main Propulsion Machinery § 58.05-5 Astern power. (a) All vessels shall have sufficient...
46 CFR 58.05-5 - Astern power.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Astern power. 58.05-5 Section 58.05-5 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING MAIN AND AUXILIARY MACHINERY AND RELATED SYSTEMS Main Propulsion Machinery § 58.05-5 Astern power. (a) All vessels shall have sufficient...
46 CFR 58.05-5 - Astern power.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Astern power. 58.05-5 Section 58.05-5 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING MAIN AND AUXILIARY MACHINERY AND RELATED SYSTEMS Main Propulsion Machinery § 58.05-5 Astern power. (a) All vessels shall have sufficient...
Space Station Power Generation in Support of the Beta Gimbal Anomaly Resolution
NASA Technical Reports Server (NTRS)
Delleur, Ann M.; Propp, Timothy W.
2003-01-01
The International Space Station (ISS) is the largest and most complex spacecraft ever assembled and operated in orbit. The first U.S. photovoltaic (PV) module, containing two solar arrays, was launched, installed, and activated in early December 2000. After the first week of continuously rotating the U.S. solar arrays, engineering personnel in the ISS Mission Evaluation Room (MER) observed higher than expected electrical currents on the drive motor in one of the Beta Gimbal Assemblies (BGA), the mechanism used to maneuver a U.S. solar array. The magnitude of the motor currents continued to increase over time on both BGAs, creating concerns about the ability of the gimbals to continue pointing the solar arrays towards the sun, a function critical for continued assembly of the ISS. A number of engineering disciplines convened in May 2001 to address this on-orbit hardware anomaly. This paper reviews the ISS electrical power system (EPS) analyses performed to develop viable operational workarounds that would minimize BGA use while maintaining sufficient solar array power to continue assembly of the ISS. Additionally, EPS analyses performed in support of on-orbit BGA troubleshooting exercises are reviewed. EPS capability analyses were performed using SPACE, a computer code developed by the NASA Glenn Research Center (GRC) for the ISS program office.
NASA Astrophysics Data System (ADS)
Luo, Chang; Wang, Jie; Feng, Gang; Xu, Suhui; Wang, Shiqiang
2017-10-01
Deep convolutional neural networks (CNNs) have been widely used to obtain high-level representations in various computer vision tasks. However, for remote scene classification, there are not sufficient images to train a very deep CNN from scratch. From two viewpoints on generalization power, we propose two promising kinds of deep CNNs for remote scenes and try to determine whether deep CNNs need to be deep for remote scene classification. First, we transfer successful pretrained deep CNNs to remote scenes, based on the theory that the depth of CNNs brings generalization power by learning available hypotheses for finite data samples. Second, according to the opposite viewpoint, that the generalization power of deep CNNs comes from massive memorization and that shallow CNNs with enough neural nodes have perfect finite-sample expressivity, we design a lightweight deep CNN (LDCNN) for remote scene classification. With five well-known pretrained deep CNNs, experimental results on two independent remote-sensing datasets demonstrate that transferred deep CNNs can achieve state-of-the-art results in an unsupervised setting. However, because of its shallow architecture, LDCNN cannot obtain satisfactory performance, regardless of whether it is used in an unsupervised, semisupervised, or supervised setting. CNNs really do need depth to obtain general features for remote scenes. This paper also provides a baseline for applying deep CNNs to other remote sensing tasks.
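A minimal sketch of the transfer approach: a pretrained CNN used as a fixed feature extractor for remote-scene images. The model choice (ResNet-18) and extraction layer are illustrative assumptions, not the networks evaluated in the paper.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained CNN as a fixed feature extractor for remote-scene images.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Identity()        # drop the ImageNet classifier head
model.eval()

# Standard ImageNet preprocessing; apply to real PIL images in practice.
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed scene image
    features = model(x)               # 512-dim descriptor for a downstream classifier
print(features.shape)                 # torch.Size([1, 512])
```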
The War Powers Resolution: Intent Implementation and Impact
1993-04-01
... separation of powers, the authority as Commander-in-Chief is also specifically delegated to the President. The clear intent of the founders of our nation was ... separation of powers spelled out, in sufficient detail they thought, so that there would be little or no ambiguity over who exercised what powers ... As discussed in the opening paragraphs of this paper, the founding fathers intentionally delegated 20 separate powers ...
NASA Astrophysics Data System (ADS)
Alves, A. F.; Pina, D. R.; Bacchim Neto, F. A.; Ribeiro, S. M.; Miranda, J. R. A.
2014-03-01
Our main purpose in this study was to quantify biological tissue in computed tomography (CT) examinations, with the aim of developing a skull and a chest patient equivalent phantom (PEP), both specific to infants aged between 1 and 5 years. This type of phantom is widely used in the development of optimization procedures for radiographic techniques, especially in computed radiography (CR) systems. In order to classify and quantify the biological tissue, we used a computational algorithm developed in Matlab®. The algorithm computed a histogram of each CT slice followed by a Gaussian fit for each tissue type. The algorithm determined the mean thickness of the biological tissues (bone, soft, fat, and lung) and also converted them into the corresponding thicknesses of the simulator materials (aluminum, PMMA, and air). We retrospectively analyzed 148 CT examinations of infant patients, 56 skull exams and 92 chest exams. The results provided sufficient data to construct a phantom to simulate the infant chest and skull in the posterior-anterior or anterior-posterior (PA/AP) view. Both patient equivalent phantoms developed in this study can be used to assess physical quantities such as the noise power spectrum (NPS) and signal-to-noise ratio (SNR), or to perform dosimetric control specific to pediatric protocols.
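A minimal sketch of the described pipeline, histogramming a slice's Hounsfield values and Gaussian-fitting a tissue peak. The HU windows used here are typical textbook ranges, not the paper's values, and the slice data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_tissue_peak(hu_values, window):
    """Histogram a CT slice (HU) and fit a Gaussian inside a tissue window.
    Windows such as (-200, -30) for fat or (20, 80) for soft tissue are
    typical textbook ranges, not the values used in the paper."""
    lo, hi = window
    counts, edges = np.histogram(hu_values, bins=80, range=(lo, hi))
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), centers[np.argmax(counts)], (hi - lo) / 6]
    (amp, mu, sigma), _ = curve_fit(gaussian, centers, counts, p0=p0)
    return mu, abs(sigma)

# Synthetic slice: mixture of fat-like and soft-tissue-like voxels.
rng = np.random.default_rng(1)
slice_hu = np.concatenate([rng.normal(-90, 20, 30000), rng.normal(40, 12, 50000)])
print(fit_tissue_peak(slice_hu, (-200, -30)))   # fat peak near -90 HU
print(fit_tissue_peak(slice_hu, (20, 80)))      # soft-tissue peak near 40 HU
```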
Smartphones as image processing systems for prosthetic vision.
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J
2013-01-01
The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing, and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.
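A minimal timing sketch of the kind of on-device measurement described, using OpenCV's stock Haar cascade; the frame here is a synthetic stand-in for a camera grab, and the handset-specific benchmarking harness is not reproduced.

```python
import time
import numpy as np
import cv2

# Stock frontal-face cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in frame

t0 = time.perf_counter()
faces = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)
dt_ms = (time.perf_counter() - t0) * 1000
print(f"{len(faces)} face(s) detected in {dt_ms:.1f} ms")
```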
Carmichael, Clare; Carmichael, Patrick
2014-01-01
This paper highlights aspects related to current research and thinking about ethical issues in relation to Brain Computer Interface (BCI) and Brain-Neuronal Computer Interfaces (BNCI) research through the experience of one particular project, BrainAble, which is exploring and developing the potential of these technologies to enable people with complex disabilities to control computers. It describes how ethical practice has been developed both within the multidisciplinary research team and with participants. The paper presents findings in which participants shared their views of the project prototypes, of the potential of BCI/BNCI systems as an assistive technology, and of their other possible applications. This draws attention to the importance of ethical practice in projects where high expectations of technologies, and representations of "ideal types" of disabled users may reinforce stereotypes or drown out participant "voices". Ethical frameworks for research and development in emergent areas such as BCI/BNCI systems should be based on broad notions of a "duty of care" while being sufficiently flexible that researchers can adapt project procedures according to participant needs. They need to be frequently revisited, not only in the light of experience, but also to ensure they reflect new research findings and ever more complex and powerful technologies.
The space station tethered elevator system
NASA Technical Reports Server (NTRS)
Anderson, Loren A.
1989-01-01
The optimized conceptual engineering design of a space station tethered elevator is presented. The elevator is an unmanned mobile structure which operates on a ten kilometer tether spanning the distance between the Space Station and a tethered platform. Elevator capabilities include providing access to residual gravity levels, remote servicing, and transportation to any point along a tether. The potential uses, parameters, and evolution of the spacecraft design are discussed. Engineering development of the tethered elevator is the result of work conducted in the following areas: structural configurations; robotics, drive mechanisms; and power generation and transmission systems. The structural configuration of the elevator is presented. The structure supports, houses, and protects all systems on board the elevator. The implementation of robotics on board the elevator is discussed. Elevator robotics allow for the deployment, retrieval, and manipulation of tethered objects. Robotic manipulators also aid in hooking the elevator on a tether. Critical to the operation of the tethered elevator is the design of its drive mechanisms, which are discussed. Two drivers, located internal to the elevator, propel the vehicle along a tether. These modular components consist of endless toothed belts, shunt-wound motors, regenerative power braking, and computer controlled linear actuators. The designs of self-sufficient power generation and transmission systems are reviewed. Thorough research indicates all components of the elevator will operate under power provided by fuel cells. The fuel cell systems will power the vehicle at seven kilowatts continuously and twelve kilowatts maximally. A set of secondary fuel cells provides redundancy in the unlikely event of a primary system failure. Power storage exists in the form of Nickel-Hydrogen batteries capable of powering the elevator under maximum loads.
Integrated Surface Power Strategy for Mars
NASA Technical Reports Server (NTRS)
Rucker, Michelle
2015-01-01
A National Aeronautics and Space Administration (NASA) study team evaluated surface power needs for a conceptual crewed 500-day Mars mission. This study had four goals: 1. Determine the estimated surface power needed to support the reference mission; 2. Explore alternatives to minimize landed power system mass; 3. Explore alternatives to minimize the Mars Lander power self-sufficiency burden; and 4. Explore alternatives to minimize power system handling and surface transportation mass. The study team concluded that Mars Ascent Vehicle (MAV) oxygen propellant production drives the overall surface power needed for the reference mission. Switching to multiple, small Kilopower fission systems can potentially save four to eight metric tons of landed mass, as compared to a single, large Fission Surface Power (FSP) concept. Breaking the power system up into modular packages creates new operational opportunities, with benefits ranging from reduced lander self-sufficiency for power, to extending the exploration distance from a single landing site. Although a large FSP trades well for operational complexity, a modular approach potentially allows Program Managers more flexibility to absorb late mission changes with less schedule or mass risk, better supports small precursor missions, and allows a program to slowly build up mission capability over time. A number of Kilopower disadvantages, and mitigation strategies for them, were also explored.
HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation
NASA Technical Reports Server (NTRS)
Sterling, Thomas; Bergman, Larry
2000-01-01
Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithm techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology MultiThreaded (HTMT) parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per-fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bisection bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops-scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.
Computer Power: Part 1: Distribution of Power (and Communications).
ERIC Educational Resources Information Center
Price, Bennett J.
1988-01-01
Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
Levanon, Yafa; Lerman, Yehuda; Gefen, Amit; Ratzon, Navah Z
2014-01-01
Awkward body posture while typing is associated with musculoskeletal disorders (MSDs). Valid rapid assessment of computer workers' body posture is essential for the prevention of MSD among this large population. This study aimed to examine the validity of the modified rapid upper limb assessment (mRULA) which adjusted the rapid upper limb assessment (RULA) for computer workers. Moreover, this study examines whether one observation during a working day is sufficient or more observations are needed. A total of 29 right-handed computer workers were recruited. RULA and mRULA were conducted. The observations were then repeated six times at one-hour intervals. A significant moderate correlation (r = 0.6 and r = 0.7 for mouse and keyboard, respectively) was found between the assessments. No significant differences were found between one observation and six observations per working day. The mRULA was found to be valid for the assessment of computer workers, and one observation was sufficient to assess the work-related risk factor.
ERIC Educational Resources Information Center
Byers, Joseph W.
1991-01-01
The most useful feature of laptop computers is portability, as one elementary school principal notes. IBM and Apple are not leaders in laptop technology. Tandy and Toshiba market relatively inexpensive models offering durability, reliable software, and sufficient memory space. (MLH)
Fundamentals of computer graphics for artists and designers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riley, B.A.
1986-01-01
This tutorial provides introductory information about computer graphics slanted towards novice users from artist/designer backgrounds. The goal is to describe the applications and terminology sufficiently to provide a base of knowledge for discussions with vendors.
Experimental Validation of a Closed Brayton Cycle System Transient Simulation
NASA Technical Reports Server (NTRS)
Johnson, Paul K.; Hervol, David S.
2006-01-01
The Brayton Power Conversion Unit (BPCU) is a closed-cycle system with an inert gas working fluid, located in Vacuum Facility 6 at NASA Glenn Research Center. It was used in previous solar dynamic ground test demonstration (SDGTD) efforts and was modified to its present configuration by replacing the solar receiver with an electrical resistance heater. It was the first closed Brayton cycle to be coupled with an ion propulsion system, and it was used to examine mechanical dynamic characteristics and responses. The focus of this work was the validation of a computer model of the BPCU. The model was built using the Closed Cycle System Simulation (CCSS) design and analysis tool. Test conditions, including various steady-state points and transients involving changes in shaft rotational speed and heat input, were then duplicated in CCSS. Testing to date has shown that the BPCU is able to generate meaningful, repeatable data that can be used for computer model validation. Results generated by CCSS demonstrated that the model sufficiently reproduced the thermal transients exhibited by the BPCU system. CCSS was also used to match BPCU steady-state operating points. Cycle temperatures were within 4.1% of the data (most were within 1%). Cycle pressures were all within 3.2%. Error in alternator power (as much as 13.5%) was attributed to uncertainties in the compressor and turbine maps and in the alternator and bearing loss models. The acquired understanding of BPCU behavior gives useful insight for improvements to be made to the CCSS model, as well as ideas for future testing and possible system modifications.
ASR4: A computer code for fitting and processing 4-gage anelastic strain recovery data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warpinski, N.R.
A computer code for analyzing four-gage Anelastic Strain Recovery (ASR) data has been modified for use on a personal computer. This code fits the viscoelastic model of Warpinski and Teufel to measured ASR data, calculates the stress orientation directly, and computes stress magnitudes if sufficient input data are available. The code also calculates the stress orientation using strain-rosette equations, and it calculates stress magnitudes using Blanton's approach, assuming sufficient input data are available. The program is written in FORTRAN, compiled with Ryan-McFarland Version 2.4. Graphics use PLOT88 software by Plotworks, Inc., but the graphics software must be obtained by the user because of licensing restrictions. A version without graphics can also be run. This code is available through the National Energy Software Center (NESC), operated by Argonne National Laboratory. 5 refs., 3 figs.
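The fitting step can be sketched with a generic single-relaxation recovery curve; this illustrative form and its synthetic gage data stand in for the Warpinski-Teufel viscoelastic model, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic anelastic-recovery form (illustrative, not the Warpinski-Teufel model):
#   eps(t) = eps_inf * (1 - exp(-t / tau))
def recovery(t, eps_inf, tau):
    return eps_inf * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 48.0, 25)                  # hours since core retrieval
rng = np.random.default_rng(2)
eps = recovery(t, 120e-6, 9.0) + rng.normal(0.0, 2e-6, t.size)  # synthetic gage data

(eps_inf, tau), _ = curve_fit(recovery, t, eps, p0=[100e-6, 5.0])
print(f"eps_inf = {eps_inf * 1e6:.1f} microstrain, tau = {tau:.1f} h")
```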
An Integrated Circuit for Radio Astronomy Correlators Supporting Large Arrays of Antennas
NASA Technical Reports Server (NTRS)
D'Addario, Larry R.; Wang, Douglas
2016-01-01
Radio telescopes that employ arrays of many antennas are in operation, and ever larger ones are being designed and proposed. Signals from the antennas are combined by cross-correlation. While the cost of most components of the telescope is proportional to the number of antennas N, the cost and power consumption of cross-correlation are proportional to N^2 and dominate at sufficiently large N. Here we report the design of an integrated circuit (IC) that performs digital cross-correlations for arbitrarily many antennas in a power-efficient way. It uses an intrinsically low-power architecture in which the movement of data between devices is minimized. In a large system, each IC performs correlations for all pairs of antennas but for a portion of the telescope's bandwidth (the so-called "FX" structure). In our design, the correlations are performed in an array of 4096 complex multiply-accumulate (CMAC) units. This is sufficient to perform all correlations in parallel for 64 signals (N = 32 antennas with 2 opposite-polarization signals per antenna). When N is larger, the input data are buffered in an on-chip memory and the CMACs are re-used as many times as needed to compute all correlations. The design has been synthesized and simulated so as to obtain accurate estimates of the IC's size and power consumption. It is intended for fabrication in a 32 nm silicon-on-insulator process, where it will require less than 12 mm^2 of silicon area and achieve an energy efficiency of 1.76 to 3.3 pJ per CMAC operation, depending on the number of antennas. Operation has been analyzed in detail up to N = 4096. The system-level energy efficiency, including board-level I/O, power supplies, and controls, is expected to be 5 to 7 pJ per CMAC operation. Existing correlators for the JVLA (N = 32) and ALMA (N = 64) telescopes achieve about 5000 pJ and 1000 pJ respectively, using application-specific ICs in older technologies. To our knowledge, the largest-N existing correlator is LEDA at N = 256; it uses GPUs built in 28 nm technology and achieves about 1000 pJ. Correlators being designed for the SKA telescopes (N = 128 and N = 512) using FPGAs in 16 nm technology are predicted to achieve about 100 pJ.
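The arithmetic core of an FX correlator is, per frequency channel, an accumulation of outer products of the antenna voltage vector. The sketch below is a software stand-in for the CMAC array, with assumed array shapes; it shows the structure of the computation, not the IC's implementation.

```python
import numpy as np

def fx_correlate(x):
    """x: complex channelized voltages, shape (n_time, n_chan, n_signal).
    Returns visibilities R[c] = sum_t x_t x_t^H per frequency channel,
    i.e. all n_signal*(n_signal+1)/2 correlation pairs at once."""
    n_time, n_chan, n_sig = x.shape
    R = np.zeros((n_chan, n_sig, n_sig), dtype=np.complex128)
    for t in range(n_time):
        for c in range(n_chan):
            v = x[t, c]
            R[c] += np.outer(v, v.conj())   # one CMAC per matrix entry
    return R

rng = np.random.default_rng(3)
x = rng.standard_normal((100, 4, 64)) + 1j * rng.standard_normal((100, 4, 64))
R = fx_correlate(x)   # 64 signals = 32 dual-polarization antennas, as in the design
print(R.shape)        # (4, 64, 64): one correlation matrix per frequency channel
```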
1982-12-01
Notation: d_j, estimate of the desired signal; DEL, sampling time interval; DS, direct sequence; e, sufficient statistic; E/T, signal power; Erfc, complementary error function... Namely, a white Gaussian noise (WGN) generator was added. Also, a statistical subroutine was added in order to assess performance improvement at the... reference code, and then passed through a correlation detector whose output is the sufficient statistic, e. Using a threshold device and the sufficient...
Wearable computer for mobile augmented-reality-based controlling of an intelligent robot
NASA Astrophysics Data System (ADS)
Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino
2000-10-01
An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or in conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans is needed. This requires means for controlling the robot from somewhere else, i.e., teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects with the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated the existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput, and enough interfaces for peripherals has been built at the University of Oulu. It is self-sufficient in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.
The Power of Instructions: Proactive Configuration of Stimulus-Response Translation
ERIC Educational Resources Information Center
Meiran, Nachshon; Pereg, Maayan; Kessler, Yoav; Cole, Michael W.; Braver, Todd S.
2015-01-01
Humans are characterized by an especially highly developed ability to use instructions to prepare toward upcoming events; yet, it is unclear just how powerful instructions can be. Although prior work provides evidence that instructions can be sufficiently powerful to proactively program working memory to execute stimulus-response (S-R)…
18 CFR 12.43 - Power and communication lines and gas pipelines.
Code of Federal Regulations, 2013 CFR
2013-04-01
... reasonable specifications that may be provided by the Regional Engineer, to ensure that any power or... waters must be at least sufficient to conform to any applicable requirements of the National Electrical... Engineer may require a licensee or applicant to provide signs at or near power or communication lines to...
18 CFR 12.43 - Power and communication lines and gas pipelines.
Code of Federal Regulations, 2012 CFR
2012-04-01
... reasonable specifications that may be provided by the Regional Engineer, to ensure that any power or... waters must be at least sufficient to conform to any applicable requirements of the National Electrical... Engineer may require a licensee or applicant to provide signs at or near power or communication lines to...
18 CFR 12.43 - Power and communication lines and gas pipelines.
Code of Federal Regulations, 2014 CFR
2014-04-01
... reasonable specifications that may be provided by the Regional Engineer, to ensure that any power or... waters must be at least sufficient to conform to any applicable requirements of the National Electrical... Engineer may require a licensee or applicant to provide signs at or near power or communication lines to...
18 CFR 12.43 - Power and communication lines and gas pipelines.
Code of Federal Regulations, 2011 CFR
2011-04-01
... reasonable specifications that may be provided by the Regional Engineer, to ensure that any power or... waters must be at least sufficient to conform to any applicable requirements of the National Electrical... Engineer may require a licensee or applicant to provide signs at or near power or communication lines to...
18 CFR 12.43 - Power and communication lines and gas pipelines.
Code of Federal Regulations, 2010 CFR
2010-04-01
... reasonable specifications that may be provided by the Regional Engineer, to ensure that any power or... waters must be at least sufficient to conform to any applicable requirements of the National Electrical... Engineer may require a licensee or applicant to provide signs at or near power or communication lines to...
48 CFR 28.101-3 - Authority of an attorney-in-fact for a bid bond.
Code of Federal Regulations, 2010 CFR
2010-10-01
... responsiveness; and (2) Treat questions regarding the authenticity and enforceability of the power of attorney at..., or a photocopy or facsimile of an original, power of attorney is sufficient evidence of such... and dates on the power of attorney shall be considered original signatures, seals and dates, without...
7 CFR 1717.306 - RUS required rates.
Code of Federal Regulations, 2011 CFR
2011-01-01
...-emption in Rate Making in Connection With Power Supply Borrowers § 1717.306 RUS required rates. (a) Upon... of RUS that are sufficient to satisfy the requirements of the RUS wholesale power contract and other... with terms of the RUS wholesale power contract and other RUS documents in a timely fashion, RUS may...
7 CFR 1717.306 - RUS required rates.
Code of Federal Regulations, 2010 CFR
2010-01-01
...-emption in Rate Making in Connection With Power Supply Borrowers § 1717.306 RUS required rates. (a) Upon... of RUS that are sufficient to satisfy the requirements of the RUS wholesale power contract and other... with terms of the RUS wholesale power contract and other RUS documents in a timely fashion, RUS may...
Telecommunications equipment power supply in the Arctic by means of solar panels
NASA Astrophysics Data System (ADS)
Terekhin, Vladimir; Lagunov, Alexey
2016-09-01
Development of the Arctic region is one of the priorities of the Russian Federation. Among other things, a reliable telecommunications infrastructure in the Arctic is required. Petrol and diesel generators are traditionally employed, but their use has a considerable environmental impact. Solar panels can be used as an alternative power source. The electricity generated is sufficient to supply small-sized telecommunications equipment with a total power of over 80 watts. An installation consisting of solar modules, a charge controller, batteries, an inverter, and a load was designed. Tests were conducted at Cape Desire on Novaya Zemlya. The solar panels provided in excess of 80 W from 7 a.m. to 11 p.m. The battery charge accumulated during this time was sufficient to power the communication equipment during the night, from 11 p.m. to 7 a.m. The maximum generated power, 638 W, was observed at 3 p.m.; the minimum, 46 W, at 4 a.m. The solar modules can thus be used during the polar day to power the telecommunications equipment.
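The overnight energy bookkeeping implied by these figures is simple. The sketch below uses the abstract's 80 W load and eight-hour night; the battery voltage, depth of discharge, and inverter efficiency are assumed values, not reported parameters.

```python
# Overnight energy budget from the figures in the abstract.
load_w = 80.0
night_h = 8.0                      # 11 p.m. to 7 a.m.
energy_wh = load_w * night_h       # 640 Wh to ride through the night

# Battery sizing under assumed (not reported) parameters:
system_v = 12.0
depth_of_discharge = 0.5           # conservative, e.g. for lead-acid chemistry
inverter_eff = 0.9
capacity_ah = energy_wh / (system_v * depth_of_discharge * inverter_eff)
print(f"{energy_wh:.0f} Wh -> ~{capacity_ah:.0f} Ah at {system_v:.0f} V")
```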
NASA Astrophysics Data System (ADS)
Miara, A.; Macknick, J.; Vorosmarty, C. J.; Corsi, F.; Fekete, B. M.; Newmark, R. L.; Tidwell, V. C.; Cohen, S. M.
2016-12-01
Thermoelectric plants supply 85% of electricity generation in the United States. Under a warming climate, the performance of these power plants may be reduced, as thermoelectric generation is dependent upon cool ambient temperatures and sufficient water supplies at adequate temperatures. In this study, we assess the vulnerability and reliability of 1,100 operational power plants (2015) across the contiguous United States under a comprehensive set of climate scenarios (five Global Circulation Models each with four Representative Concentration Pathways). We model individual power plant capacities using the Thermoelectric Power and Thermal Pollution model (TP2M) coupled with the Water Balance Model (WBM) at a daily temporal resolution and 5x5 km spatial resolution. Together, these models calculate power plant capacity losses that account for geophysical constraints and river network dynamics. Potential losses at the single-plant level are put into a regional energy security context by assessing the collective system-level reliability at the North-American Electricity Reliability Corporation (NERC) regions. Results show that the thermoelectric sector at the national level has low vulnerability under the contemporary climate and that system-level reliability in terms of available thermoelectric resources relative to thermoelectric demand is sufficient. Under future climates scenarios, changes in water availability and warm ambient temperatures lead to constraints on operational capacity and increased vulnerability at individual power plant sites across all regions in the United States. However, there is a strong disparity in regional vulnerability trends and magnitudes that arise from each region's climate, hydrology and technology mix. Despite increases in vulnerabilities at the individual power plant level, regional energy systems may still be reliable (with no system failures) due to sufficient back-up reserve capacities.
Dynamical AdS strings across horizons
Ishii, Takaaki; Murata, Keiju
2016-03-01
We examine the nonlinear classical dynamics of a fundamental string in anti-de Sitter spacetime. The string is dual to the flux tube between an external quark-antiquark pair in $N = 4$ super Yang-Mills theory. We perturb the string by shaking the endpoints and compute its time evolution numerically. We find that with sufficiently strong perturbations the string continues extending and plunges into the Poincaré horizon. In the evolution, effective horizons are also dynamically created on the string worldsheet. The quark and antiquark are thus causally disconnected, and the string transitions to two straight strings. The forces acting on the endpoints vanish with a power law whose slope depends on the perturbations. Lastly, the condition for this transition to occur is that the energy injection exceeds the static energy between the quark-antiquark pair.
A macro-micro robot for precise force applications
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Wang, Yulun
1993-01-01
This paper describes an 8 degree-of-freedom macro-micro robot capable of performing tasks which require accurate force control. Applications such as polishing, finishing, grinding, deburring, and cleaning are a few examples of tasks which need this capability. Currently these tasks are either performed manually or with dedicated machinery because of the lack of a flexible and cost effective tool, such as a programmable force-controlled robot. The basic design and control of the macro-micro robot is described in this paper. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8 degree of freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load balanced for maximum execution speed on the multiprocessor system.
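The impedance control method mentioned above makes the robot tip behave like a virtual mass-spring-damper, so contact forces are absorbed compliantly. A minimal one-axis sketch of the idea follows; the virtual parameters, time step, and contact force are illustrative and are not the gains used in the paper:

    # One-dimensional impedance control sketch: the controlled tip obeys
    # M*a + B*v + K*(x - x_d) = F_ext, i.e. it responds to contact forces
    # like a virtual mass-spring-damper. All parameters are illustrative.
    M, B, K = 1.0, 20.0, 100.0   # virtual mass [kg], damping [Ns/m], stiffness [N/m]
    x_d = 0.0                    # desired tip position [m]
    dt = 0.001                   # control period [s]

    x, v = 0.0, 0.0              # tip position and velocity
    for step in range(2000):
        f_ext = 5.0 if step > 200 else 0.0       # external contact force [N]
        a = (f_ext - B * v - K * (x - x_d)) / M  # acceleration from target impedance
        v += a * dt
        x += v * dt

    # In steady contact the deflection approaches f_ext / K.
    print(f"steady-state deflection: {x:.4f} m (expected f/K = {5.0 / 100.0} m)")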
Delensing CMB polarization with external datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kendrick M.; Hanson, Duncan; LoVerde, Marilena
2012-06-01
One of the primary scientific targets of current and future CMB polarization experiments is the search for a stochastic background of gravity waves in the early universe. As instrumental sensitivity improves, the limiting factor will eventually be B-mode power generated by gravitational lensing, which can be removed through use of so-called "delensing" algorithms. We forecast prospects for delensing using lensing maps which are obtained externally to CMB polarization: either from large-scale structure observations, or from high-resolution maps of CMB temperature. We conclude that the forecasts in either case are not encouraging, and that significantly delensing large-scale CMB polarization requires high-resolution polarization maps with sufficient sensitivity to measure the lensing B-mode. We also present a simple formalism for including delensing in CMB forecasts which is computationally fast and agrees well with Monte Carlos.
NASA Astrophysics Data System (ADS)
Kandouci, Chahinaz; Djebbari, Ali
2018-04-01
A new family of two-dimensional optical hybrid codes is used in this paper, employing zero cross-correlation (ZCC) codes, constructed by the balanced incomplete block design (BIBD), as both the time-spreading and wavelength-hopping patterns. The obtained codes have off-peak autocorrelation and cross-correlation values equal to zero and unity, respectively. The work in this paper is a computer experiment performed using the Optisystem 9.0 software program as a simulator to determine the performance limitations of a wavelength hopping/time spreading (WH/TS) OCDMA system. The system parameters considered in this work are the optical fiber length (transmission distance), the bit rate, the chip spacing, and the transmitted power. The paper determines the ranges of these parameters for which the system maintains sufficient performance (BER ≤ 10^-9, Q ≥ 6).
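For illustration, the correlation properties such code families are designed for can be checked with a few lines of Python. This is only a sketch: the two codewords below are toy placeholders and will generally not meet the ZCC targets, since the real codewords come from the paper's BIBD construction:

    import numpy as np

    def periodic_correlation(a, b):
        """Periodic correlation of two equal-length 0/1 code sequences
        at every cyclic shift."""
        a, b = np.asarray(a), np.asarray(b)
        return np.array([np.sum(a * np.roll(b, s)) for s in range(len(a))])

    # Toy codewords for illustration only; real WH/TS codewords come from
    # the BIBD construction described in the paper.
    c1 = [1, 0, 0, 1, 0, 0, 0, 0, 0]
    c2 = [0, 1, 0, 0, 0, 1, 0, 0, 0]

    auto = periodic_correlation(c1, c1)
    cross = periodic_correlation(c1, c2)
    print("off-peak autocorrelation:", auto[1:])  # ZCC target: all zeros
    print("cross-correlation:", cross)            # ZCC target: at most unity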
System Design Techniques for Reducing the Power Requirements of Advanced life Support Systems
NASA Technical Reports Server (NTRS)
Finn, Cory; Levri, Julie; Pawlowski, Chris; Crawford, Sekou; Luna, Bernadette (Technical Monitor)
2000-01-01
The high power requirement associated with the overall operation of regenerative life support systems is a critical technological challenge. Optimization of individual processors alone will not be sufficient to produce an optimized system; system studies must be used in order to improve the overall efficiency of life support systems. Current research efforts at NASA Ames Research Center are aimed at developing approaches for reducing system power and energy usage in advanced life support systems. System energy integration and energy reuse techniques are being applied to advanced life support, in addition to advanced control methods for efficient distribution of power and thermal resources. An overview of current results of this work will be presented. The development of integrated system designs that reuse waste heat from sources such as crop lighting and solid waste processing systems will reduce overall power and cooling requirements. Using an energy integration technique known as Pinch analysis, system heat exchange designs are being developed that match hot and cold streams according to specific design principles. For various designs, the potential savings in power, heating, and cooling are being identified and quantified. The use of state-of-the-art control methods for distribution of resources, such as system cooling water or electrical power, will also reduce overall power and cooling requirements. Control algorithms are being developed which dynamically adjust the use of system resources by the various subsystems and components in order to achieve an overall goal, such as smoothing of power usage and/or heat rejection profiles, while maintaining adequate reserves of food, water, oxygen, and other consumables, and preventing excessive build-up of waste materials. Reductions in the peak loading of the power and thermal systems will lead to lower overall requirements. Computer simulation models are being used to test various control system designs.
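As a rough illustration of the Pinch analysis step mentioned above, the following sketch implements the standard problem-table heat cascade for two hypothetical hot and two hypothetical cold streams; all temperatures and heat-capacity flow rates are invented and unrelated to the NASA designs:

    # Problem-table (heat cascade) sketch for Pinch analysis.
    # Streams: (supply T [C], target T [C], heat-capacity flow rate CP [kW/K]).
    hot_streams  = [(150.0, 60.0, 2.0), (90.0, 60.0, 4.0)]
    cold_streams = [(20.0, 125.0, 1.8), (25.0, 100.0, 1.6)]
    dt_min = 10.0  # minimum approach temperature [K]

    # Shift temperatures so hot and cold streams share one scale.
    shifted = []
    for ts, tt, cp in hot_streams:
        shifted.append((ts - dt_min / 2, tt - dt_min / 2, cp))     # heat source
    for ts, tt, cp in cold_streams:
        shifted.append((ts + dt_min / 2, tt + dt_min / 2, -cp))    # heat sink

    bounds = sorted({t for ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)

    # Cascade surplus heat downward through each temperature interval.
    cascade, running = [0.0], 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        net_cp = sum(cp for ts, tt, cp in shifted
                     if min(ts, tt) <= lo and max(ts, tt) >= hi)
        running += net_cp * (hi - lo)
        cascade.append(running)

    hot_utility = -min(min(cascade), 0.0)      # minimum external heating [kW]
    cold_utility = cascade[-1] + hot_utility   # minimum external cooling [kW]
    print(f"minimum heating: {hot_utility:.1f} kW, "
          f"minimum cooling: {cold_utility:.1f} kW")

The most negative point of the cascade fixes the pinch and the minimum external heating; everything above that target is recovered by stream-to-stream heat exchange.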
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
..., ``Configuration Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., Reviews, and Audits for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This...
Observer-Based Discrete-Time Nonnegative Edge Synchronization of Networked Systems.
Su, Housheng; Wu, Han; Chen, Xia
2017-10-01
This paper studies the multi-input multi-output discrete-time nonnegative edge synchronization of networked systems based on neighbors' output information. The communication relationship among the edges of the networked systems is modeled by the well-known line graph. Two observer-based edge synchronization algorithms are designed, for which some necessary and sufficient synchronization conditions are derived. Moreover, some computable sufficient synchronization conditions are obtained, in which the feedback matrix and the observer matrix are computed by solving linear programming problems. Finally, several simulation examples are designed to demonstrate the validity of the proposed nonnegative edge synchronization algorithms.
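The line-graph modeling step, in which the edges of the network become the nodes of a new graph whose adjacencies record shared endpoints, can be reproduced directly with networkx; a small sketch (the example path graph is arbitrary):

    import networkx as nx

    # Edge synchronization is analyzed on the line graph L(G): its nodes are
    # the edges of the original network G, and two nodes are adjacent when
    # the corresponding edges of G share an endpoint.
    G = nx.path_graph(4)     # nodes 0-1-2-3, an arbitrary example network
    LG = nx.line_graph(G)    # nodes of LG are the edges of G

    print("edges of G:      ", list(G.edges()))
    print("nodes of L(G):   ", list(LG.nodes()))
    print("adjacency of L(G):", list(LG.edges()))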
Laser beamed power: Satellite demonstration applications
NASA Technical Reports Server (NTRS)
Landis, Geoffrey A.; Westerlund, Larry H.
1992-01-01
It is possible to use a ground-based laser to beam light to the solar arrays of orbiting satellites, to a level sufficient to provide all or some of the operating power required. Near-term applications of this technology for providing supplemental power to existing satellites are discussed. Two missions with significant commercial pay-off are supplementing solar power for radiation-degraded arrays and providing satellite power during eclipse for satellites with failed batteries.
Energy-efficient lighting system for television
Cawthorne, Duane C.
1987-07-21
A light control system for a television camera comprises an artificial light control system which is cooperative with an iris control system. This artificial light control system adjusts the power to lamps illuminating the camera viewing area to provide only sufficient artificial illumination necessary to provide a sufficient video signal when the camera iris is substantially open.
RERTR-7 Irradiation Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. M. Perez; M. A. Lillo; G. S. Chang
2011-12-01
The Reduced Enrichment for Research and Test Reactor (RERTR) experiment RERTR-7A was designed to test several modified fuel designs to target fission densities representative of a peak low-enriched uranium (LEU) burnup in excess of 90% U-235, at a peak experiment power sufficient to generate a peak surface heat flux of approximately 300 W/cm2. The RERTR-7B experiment was designed as a high-power test of 'second generation' dispersion fuels at a peak experiment power sufficient to generate a surface heat flux on the order of 230 W/cm2. The following report summarizes the life of the RERTR-7A and RERTR-7B experiments through end of irradiation, including as-run neutronic analyses, thermal analyses, and hydraulic testing results.
Towards a Sufficient Theory of Transition in Cognitive Development.
ERIC Educational Resources Information Center
Wallace, J. G.
The work reported aims at the construction of a sufficient theory of transition in cognitive development. The method of theory construction employed is computer simulation of cognitive process. The core of the model of transition presented comprises self-modification processes that, as a result of continuously monitoring an exhaustive record of…
Johansson, Kristoffer E; Tidemand Johansen, Nicolai; Christensen, Signe; Horowitz, Scott; Bardwell, James C A; Olsen, Johan G; Willemoës, Martin; Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper; Hamelryck, Thomas; Winther, Jakob R
2016-10-23
Despite the development of powerful computational tools, the full-sequence design of proteins still remains a challenging task. To investigate the limits and capabilities of computational tools, we conducted a study of the ability of the program Rosetta to predict sequences that recreate the authentic fold of thioredoxin. Focusing on the influence of conformational details in the template structures, we based our study on 8 experimentally determined template structures and generated 120 designs from each. For experimental evaluation, we chose six sequences from each of the eight templates by objective criteria. The 48 selected sequences were evaluated based on their progressive ability to (1) produce soluble protein in Escherichia coli and (2) yield stable monomeric protein, and (3) on the ability of the stable, soluble proteins to adopt the target fold. Of the 48 designs, we were able to synthesize 32, 20 of which resulted in soluble protein. Of these, only two were sufficiently stable to be purified. An X-ray crystal structure was solved for one of the designs, revealing a close resemblance to the target structure. We found a significant difference among the eight template structures to realize the above three criteria despite their high structural similarity. Thus, in order to improve the success rate of computational full-sequence design methods, we recommend that multiple template structures are used. Furthermore, this study shows that special care should be taken when optimizing the geometry of a structure prior to computational design when using a method that is based on rigid conformations.
2005-06-01
... <rdfs:subClassOf rdf:resource="#Condition"/> <rdfs:label>Economic Self-Sufficiency Class</rdfs:label> <cnd:categoryCode>C</cnd:categoryCode>... <cnd:index>3.3.4.1</cnd:index> <cnd:title>Economic Self-Sufficiency</cnd:title> <cnd:definition>The ability of a nation to... "#International_Economic_Position"/> <cnd:subCategory rdf:resource="#Self-Sufficiency_In_Food"/> <cnd:subCategory rdf:resource="#Self
Bounds on the power of proofs and advice in general physical theories.
Lee, Ciarán M; Hoban, Matty J
2016-06-01
Quantum theory presents us with the tools for computational and communication advantages over classical theory. One approach to uncovering the source of these advantages is to determine how computation and communication power vary as quantum theory is replaced by other operationally defined theories from a broad framework of such theories. Such investigations may reveal some of the key physical features required for powerful computation and communication. In this paper, we investigate how simple physical principles bound the power of two different computational paradigms which combine computation and communication in a non-trivial fashion: computation with advice and interactive proof systems. We show that the existence of non-trivial dynamics in a theory implies a bound on the power of computation with advice. Moreover, we provide an explicit example of a theory with no non-trivial dynamics in which the power of computation with advice is unbounded. Finally, we show that the power of simple interactive proof systems in theories where local measurements suffice for tomography is non-trivially bounded. This result provides a proof that QMA is contained in PP which does not make use of any uniquely quantum structure, such as the fact that observables correspond to self-adjoint operators, and thus may be of independent interest.
Power transfer for rotating medical machine.
Sofia, A; Tavilla, A C; Gardenghi, R; Nicolis, D; Stefanini, I
2016-08-01
Biological tissues often need to be treated inside a biomedical centrifuge during the centrifugation step, without interrupting the process. In this paper, an advantageous energy transfer method capable of providing sufficient electric power to the rotating, active part is presented.
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations, and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method, and the bootstrap method. Among the three, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by their relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method outperform Sobel's method; the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for product-method sample size determination in longitudinal mediation study design.
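A minimal version of such a simulation-based power calculation is easy to sketch. The example below estimates the power of the Sobel test by Monte Carlo for a deliberately simplified single-level, cross-sectional mediation model (the paper's setting is multilevel and longitudinal); effect sizes, sample sizes, and the number of replications are illustrative:

    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(1)

    def sobel_power(n, a=0.3, b=0.3, n_sim=2000, z_crit=1.96):
        """Monte Carlo power of the Sobel test for X -> M -> Y mediation.
        Single-level cross-sectional model; a deliberate simplification of
        the paper's multilevel longitudinal setting."""
        hits = 0
        for _ in range(n_sim):
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)   # mediator model
            y = b * m + rng.normal(size=n)   # outcome model (no direct effect)
            fa = linregress(x, m)            # a-path estimate and its SE
            fb = linregress(m, y)            # b-path estimate and its SE
            se = np.sqrt(fb.slope**2 * fa.stderr**2 + fa.slope**2 * fb.stderr**2)
            hits += abs(fa.slope * fb.slope / se) > z_crit
        return hits / n_sim

    for n in (50, 100, 200):
        print(f"n = {n:3d}: estimated power ~ {sobel_power(n):.2f}")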
Center for Space Power, Texas A and M University
NASA Technical Reports Server (NTRS)
Jones, Ken
1991-01-01
Johnson Controls is a 106 year old company employing 42,000 people worldwide with $4.7 billion annual sales. Though we are new to the aerospace industry, we are a world leader in automobile battery manufacturing, automotive seating, plastic bottling, and facilities environment controls. The battery division produces over 24,000,000 batteries annually under private label for the new car manufacturers and the replacement market. We are entering the aerospace market with the nickel hydrogen battery with the help of NASA's Center for Space Power at Texas A&M. Unlike traditional nickel hydrogen battery manufacturers, we are reaching beyond the space applications to the higher volume markets of aircraft starting and utility load leveling. Though space applications alone will not provide sufficient volume to support the economies of scale and opportunities for statistical process control, these additional terrestrial applications will. For example, nickel hydrogen batteries do not have the environmental problems of nickel cadmium or lead acid and may someday start your car or power your electric vehicle. However you envision the future, keep in mind that no manufacturer moves into a large volume market without fine tuning their process. The Center for Space Power at Texas A&M is providing in-depth technical analysis of all of the materials and fabricated parts of our battery as well as thermal and mechanical design computer modeling. Several examples of what we are doing with nickel hydrogen chemistry to lead to these production efficiencies are presented.
Investigations on the potential of a low power diode pumped Er:YAG laser system for oral surgery
NASA Astrophysics Data System (ADS)
Stock, Karl; Wurm, Holger; Hausladen, Florian; Wagner, Sophia; Hibst, Raimund
2015-02-01
Flash-lamp-pumped Er:YAG lasers are successfully used in clinical practice for dental applications. As an alternative, several diode-pumped Er:YAG laser systems (Pantec Engineering AG) have become available, with mean laser powers of 2 W, 15 W, and 30 W. The aim of the presented study is to investigate the potential of the 2 W Er:YAG laser system for oral surgery. First, an appropriate experimental set-up was realized, with a beam delivery and both a focusing unit for non-contact tissue cutting and a fiber tip for tissue cutting in contact mode. In order to produce reproducible cuts, the samples (porcine gingiva) were moved by a computer-controlled translation stage. Cutting depth and quality were determined on the fresh samples by light microscopy. Afterwards, histological sections were prepared and microscopically analyzed regarding cutting depth and the thermal damage zone. The experiments show that a low laser power of 2 W or less is sufficient to perform efficient oral soft tissue cutting, with cut depths up to 2 mm (sample movement 2 mm/s). The width of the thermal damage zone can be controlled by the irradiation parameters within a range of about 50 μm to 110 μm. In general, thermal injury is more pronounced using fiber tips in contact mode than with the focused laser beam. In conclusion, the results reveal that even the low-power diode-pumped Er:YAG laser is an appropriate tool for oral surgery.
Akoglu, Haldun; Akoglu, Ebru Unal; Evman, Serdar; Akoglu, Tayfun; Denizbasi, Arzu; Guneysel, Ozlem; Onur, Ozge; Onur, Ender
2012-10-01
Small pneumothoraces (PXs) that are not initially recognized on a chest x-ray film and are diagnosed by thoracic computed tomography (CT) are described as occult PXs (OCPXs). The objective of this study was to evaluate cervical spine (C-spine) and abdominal CT (ACT) for diagnosing OCPX and overt PX (OVPX). All patients with blunt trauma who presented consecutively to the emergency department during a 26-month period were included. Among all the chest CTs (CCTs) (6,155 patients) conducted during that period, 254 scans were confirmed to have a true PX. The findings of their C-spine CTs and ACTs were compared with the findings of the CCTs. OCPXs were identified on the chest CT scans of 128 patients (70.3%), whereas OVPXs were evident in 54 patients (29.7%). CT imaging of the C-spine was performed in 74% of patients with OCPX and 66.7% of patients with OVPX. Only 45 (35.2%) cases of OCPX and 42 (77.8%) cases of OVPX were detected by C-spine CT. ACT was performed in almost all patients, and 121 (95.3%) of 127 of these correctly identified an existing OCPX. Sensitivity of C-spine CT and ACT was 35.1% and 96.5%, respectively; specificity was 100% for both. Almost all OCPXs, regardless of intrathoracic location, could be detected by ACT or by combining C-spine and abdominal CT screening. If the junction of the first and second vertebra is used as the caudad extent, C-spine CT does not have sufficient power to diagnose more than a third of the cases. Diagnostic study, level III.
Takata, Munehisa; Watanabe, Go; Ohtake, Hiroshi; Ushijima, Teruaki; Yamaguchi, Shojiro; Kikuchi, Yujiro; Yamamoto, Yoshitaka
2011-05-01
This study applied a computer-controlled mechanical stapler to vascular end-to-end anastomosis to achieve an automatic aortic anastomosis between the aorta and an artificial graft. In this experimental study, we created a mechanical end-to-end anastomotic model and assessed the strength of the anastomotic site under high pressure. We used a computer-controlled circular stapler named iDrive (Power Medical Interventions, Covidien plc, Dublin, Ireland) for the anastomosis between the porcine aorta and an artificial graft. The mechanically stapled group (group A) and the manually sutured group (group B) were then compared 10 times, and the differences were assessed at several levels of pressure. To use a mechanical stapler in vascular anastomosis, special preparation of both the aorta and the artificial graft is necessary to narrow the open end before the procedure. To solve this problem, we established a specially designed purse-string suture for both and finally achieved end-to-end vascular anastomosis. The anastomosis speed of group A was statistically significantly faster than that of group B (P < .01). The group A anastomotic sites also showed significantly more tolerance to high pressure than those of group B. The computer-controlled stapling device enabled reliable anastomosis of the aorta and the artificial graft. This study showed that mechanical vascular anastomosis with the iDrive was sufficiently strong and safe relative to manual suturing.
An entangled-light-emitting diode.
Salter, C L; Stevenson, R M; Farrer, I; Nicoll, C A; Ritchie, D A; Shields, A J
2010-06-03
An optical quantum computer, powerful enough to solve problems so far intractable using conventional digital logic, requires a large number of entangled photons. At present, entangled-light sources are optically driven with lasers, which are impractical for quantum computing owing to the bulk and complexity of the optics required for large-scale applications. Parametric down-conversion is the most widely used source of entangled light, and has been used to implement non-destructive quantum logic gates. However, these sources are Poissonian and probabilistically emit zero or multiple entangled photon pairs in most cycles, fundamentally limiting the success probability of quantum computational operations. These complications can be overcome by using an electrically driven on-demand source of entangled photon pairs, but so far such a source has not been produced. Here we report the realization of an electrically driven source of entangled photon pairs, consisting of a quantum dot embedded in a semiconductor light-emitting diode (LED) structure. We show that the device emits entangled photon pairs under d.c. and a.c. injection, the latter achieving an entanglement fidelity of up to 0.82. Entangled light with such high fidelity is sufficient for application in quantum relays, in core components of quantum computing such as teleportation, and in entanglement swapping. The a.c. operation of the entangled-light-emitting diode (ELED) indicates its potential function as an on-demand source without the need for a complicated laser driving system; consequently, the ELED is at present the best source on which to base future scalable quantum information applications.
Van de Kamer, J B; Lagendijk, J J W
2002-05-21
SAR distributions in a healthy female adult head resulting from a radiating vertical dipole antenna (frequency 915 MHz), representing a hand-held mobile phone, have been computed for three different resolutions: 2 mm, 1 mm and 0.4 mm. The extremely high resolution of 0.4 mm was obtained with our quasistatic zooming technique, which is briefly described in this paper. For an effectively transmitted power of 0.25 W, the maximum averaged SAR values in cubic- and arbitrary-shaped volumes are, respectively, about 1.72 and 2.55 W kg(-1) for 1 g and 0.98 and 1.73 W kg(-1) for 10 g of tissue. These numbers do not vary much (<8%) between the different resolutions, indicating that SAR computations at a resolution of 2 mm are sufficiently accurate to describe the large-scale distribution. However, considering the detailed SAR pattern in the head, large differences may occur if high-resolution computations are performed rather than low-resolution ones. These deviations are caused by both increased modelling accuracy and improved anatomical description in higher-resolution simulations. For example, the SAR profile across a boundary between tissues with high dielectric contrast is much more accurately described at higher resolutions. Furthermore, low-resolution dielectric geometries may suffer from loss of anatomical detail, which greatly affects small-scale SAR distributions. Thus, for strongly inhomogeneous regions, high-resolution SAR modelling is an absolute necessity.
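The pointwise quantity behind these maps is the local SAR, sigma*|E_rms|^2/rho. A sketch with illustrative, textbook-order tissue values (not the study's data):

    # Local specific absorption rate: SAR = sigma * |E_rms|^2 / rho, with
    # sigma the tissue conductivity [S/m], E_rms the rms electric field
    # [V/m], and rho the mass density [kg/m^3]. Values are illustrative.
    sigma = 1.0      # conductivity of muscle-like tissue near 900 MHz [S/m]
    rho = 1050.0     # tissue mass density [kg/m^3]
    e_rms = 40.0     # local rms electric field [V/m]

    sar = sigma * e_rms ** 2 / rho
    print(f"local SAR: {sar:.2f} W/kg")

The 1 g and 10 g figures quoted above are then obtained by averaging this local quantity over contiguous tissue volumes of the corresponding mass.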
40 CFR 144.7 - Identification of underground sources of drinking water and exempted aquifers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... lifetime of the GS project, as informed by computational modeling performed pursuant to § 146.84(c)(1), in... exemption is of sufficient size to account for any possible revisions to the computational model during...
40 CFR 144.7 - Identification of underground sources of drinking water and exempted aquifers.
Code of Federal Regulations, 2012 CFR
2012-07-01
... lifetime of the GS project, as informed by computational modeling performed pursuant to § 146.84(c)(1), in... exemption is of sufficient size to account for any possible revisions to the computational model during...
40 CFR 144.7 - Identification of underground sources of drinking water and exempted aquifers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... lifetime of the GS project, as informed by computational modeling performed pursuant to § 146.84(c)(1), in... exemption is of sufficient size to account for any possible revisions to the computational model during...
STATE EXECUTIVE AUTHORITY TO PROMOTE CIVIL RIGHTS, AN ACTION PROGRAM FOR THE 1960'S.
ERIC Educational Resources Information Center
SILARD, JOHN
The question of the governor's power regarding civil rights issues was discussed. Through the "Governor's Code of Fair Practices," which briefly stated that the state's basic policy was against discrimination, the governor as well as all state officials had sufficient power to fight discrimination. The officials had further power with…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-02
... high wind conditions pass, wind damage to the plant and surrounding area might preclude a sufficient... Power Station, Units 1, 2 and 3, Dominion Nuclear Connecticut, Inc.; Exemption 1.0 Background Dominion..., DPR-65 and NPF-49, which authorize operation of the Millstone Power Station, Unit Nos. 1, 2 and 3...
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today, such as the common actuator disk concept, are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal-axis, devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of this work will be a cross-flow turbine actuator line model to be used as an extension to the OpenFOAM computational fluid dynamics (CFD) software framework, which will likely require modifications to commonly used dynamic stall models, in consideration of the turbines' high angle-of-attack excursions during normal operation.
NASA Astrophysics Data System (ADS)
Orhan, K.; Mayerle, R.
2016-12-01
A methodology comprising estimates of power yield, evaluation of the effects of power extraction on flow conditions, and near-field investigations of wake characteristics, recovery, and interactions is described and applied to several straits in Indonesia. Site selection is done with high-resolution, three-dimensional flow models providing sufficient spatiotemporal coverage. Much attention has been given to the meteorological forcing and the conditions at the open sea boundaries to adequately capture the density gradients and flow fields. Model verification using tidal records shows excellent agreement. Sites with adequate depth for energy conversion using horizontal-axis tidal turbines, an average kinetic power density greater than 0.5 kW/m2, and a surface area larger than 0.5 km2 are defined as energy hotspots. The spatial variation of the average extractable electric power is determined, and the annual tidal energy resource is estimated for the straits in question. The results showed that the potential for tidal power generation in Indonesia is likely to exceed previous predictions, reaching around 4,800 MW. To assess the impact of the devices, flexible mesh models with higher resolutions have been developed. Effects on flow conditions and near-field turbine wakes are resolved in greater detail with triangular horizontal grids. The energy is assumed to be removed uniformly by sub-grid-scale arrays of turbines, and calculations are made based on velocities at the hub heights of the devices. An additional drag force, resulting in dissipation of 10% to 60% of the pre-existing kinetic power within a flow cross-section, is introduced to capture the impacts. It was found that the effect of power extraction on water levels and flow speeds in adjacent areas is not significant. Results show the effectiveness of the method in capturing wake characteristics and recovery reasonably well at low computational cost.
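The hotspot screening step can be illustrated with the standard kinetic power density formula P/A = 0.5*rho*v^3; the thresholds below are those quoted in the abstract, while the current speeds are invented examples:

    # Kinetic power density of a tidal current: P/A = 0.5 * rho * v^3.
    # Hotspot thresholds are from the abstract; speeds are illustrative.
    # (Strictly, the abstract's criterion uses the time-averaged density,
    # i.e. the mean of v^3, not the cube of the mean speed.)
    rho = 1025.0  # seawater density [kg/m^3]

    def is_hotspot(speed_ms, area_km2, depth_ok):
        power_density = 0.5 * rho * speed_ms ** 3   # [W/m^2]
        return power_density > 500.0 and area_km2 > 0.5 and depth_ok

    for v in (1.0, 1.5, 2.0):
        pd = 0.5 * rho * v ** 3
        print(f"v = {v} m/s -> {pd / 1000:.2f} kW/m^2, "
              f"hotspot: {is_hotspot(v, area_km2=1.0, depth_ok=True)}")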
NASA Astrophysics Data System (ADS)
Ishii, Ayako; Ohnishi, Naofumi; Nagakura, Hiroki; Ito, Hirotaka; Yamada, Shoichi
2017-11-01
We developed a three-dimensional radiative transfer code for an ultra-relativistic background flow-field using the Monte Carlo (MC) method, in the context of gamma-ray burst (GRB) emission. To obtain reliable results when coupling MC radiation transport with relativistic hydrodynamics to reproduce GRB emission, we validated the radiative transfer computation in the ultra-relativistic regime and assessed the appropriate simulation conditions. The radiative transfer code was validated through two test calculations: (1) computing in different inertial frames and (2) computing in flow-fields with discontinuous and smeared shock fronts. The simulated angular distributions and spectra were compared among three different inertial frames and found to be in good agreement with each other. When the interval for updating the flow-field was small enough to resolve a photon mean free path into ten steps, the results were fully converged. The spectrum computed in the flow-field with a discontinuous shock front obeyed a power law in frequency whose index was positive in the range from 1 to 10 MeV. The number of photons on the high-energy side decreased with the smeared shock front because the photons were less scattered immediately behind the shock wave, due to the small electron number density there. A large optical depth near the shock front is needed to obtain high-energy photons through bulk Compton scattering, so even the one-dimensional structure of the shock wave can affect the results of the radiation transport computation. Although we examined the effect of the shock structure on the emitted spectrum with a large number of cells, it is hard to employ so many computational cells per dimension in multi-dimensional simulations. A further investigation with a smaller number of cells is therefore required for obtaining realistic high-energy photons in multi-dimensional computations.
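The quoted convergence criterion (ten flight steps per photon mean free path) translates into a simple bound on the flow-field update interval. A sketch assuming pure Thomson scattering off free electrons, with an arbitrary example density:

    # Time-step criterion for coupling MC transport to hydrodynamics:
    # resolve one photon mean free path into ~10 flight steps.
    # lambda_mfp = 1 / (n_e * sigma_T); the density is an invented example.
    C = 2.99792458e10          # speed of light [cm/s]
    SIGMA_T = 6.6524587e-25    # Thomson cross-section [cm^2]

    n_e = 1.0e15               # electron number density [cm^-3] (example)
    mfp = 1.0 / (n_e * SIGMA_T)    # photon mean free path [cm]
    dt_max = mfp / (10.0 * C)      # the ten-steps-per-mfp criterion

    print(f"mean free path: {mfp:.3e} cm")
    print(f"maximum flow-field update interval: {dt_max:.3e} s")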
Computer programs for calculating two-dimensional potential flow through deflected nozzles
NASA Technical Reports Server (NTRS)
Hawk, J. D.; Stockman, N. O.
1979-01-01
Computer programs to calculate the incompressible potential flow, corrected for compressibility, in two-dimensional nozzles at arbitrary operating conditions are presented. A statement of the problem to be solved, a description of each of the computer programs, and sufficient documentation, including a test case, to enable a user to run the program are included.
Task-Relevant Sound and User Experience in Computer-Mediated Firefighter Training
ERIC Educational Resources Information Center
Houtkamp, Joske M.; Toet, Alexander; Bos, Frank A.
2012-01-01
The authors added task-relevant sounds to a computer-mediated instructor in-the-loop virtual training for firefighter commanders in an attempt to raise the engagement and arousal of the users. Computer-mediated training for crew commanders should provide a sensory experience that is sufficiently intense to make the training viable and effective.…
Middle School Girls' Envisioned Future in Computing
ERIC Educational Resources Information Center
Friend, Michelle
2015-01-01
Experience is necessary but not sufficient to cause girls to envision a future career in computing. This study investigated the experiences and attitudes of girls who had taken three years of mandatory computer science classes in an all-girls setting in middle school, measured at the end of eighth grade. The one third of participants who were open…
Photovoltaic receivers for laser beamed power in space
NASA Technical Reports Server (NTRS)
Landis, Geoffrey A.
1991-01-01
There has recently been a resurgence of interest in the use of beamed power to support space exploration activities. One of the most promising beamed power concepts uses a laser beam to transmit power to a remote photovoltaic array. Large lasers can be located on cloud-free sites at one or more ground locations and illuminate solar arrays to a level sufficient to provide operating power. Issues involved in providing photovoltaic receivers for such applications are discussed.
Convexity of Energy-Like Functions: Theoretical Results and Applications to Power System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dvijotham, Krishnamurthy; Low, Steven; Chertkov, Michael
2015-01-12
Power systems are undergoing unprecedented transformations with increased adoption of renewables and distributed generation, as well as the adoption of demand response programs. All of these changes, while making the grid more responsive and potentially more efficient, pose significant challenges for power systems operators. Conventional operational paradigms are no longer sufficient, as the power system may no longer have big dispatchable generators with sufficient positive and negative reserves. This increases the need for tools and algorithms that can efficiently predict safe regions of operation of the power system. In this paper, we study energy functions as a tool to design algorithms for various operational problems in power systems. These have a long history in power systems and have been primarily applied to transient stability problems. In this paper, we take a new look at power systems, focusing on an aspect that has previously received little attention: convexity. We characterize the domain of voltage magnitudes and phases within which the energy function is convex in these variables. We show that this corresponds naturally with standard operational constraints imposed in power systems. We show that the power flow equations can be solved using this approach, as long as the solution lies within the convexity domain. We outline various desirable properties of solutions in the convexity domain and present simple numerical illustrations supporting our results.
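As a sketch of what such a convexity check can look like, consider the standard lossless energy function E(theta) = -sum over lines of b_ij*cos(theta_i - theta_j): its Hessian is the Laplacian of the network weighted by b_ij*cos(theta_i - theta_j), which is positive semidefinite whenever all angle differences stay within plus or minus 90 degrees. The toy network and operating point below are invented, and the paper's energy functions also involve voltage-magnitude terms:

    import numpy as np

    # Hessian of E(theta) = -sum b_ij cos(theta_i - theta_j) is the graph
    # Laplacian with edge weights w_ij = b_ij * cos(theta_i - theta_j);
    # E is locally convex where this Laplacian is positive semidefinite.
    edges = {(0, 1): 5.0, (1, 2): 4.0, (0, 2): 2.0}   # b_ij [p.u.], toy network
    theta = np.array([0.0, 0.3, 0.55])                # operating point [rad], toy

    n = 3
    H = np.zeros((n, n))
    for (i, j), b in edges.items():
        w = b * np.cos(theta[i] - theta[j])
        H[i, i] += w
        H[j, j] += w
        H[i, j] -= w
        H[j, i] -= w

    eigs = np.linalg.eigvalsh(H)
    print("Hessian eigenvalues:", np.round(eigs, 3))
    print("locally convex at this point:", bool(eigs.min() > -1e-9))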
Microresonator Frequency Comb Optical Clock
2014-07-22
...monolithic construction with small size and power consumption. Microcomb development has included frequency control of their spectra [8-11]... frequency f_eo and amplified to a maximum of 140 mW. The first-order sideband powers are approximately 3 dB lower than the pump, and the piece of highly... resonator offers sufficient peak power for our experiments and is stable and repeatable even for different settings of pump frequency and power.
Energy Efficiency Challenges of 5G Small Cell Networks.
Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang
2017-05-01
The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to whether computation or transmission power matters more for the energy efficiency of 5G small cell networks. Thus, the main objective of this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 watts when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks.
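The Landauer principle invoked above sets a floor of E = k_B*T*ln(2) per irreversible bit operation; a small sketch comparing that floor with the quoted base-station figures (the operation rate is an invented example):

    import math

    # Landauer limit: minimum energy to erase one bit, E = k_B * T * ln(2).
    K_B = 1.380649e-23   # Boltzmann constant [J/K]
    T = 300.0            # operating temperature [K]

    e_bit = K_B * T * math.log(2)
    print(f"Landauer limit at {T:.0f} K: {e_bit:.2e} J/bit")

    # Hypothetical baseband load, for scale only: even 10^18 bit operations
    # per second would dissipate just milliwatts at the Landauer limit, so
    # the ~800 W computation power quoted above reflects the overheads of
    # real hardware, far above the thermodynamic floor.
    ops_per_s = 1.0e18
    print(f"ideal power at that rate: {e_bit * ops_per_s * 1e3:.2f} mW")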
Energy Use and Power Levels in New Monitors and Personal Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay
2002-07-23
Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in, and opportunities to reduce, power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC). Current ENERGY STAR monitor and computer criteria do not specify off or on power, but our results suggest opportunities for saving energy in these modes. Also, significant differences between CRT and LCD technology, and between field-measured and manufacturer-reported power levels, reveal the need for standard methods and metrics for measuring and comparing monitor power consumption.
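The unit energy consumption (UEC) referred to above is a duty-cycle-weighted sum of the mode powers, UEC = sum of P_mode * hours_mode. A sketch with hypothetical mode powers and a hypothetical usage profile (not the report's measurements):

    # Annual unit energy consumption from per-mode power draw and usage.
    # The power levels and daily usage profile below are hypothetical.
    mode_power_w = {"on": 65.0, "sleep": 3.0, "off": 2.0}       # draw per mode [W]
    mode_hours_per_day = {"on": 7.0, "sleep": 10.0, "off": 7.0} # usage profile [h]

    uec_kwh = sum(mode_power_w[m] * mode_hours_per_day[m]
                  for m in mode_power_w) * 365 / 1000
    print(f"estimated UEC: {uec_kwh:.0f} kWh/year")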
Electrolytic plating apparatus for discrete microsized particles
Mayer, Anton
1976-11-30
Method and apparatus are disclosed for electrolytically producing very uniform coatings of a desired material on discrete microsized particles. Agglomeration or bridging of the particles during the deposition process is prevented by imparting a sufficiently random motion to the particles that they are not in contact with a powered cathode long enough for agglomeration or bridging to occur.
Beam and Plasma Physics Research
1990-06-01
La di~raDy in high power microwave computations and thi-ory and high energy plasma computations and theory. The HPM computations concentrated on...2.1 REPORT INDEX 7 2.2 TASK AREA 2: HIGH-POWER RF EMISSION AND CHARGED- PARTICLE BEAM PHYSICS COMPUTATION , MODELING AND THEORY 10 2.2.1 Subtask 02-01...Vulnerability of Space Assets 22 2.2.6 Subtask 02-06, Microwave Computer Program Enhancements 22 2.2.7 Subtask 02-07, High-Power Microwave Transvertron Design 23
3-D Electromagnetic field analysis of wireless power transfer system using K computer
NASA Astrophysics Data System (ADS)
Kawase, Yoshihiro; Yamaguchi, Tadashi; Murashita, Masaya; Tsukada, Shota; Ota, Tomohiro; Yamamoto, Takeshi
2018-05-01
We analyze the electromagnetic field of a wireless power transfer system using the 3-D parallel finite element method on the K computer, a supercomputer in Japan. It is clarified that the electromagnetic field of the wireless power transfer system can be analyzed in a practical time using parallel computation on the K computer; moreover, the accuracy of the loss calculation improves as the mesh division of the shield becomes finer.
Computer program analyzes and monitors electrical power systems (POSIMO)
NASA Technical Reports Server (NTRS)
Jaeger, K.
1972-01-01
Requirements to monitor and/or simulate electric power distribution, power balance, and charge budget are discussed. A computer program to analyze a power system and generate a set of characteristic power system data is described. Application of status indicators to denote different exclusive conditions is presented.
NASA Technical Reports Server (NTRS)
Schilling, D. L.; Oh, S. J.; Thau, F.
1975-01-01
Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
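A linear delta modulator of the kind referred to above is compact enough to sketch in full: each sample is encoded as one bit saying whether the input is above or below the decoder's running estimate, which is then stepped up or down accordingly. The step size and test signal are illustrative:

    import math

    # Linear delta modulation: one bit per sample (input above or below the
    # predictor); the receiver reconstructs by integrating +/- step.
    step = 0.15
    signal = [math.sin(2 * math.pi * t / 50.0) for t in range(200)]

    estimate, bits, recon = 0.0, [], []
    for s in signal:
        bit = 1 if s > estimate else 0
        estimate += step if bit else -step
        bits.append(bit)
        recon.append(estimate)

    # With step > max per-sample slope of the input, slope overload is
    # avoided and the error stays within roughly one step of the signal.
    err = max(abs(s - r) for s, r in zip(signal, recon))
    print(f"1 bit/sample, peak reconstruction error: {err:.3f}")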
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro
2017-08-01
In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of Things (IoT) applications. In energy-harvesting applications, power supplies generated from renewable sources cause frequent power failures, so the data being processed must be backed up when a failure occurs. Unless data are safely backed up before the power supply diminishes, a reinitialization process is required when power is restored, resulting in low energy efficiency and slow operation. Using nonvolatile devices in processors and memories enables a faster backup than a conventional volatile computer system, leading to higher energy efficiency. To evaluate the energy efficiency under frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows a few orders of magnitude reduction in energy in comparison with a volatile processor with SRAM.
Balancing computation and communication power in power constrained clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piga, Leonardo; Paul, Indrani; Huang, Wei
Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
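A sketch of the gating-and-reassignment policy described in the abstract; the threshold, predicted waits, per-node budgets, and the fraction of power freed by gating are all invented for illustration:

    # Power-gating policy sketch: nodes predicting a long wait enter a
    # low-power state, and their freed power budget is redistributed to
    # the still-active nodes. All numbers are illustrative.
    GATE_THRESHOLD_S = 2.0
    node_power_w = {"n0": 200.0, "n1": 200.0, "n2": 200.0, "n3": 200.0}
    predicted_wait_s = {"n0": 0.1, "n1": 5.0, "n2": 0.3, "n3": 8.0}

    gated = {n for n, w in predicted_wait_s.items() if w > GATE_THRESHOLD_S}
    freed_w = sum(node_power_w[n] for n in gated) * 0.8  # assume 80% is freed
    active = [n for n in node_power_w if n not in gated]
    boost_w = freed_w / len(active)

    for n in active:
        node_power_w[n] += boost_w   # expedite the nodes still working
    print("gated:", sorted(gated))
    print("new active budgets:", {n: node_power_w[n] for n in active})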
Vibration harvesting in traffic tunnels to power wireless sensor nodes
NASA Astrophysics Data System (ADS)
Wischke, M.; Masur, M.; Kröner, M.; Woias, P.
2011-08-01
Monitoring the traffic and the structural health of traffic tunnels requires numerous sensors. Powering these remote and partially embedded sensors from ambient energies will reduce maintenance costs, and improve the sensor network performance. This work reports on vibration levels detected in railway and road tunnels as a potential energy source for embedded sensors. The measurement results showed that the vibrations at any location in the road tunnel and at the wall in the railway tunnel are too small for useful vibration harvesting. In contrast, the railway sleeper features usable vibrations and sufficient mounting space. For this application site, a robust piezoelectric vibration harvester was designed and equipped with a power interface circuit. Within the field test, it is demonstrated that sufficient energy is harvested to supply a microcontroller with a radio frequency (RF) interface.
Boundary Work and Power in the Controversy over Therapeutic Touch in Finnish Nursing Science
ERIC Educational Resources Information Center
Vuolanto, Pia
2015-01-01
The boundary work approach has been established as one of the main ways to study controversies in science. However, it has been proposed that it does not meet the power dynamics of the scientific field sufficiently. This article concentrates on the intertwining of boundary work and power. It combines the boundary work approach developed by Thomas…
Modelling and stability analysis of switching impulsive power systems with multiple equilibria
NASA Astrophysics Data System (ADS)
Zhu, Liying; Qiu, Jianbin; Chadli, Mohammed
2017-12-01
This paper models power systems subject to a series of faults as switched impulsive Hamiltonian systems (SIHSs) with multiple equilibria (ME) and unstable subsystems (US), and then analyzes the long-term stability of such power systems mathematically. Reflecting the complex phenomena of stage and generator switching, state impulses, and the existence of multiple equilibria, this paper first introduces an SIHS with ME and US to formulate a switching impulsive power system composed of an active generator, a standby generator, and an infinite load. Second, based on the special system structure, a unique compact region containing all ME is determined, and novel stability concepts of region stability (RS), asymptotic region stability (ARS), and exponential region stability (ERS) are defined for such SIHSs with respect to this region. Third, based on the introduced stability concepts, this paper proposes a necessary and sufficient condition for RS and ARS and a sufficient condition for ERS of the power system with respect to the region, via the maximum energy function method. Finally, numerical simulations are carried out for a power system to show the effectiveness and practicality of the obtained results.
1981-09-01
...power supplies of the transponder to provide a maximum 23.5 dBW power output. Tables 3-5 and 3-6 present the cost development for this configuration... configurations studied, the cavity oscillator tube provides the necessary output characteristics for proper operation of the DABS transponder. Power supplies, however, are affected by each configuration. The power supply was designed to provide 141 watts peak power at the antenna and sufficient capacity in...
Photovoltaic power system for a lunar base
NASA Astrophysics Data System (ADS)
Karia, Kris
An assessment is provided of the viability of using photovoltaic power technology for lunar base application during the initial phase of the mission. The initial user power demands were assumed to be 25 kW (daytime) and 12.5 kW (night time). The effects of adverse lunar environmental conditions were also considered in deriving the photovoltaic power system concept. The solar cell array was found to impose no more design constraints than the solar arrays currently being designed for spacecraft and the Space Station Freedom. The long lunar night, and the need to store sufficient energy to sustain a lunar facility during this period, was found to be a major design driver. A photovoltaic power system concept was derived using high-efficiency thin GaAs solar cells on a deployable flexible Kapton blanket. The solar array design was sized to generate sufficient power for daytime use and for a regenerative fuel cell (RFC) energy storage system to provide power during the night. Solar array sun-tracking is also proposed to maximize the array power output capability. The system launch mass was estimated to be approximately 10 metric tons. For mission application of photovoltaic technology, other issues have to be addressed, including the constraints imposed by the launch vehicle, safety, and cost. For the initial phase of the mission a photovoltaic power system offers a safe option.
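The night-time storage design driver is visible from simple sizing arithmetic. A sketch using the abstract's load figures; the lunar night duration is the usual ~354 hours, while the RFC round-trip efficiency and array specific output are assumptions:

    # Lunar-base photovoltaic sizing sketch. Day/night loads are from the
    # abstract; RFC efficiency and array performance are assumptions.
    day_load_kw = 25.0
    night_load_kw = 12.5
    night_hours = 354.0       # ~14.75 Earth days; lunar day is similar
    rfc_efficiency = 0.55     # assumed RFC round-trip efficiency
    array_w_per_m2 = 300.0    # assumed GaAs array output with sun-tracking

    night_energy_kwh = night_load_kw * night_hours
    # Energy to recharge the RFC is generated over the (equally long) day.
    recharge_kw = night_energy_kwh / rfc_efficiency / night_hours
    array_kw = day_load_kw + recharge_kw

    print(f"night storage requirement: {night_energy_kwh:.0f} kWh")
    print(f"array size: {array_kw:.0f} kW "
          f"(~{array_kw * 1000 / array_w_per_m2:.0f} m^2)")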
46 CFR 112.05-1 - Purpose; preemptive effect.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING EMERGENCY LIGHTING AND POWER SYSTEMS General § 112.05-1 Purpose; preemptive effect. (a) The purpose of this part is to ensure a dependable, independent, and dedicated emergency power source with sufficient capacity to supply...
46 CFR 112.05-1 - Purpose; preemptive effect.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING EMERGENCY LIGHTING AND POWER SYSTEMS General § 112.05-1 Purpose; preemptive effect. (a) The purpose of this part is to ensure a dependable, independent, and dedicated emergency power source with sufficient capacity to supply...
46 CFR 112.05-1 - Purpose; preemptive effect.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING EMERGENCY LIGHTING AND POWER SYSTEMS General § 112.05-1 Purpose; preemptive effect. (a) The purpose of this part is to ensure a dependable, independent, and dedicated emergency power source with sufficient capacity to supply...
ERIC Educational Resources Information Center
Wong, Kelvin; Neves, Ana; Negreiros, Joao
2017-01-01
University students in Macao are required to attend computer literacy courses to raise their basic skill levels and knowledge as part of their literacy foundation. Still, teachers frequently complain about the weak IT skills of many students, suggesting that most of them may not be benefiting sufficiently from their computer literacy courses.…
System-wide power management control via clock distribution network
Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.
2015-05-19
An apparatus, method, and computer program product for automatically controlling power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The processors receive the system clock signal, including the encoded command, and adjust their power dissipation according to the encoded command.
Reducing power consumption while performing collective operations on a plurality of compute nodes
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2011-10-18
Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
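The selection step described in this patent abstract lends itself to a compact illustration. The Python sketch below picks the lowest-power implementation of a requested collective from a lookup table; the implementation names and wattage figures are invented for illustration and are not taken from the patent.

```python
# Hypothetical power-characteristics table: watts drawn per node while each
# implementation of a collective runs. Names and numbers are illustrative only.
POWER_PROFILE = {
    "allreduce": {"recursive_doubling": 14.0, "ring": 9.5, "tree": 11.0},
    "broadcast": {"binomial_tree": 8.0, "scatter_allgather": 10.5},
}

def select_collective(op_type: str) -> str:
    """Pick the implementation of op_type with the lowest power draw,
    mirroring the per-node selection step described above."""
    candidates = POWER_PROFILE[op_type]
    return min(candidates, key=candidates.get)

print(select_collective("allreduce"))  # -> "ring"
```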
2007-09-01
devices such as klystrons, magnetrons, and traveling wave tubes. These microwave devices produce high power levels but may have limited bandwidths [20]...diagram. The specific arrangement of components within a RADAR transmitter varies with operational specifications. Two options exist to produce high power...cascading to generate sufficient power [20]. The second option to generate high power levels is to replace RF oscillators and amplifiers with microwave
Solar Power Generation in Extreme Space Environments
NASA Technical Reports Server (NTRS)
Elliott, Frederick W.; Piszczor, Michael F.
2016-01-01
The exploration of space requires power for guidance, navigation, and control; instrumentation; thermal control; communications and data handling; and many subsystems and activities. Generating sufficient and reliable power in deep space through the use of solar arrays becomes even more challenging as solar intensity decreases and high radiation levels begin to degrade the performance of photovoltaic devices. The Extreme Environments Solar Power (EESP) project goal is to develop advanced photovoltaic technology to address these challenges.
NASA Astrophysics Data System (ADS)
Bauer, Sebastian; Suchaneck, Andre; Puente León, Fernando
2014-01-01
Depending on the actual battery temperature, electrical power demands in general have a varying impact on the life span of a battery. Since electrical energy provided by the battery is itself needed to temper it, the question arises of how much energy should optimally be spent on tempering at which temperature. The objective function to be optimized therefore combines two goals: maximizing life expectancy and minimizing the amount of energy used to achieve it. In this paper, Pontryagin's maximum principle is used to derive a causal control strategy from such an objective function. The derivation of the causal strategy includes the determination of the major factors that govern the optimal solution calculated with the maximum principle. The optimization is performed offline on a desktop computer for all possible vehicle parameters and major factors. For the practical implementation in the vehicle, it is sufficient to have the values of the major factors determined only roughly in advance and the offline calculation results available. This feature sidesteps the drawback of several optimization strategies that require exact knowledge of the future power demand. The resulting strategy's application is not limited to batteries in electric vehicles.
The Penn State "Cyber Wind Facility"
NASA Astrophysics Data System (ADS)
Brasseur, James; Vijayakumar, Ganesh; Lavely, Adam; Nandi, Tarak; Jayaraman, Balaji; Jha, Pankaj; Dunbar, Alex; Motta-Mena, Javier; Haupt, Sue; Craven, Brent; Campbell, Robert; Schmitz, Sven; Paterson, Eric
2012-11-01
We describe the development of, and results from, a first-generation Penn State "Cyber Wind Facility" (CWF). The aim of the CWF program is to develop and validate a computational "facility" that, in the most powerful HPC environments, will be the basis for the design and implementation of cyber "experiments" at a level of complexity, fidelity, and resolution that can be treated similarly to field experiments on wind turbines operating in true atmospheric environments. We see cyber experiments as complementary to field experiments: whereas field data can record ranges of events not representable in the cyber environment, with sufficient resolution, numerical accuracy, and HPC power it is theoretically possible to collect cyber data from true, albeit canonical, atmospheric environments and from extraordinary numbers of sensors impossible to deploy in the field. I will describe our first-generation CWF, from which we have quantified and analyzed useful details of the interactions between atmospheric turbulence and wind turbine loadings for an infinitely stiff commercial-scale turbine rotor in a canonical convective daytime atmospheric boundary layer over horizontally homogeneous rough flat terrain. Supported by the DOE Offshore Initiative and the National Science Foundation.
In Vivo Simulator for Microwave Treatment
NASA Technical Reports Server (NTRS)
Arndt, G. Dickey (Inventor); Carl, James R. (Inventor); Raffoul, George W. (Inventor); Karasack, Vincent G. (Inventor); Pacifico, Antonio (Inventor); Pieper, Carl F. (Inventor)
2001-01-01
Method and apparatus are provided for propagating microwave energy into heart tissues to produce a desired temperature profile therein at tissue depths sufficient for thermally ablating arrhythmogenic cardiac tissue to treat ventricular tachycardia and other arrhythmias while preventing excessive heating of surrounding tissues, organs, and blood. A wide-bandwidth double-disk antenna is effective for this purpose over a bandwidth of about 6 GHz. A computer simulation provides initial screening capabilities for antenna parameters such as frequency, power level, and power application duration. The simulation also allows optimization of techniques for specific patients or conditions. In operation, microwave energy between about 1 GHz and 12 GHz is applied to a monopole microwave radiator having a surface wave limiter. A test setup provides physical testing of microwave radiators to determine the temperature profile created in actual heart tissue or ersatz heart tissue. Saline solution pumped over the heart tissue with a peristaltic pump simulates blood flow. Optical temperature sensors disposed at various tissue depths within the heart tissue detect the temperature profile without creating any electromagnetic interference. The method may be used to produce a desired temperature profile in other body tissues reachable by catheter, such as tumors and the like.
Transcatheter Antenna For Microwave Treatment
NASA Technical Reports Server (NTRS)
Arndt, G. Dickey (Inventor); Carl, James R. (Inventor); Raffoul, George W. (Inventor); Karasack, Vincent G. (Inventor); Pacifico, Antonio (Inventor); Pieper, Carl F. (Inventor)
2000-01-01
Method and apparatus are provided for propagating microwave energy into heart tissues to produce a desired temperature profile therein at tissue depths sufficient for thermally ablating arrhythmogenic cardiac tissue to treat ventricular tachycardia and other arrhythmias while preventing excessive heating of surrounding tissues, organs, and blood. A wide-bandwidth double-disk antenna is effective for this purpose over a bandwidth of about 6 GHz. A computer simulation provides initial screening capabilities for antenna parameters such as frequency, power level, and power application duration. The simulation also allows optimization of techniques for specific patients or conditions. In operation, microwave energy between about 1 GHz and 12 GHz is applied to a monopole microwave radiator having a surface wave limiter. A test setup provides physical testing of microwave radiators to determine the temperature profile created in actual heart tissue or ersatz heart tissue. Saline solution pumped over the heart tissue with a peristaltic pump simulates blood flow. Optical temperature sensors disposed at various tissue depths within the heart tissue detect the temperature profile without creating any electromagnetic interference. The method may be used to produce a desired temperature profile in other body tissues reachable by catheter, such as tumors and the like.
Microwave Treatment for Cardiac Arrhythmias
NASA Technical Reports Server (NTRS)
Arndt, G. Dickey (Inventor); Carl, James R. (Inventor); Raffoul, George W. (Inventor); Pacifico, Antonio (Inventor)
1999-01-01
Method and apparatus are provided for propagating microwave energy into heart tissues to produce a desired temperature profile therein at tissue depths sufficient for thermally ablating arrhythmogenic cardiac tissue to treat ventricular tachycardia and other arrhythmias while preventing excessive heating of surrounding tissues, organs, and blood. A wide-bandwidth double-disk antenna is effective for this purpose over a bandwidth of about 6 GHz. A computer simulation provides initial screening capabilities for antenna parameters such as frequency, power level, and power application duration. The simulation also allows optimization of techniques for specific patients or conditions. In operation, microwave energy between about 1 GHz and 12 GHz is applied to a monopole microwave radiator having a surface wave limiter. A test setup provides physical testing of microwave radiators to determine the temperature profile created in actual heart tissue or ersatz heart tissue. Saline solution pumped over the heart tissue with a peristaltic pump simulates blood flow. Optical temperature sensors disposed at various tissue depths within the heart tissue detect the temperature profile without creating any electromagnetic interference. The method may be used to produce a desired temperature profile in other body tissues reachable by catheter, such as tumors and the like.
High-frequency signal and noise estimates of CSR GRACE RL04
NASA Astrophysics Data System (ADS)
Bonin, Jennifer A.; Bettadpur, Srinivas; Tapley, Byron D.
2012-12-01
A sliding window technique is used to create daily-sampled Gravity Recovery and Climate Experiment (GRACE) solutions with the same background processing as the official CSR RL04 monthly series. By estimating over shorter time spans, more frequent solutions are made using uncorrelated data, allowing for higher frequency resolution in addition to daily sampling. Using these data sets, high-frequency GRACE errors are computed using two different techniques: assuming the GRACE high-frequency signal in a quiet area of the ocean is the true error, and computing the variance of differences between multiple high-frequency GRACE series from different centers. While the signal-to-noise ratios prove to be sufficiently high for confidence at annual and lower frequencies, at frequencies above 3 cycles/year the signal-to-noise ratios in the large hydrological basins examined here are near 1.0. Comparisons with the GLDAS hydrological model and high-frequency GRACE series developed at other centers confirm CSR GRACE RL04's limited ability to accurately and reliably measure hydrological signal above 3-9 cycles/year, owing to the low power of the large-scale hydrological signal at those frequencies compared to the GRACE errors.
Computational analysis on plug-in hybrid electric motorcycle chassis
NASA Astrophysics Data System (ADS)
Teoh, S. J.; Bakar, R. A.; Gan, L. M.
2013-12-01
The plug-in hybrid electric motorcycle (PHEM) is an alternative that promotes sustainability and lower emissions. However, the overall PHEM system packaging is constrained by the limited space in a motorcycle chassis. In this paper, a chassis applying the concept of a Chopper is analysed for application in a PHEM. The chassis three-dimensional (3D) model is built with CAD software. The PHEM power-train components and drive-train mechanisms are integrated into the 3D model to ensure the chassis provides sufficient space. Besides that, a human dummy model is built into the 3D model to ensure the rider's ergonomics and comfort. The chassis 3D model then undergoes stress-strain simulation. The simulation predicts the stress distribution, displacement, and factor of safety (FOS). The data are used to identify the critical points, thus indicating whether the chassis design is applicable or needs to be redesigned/modified to meet the required strength. Critical points are locations of highest stress, which might cause the chassis to fail; for a motorcycle chassis they occur at the joints at the triple tree and the rear absorber bracket. In conclusion, the computational analysis predicts the stress distribution and provides a guideline for developing a safe prototype chassis.
Parameter identification of JONSWAP spectrum acquired by airborne LIDAR
NASA Astrophysics Data System (ADS)
Yu, Yang; Pei, Hailong; Xu, Chengzhong
2017-12-01
In this study, we developed the first linear Joint North Sea Wave Project (JONSWAP) spectrum (JS) formulation, which involves a transformation of the JS solution to the natural logarithmic scale. This transformation is convenient for defining the least squares function in terms of the scale and shape parameters. We identified these two wind-dependent parameters to better understand the wind effect on surface waves. Because of its efficiency and high resolution, we employed the airborne Light Detection and Ranging (LIDAR) system for our measurements. Due to the lack of actual data, we simulated ocean waves in the MATLAB environment, which can be easily translated into industrial programming languages. We utilized the Longuet-Higgins (LH) random-phase method to generate the time series of wave records and used the fast Fourier transform (FFT) technique to compute the power spectral density. After validating these procedures, we identified the JS parameters by minimizing the mean-square error between the target spectrum and the estimated spectrum obtained by FFT. We determined that the estimation error is related to the amount of available wave record data. Finally, we found the inverse computation of wind factors (wind speed and wind fetch length) to be robust and sufficiently precise for wave forecasting.
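The estimation chain described in this abstract is straightforward to reproduce in outline. The Python sketch below synthesizes a wave record with the Longuet-Higgins random-phase method, estimates the power spectral density with an FFT periodogram, and recovers the scale and shape parameters by linear least squares in the natural-log domain. The peak frequency, sampling choices, and "true" parameter values are assumptions for the demonstration, not values from the paper.

```python
import numpy as np

G, FP = 9.81, 0.1   # gravity (m/s^2) and an assumed peak frequency (Hz)

def jonswap(f, alpha, gamma):
    """JONSWAP spectral density for scale alpha and shape gamma."""
    sigma = np.where(f <= FP, 0.07, 0.09)
    r = np.exp(-(f - FP) ** 2 / (2.0 * sigma**2 * FP**2))
    base = G**2 * (2 * np.pi) ** -4 * f**-5.0 * np.exp(-1.25 * (FP / f) ** 4)
    return alpha * base * gamma**r

rng = np.random.default_rng(0)
f = np.linspace(0.05, 0.5, 800)
amp = np.sqrt(2.0 * jonswap(f, alpha=0.012, gamma=3.3) * (f[1] - f[0]))
t = np.arange(1024)                      # 1 s sampling interval
n = t.size
freq = np.fft.rfftfreq(n, d=1.0)

# Longuet-Higgins random-phase synthesis; averaging several independent
# records tames periodogram noise (more data -> smaller estimation error).
psd, reps = np.zeros(n // 2 + 1), 50
for _ in range(reps):
    eta = (amp * np.cos(2*np.pi*np.outer(t, f)
                        + rng.uniform(0, 2*np.pi, f.size))).sum(axis=1)
    psd += 2.0 / n * np.abs(np.fft.rfft(eta)) ** 2   # one-sided periodogram
psd /= reps

# Least squares in the log domain: ln S = ln(alpha) + ln(base) + r*ln(gamma).
m = (freq > 0.07) & (freq < 0.4) & (psd > 0)
fm = freq[m]
sig = np.where(fm <= FP, 0.07, 0.09)
r = np.exp(-(fm - FP) ** 2 / (2.0 * sig**2 * FP**2))
base = G**2 * (2*np.pi)**-4 * fm**-5.0 * np.exp(-1.25 * (FP/fm)**4)
A = np.column_stack([np.ones_like(fm), r])
coef, *_ = np.linalg.lstsq(A, np.log(psd[m]) - np.log(base), rcond=None)
print(np.exp(coef))   # roughly recovers (0.012, 3.3), up to spectral noise
```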
NASA Astrophysics Data System (ADS)
Thylwe, Karl-Erik; McCabe, Patrick
2012-04-01
The classical amplitude-phase method due to Milne, Wilson, Young, and Wheeler in the 1930s is known to be a powerful computational tool for determining phase shifts and energy eigenvalues in cases where a sufficiently slowly varying amplitude function can be found. The key to the efficient computations is that the original single-state radial Schrödinger equation is transformed into a nonlinear equation, the Milne equation. Such an equation has solutions that may or may not oscillate, depending on boundary conditions, so a robust recipe is required for locating the (optimal) ‘almost constant’ solutions for use in the method. For scattering problems the solutions of the amplitude equations always approach constants as the radial distance r tends to infinity, and there is no problem locating the ‘optimal’ amplitude functions from a low-order semiclassical approximation. In the present work, the amplitude-phase approach is generalized to two coupled Schrödinger equations, similar to an earlier generalization to radial Dirac equations. The original scalar amplitude then becomes a vector quantity, and the original Milne equation is generalized accordingly. Numerical applications to resonant electron-atom scattering are illustrated.
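For the single-channel case recalled in the opening sentences, the Milne equation is easy to integrate numerically. The Python/SciPy sketch below solves rho'' + Q(r) rho = rho^(-3) together with the accumulated phase integral phi' = rho^(-2), starting from the semiclassical amplitude Q^(-1/4) in the asymptotic region, where the ‘almost constant’ solution is easy to locate. The potential and energy are assumptions chosen for the demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

E = 1.0                                # scattering energy (2m = hbar = 1)
V = lambda r: -2.0 * np.exp(-r)        # illustrative short-range potential
Q = lambda r: E - V(r)                 # local wavenumber squared, l = 0

def milne(r, y):
    """Milne equation rho'' + Q(r)*rho = rho**-3, with the phase
    phi' = rho**-2 integrated alongside."""
    rho, drho, phi = y
    return [drho, rho**-3 - Q(r) * rho, rho**-2]

# Start in the asymptotic region, where Q is essentially constant and the
# semiclassical 'almost constant' amplitude rho = Q**-0.25 applies (rho' ~ 0),
# then integrate inward toward the origin.
r0, r1 = 20.0, 0.05
sol = solve_ivp(milne, (r0, r1), [Q(r0) ** -0.25, 0.0, 0.0],
                rtol=1e-10, atol=1e-12)
rho, _, phi = sol.y[:, -1]
print(f"amplitude near the origin: {rho:.4f}; phase accumulated: {abs(phi):.4f}")
```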
Ion Clouds in the Inductively Coupled Plasma Torch: A Closer Look through Computations.
Aghaei, Maryam; Lindner, Helmut; Bogaerts, Annemie
2016-08-16
We have computationally investigated the introduction of copper elemental particles into an inductively coupled plasma torch connected to a sampling cone, including for the first time the ionization of the sample. The sample is inserted as liquid particles, which are followed throughout the entire torch, i.e., from the injector inlet up to ionization and arrival at the sampler. The spatial position of the ion clouds inside the torch, as well as detailed information on the copper species fluxes at the sampler orifice and the exhausts of the torch, is provided. The effect of on- and off-axis injection is studied. We clearly show that the ion clouds of on-axis injected material are located closer to the sampler, with less radial diffusion. This guarantees a higher transport efficiency through the sampler cone. Moreover, our model reveals the optimum ranges of applied power and flow rates, which ensure the proper position of the ion clouds inside the torch, i.e., close enough to the sampler to increase the fraction that can enter the mass spectrometer, with minimum loss of material toward the exhausts, and with a sufficiently high plasma temperature for efficient ionization.
The tensor distribution function.
Leow, A D; Zhu, S; Zhan, L; McMahon, K; de Zubicaray, G I; Meredith, M; Wright, M J; Toga, A W; Thompson, P M
2009-01-01
Diffusion-weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional diffusion tensor imaging (DTI) is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular-resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.
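The six-direction statement above corresponds to a small linear-algebra problem. The synthetic Python sketch below fits the six unique entries of a second-order diffusion tensor by linear least squares from six gradient directions; the b-value, direction set, and ground-truth tensor are assumptions for the demonstration, and the sketch illustrates conventional DTI, not the TDF machinery introduced in the article.

```python
import numpy as np

# Six gradient directions -- the minimum noted in the abstract.
g = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [1, 0, 1], [0, 1, 1]], float)
g /= np.linalg.norm(g, axis=1, keepdims=True)
b = 1000.0                                   # s/mm^2, a typical b-value

# Ground-truth tensor for the synthetic example (units mm^2/s).
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
S0 = 1.0
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))

# Linear least squares for the six unique tensor entries:
# -ln(S/S0)/b = g^T D g, which is linear in the entries of D.
G = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                     2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])
d, *_ = np.linalg.lstsq(G, -np.log(S / S0) / b, rcond=None)
D = np.array([[d[0], d[3], d[4]],
              [d[3], d[1], d[5]],
              [d[4], d[5], d[2]]])
print(np.allclose(D, D_true))   # True: the noiseless tensor is recovered
```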
Schmidt, James R; De Houwer, Jan; Rothermund, Klaus
2016-12-01
The current paper presents an extension of the Parallel Episodic Processing model. The model is developed for simulating behaviour in performance (i.e., speeded response time) tasks and learns to anticipate both how and when to respond based on retrieval of memories of previous trials. With one fixed parameter set, the model is shown to successfully simulate a wide range of different findings. These include: practice curves in the Stroop paradigm, contingency learning effects, learning acquisition curves, stimulus-response binding effects, mixing costs, and various findings from the attentional control domain. The results demonstrate several important points. First, the same retrieval mechanism parsimoniously explains stimulus-response binding, contingency learning, and practice effects. Second, as performance improves with practice, any effects will shrink with it. Third, a model of simple learning processes is sufficient to explain phenomena that are typically (but perhaps incorrectly) interpreted in terms of higher-order control processes. More generally, we argue that computational models with a fixed parameter set and wider breadth should be preferred over those that are restricted to a narrow set of phenomena.
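As a loose illustration of the retrieval idea, the toy Python model below stores one episode per trial and lets matching episodes speed up responding, so simulated response times shrink with practice. It is a drastic reduction for intuition only, not the published model or its parameter set.

```python
# Toy instance-based retrieval: each trial stores a (stimulus, response)
# episode; on a new trial, matching episodes 'vote' and stronger support
# yields a faster simulated response time. Illustration only.
memory = []

def trial(stimulus, response):
    support = sum(1 for s, r in memory if s == stimulus and r == response)
    rt = 300.0 + 400.0 / (1.0 + support)   # arbitrary RT scaling (ms)
    memory.append((stimulus, response))    # encode the new episode
    return rt

rts = [trial("GREEN word", "green key") for _ in range(20)]
print([int(r) for r in rts[:6]])  # practice curve: RTs decrease and level off
```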
Multi-Channel RF System for MRI-Guided Transurethral Ultrasound Thermal Therapy
NASA Astrophysics Data System (ADS)
Yak, Nicolas; Asselin, Matthew; Chopra, Rajiv; Bronskill, Michael
2009-04-01
MRI-guided transurethral ultrasound thermal therapy is an approach to treating localized prostate cancer which targets precise deposition of thermal energy within a confined region of the gland. This treatment requires a system incorporating a heating applicator with multiple planar ultrasound transducers and associated RF electronics that control individual elements independently in order to achieve accurate 3D treatment. We report the design, construction, and characterization of a prototype multi-channel system capable of controlling 16 independent RF signals for a 16-element heating applicator. The main components are a control computer, a microcontroller, and a 16-channel signal generator with 16 amplifiers, each incorporating a low-pass filter and a transmitted/reflected power detection circuit. Each channel can deliver 0.5 to 10 W of electrical power with good linearity from 3 to 12 MHz. Harmonic RF signals near the Larmor frequency of a 1.5 T MRI were measured to be below -30 dBm, and heating experiments within the 1.5 T MR system showed no significant decrease in the SNR of the temperature images. The frequency and power for all 16 channels could be changed in less than 250 ms, which was sufficiently rapid for proper performance of the control algorithms. A common backplane design was chosen, enabling an inexpensive, modular approach for each channel and resulting in an overall system with a minimal footprint.
Predicting the impact of chromium on flow-accelerated corrosion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chexal, B.; Goyette, L.F.; Horowitz, J.S.
1996-12-01
Flow-accelerated corrosion (FAC) continues to cause problems in nuclear and fossil power plants. Many experiments have been performed to understand the mechanism of FAC. For approximately twenty years, it has been widely recognized that the presence of small amounts of chromium will reduce the rate of FAC. This effect was quantified in the eighties by research performed in France, Germany, and the Netherlands. The results of this research have been incorporated into the computer-based tools used by utility engineers to deal with this issue. For some time, plant data from Diablo Canyon has suggested that the existing correlations relating the concentration of chromium to the rate of FAC are conservative. Laboratory examinations have supported this observation. It appears that the existing correlations fail to capture a change in mechanism from a FAC process with linear kinetics to a general corrosion process with parabolic kinetics. This change in mechanism occurs at a chromium level of approximately 0.1%, within the allowable alloy range of typical carbon steel (ASTM/ASME A106 Grade B) used in power piping in most domestic plants. It has been difficult to obtain plant data with sufficient chromium to develop a new correlation. Data from Diablo Canyon and the Dukovany Power Plant in the Czech Republic will be used to develop a new chromium correlation for predicting FAC rate.
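The mechanism change described above can be summarized as a two-branch model. The Python sketch below switches from linear FAC kinetics to parabolic oxide-growth kinetics at the ~0.1 wt% chromium threshold mentioned in the abstract; the rate coefficients are placeholders for illustration, not the plant-derived correlation.

```python
import numpy as np

# Illustrative-only rendering of the reported mechanism change: below
# ~0.1 wt% Cr the wall loss grows linearly in time (FAC-like kinetics);
# above it the oxide becomes protective and growth is parabolic.
# The coefficients k_lin and k_par are placeholders, not plant data.
def wall_loss(t_hours, cr_wt_pct, k_lin=1e-3, k_par=5e-3):
    if cr_wt_pct < 0.1:
        return k_lin * t_hours            # linear FAC kinetics
    return k_par * np.sqrt(t_hours)       # parabolic general corrosion

for cr in (0.02, 0.15):
    print(f"Cr {cr:.2f} wt%: {wall_loss(50000, cr):.2f} (illustrative units)")
```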
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and node/interaction complexity increase. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with MATLAB toolboxes. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirements in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus improving the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
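For a single node dynamic, the kind of LMI feasibility problem referred to above looks as follows. This Python/cvxpy sketch searches for a quadratic Lyapunov certificate P > 0 with A^T P + P A < 0 for an example stable matrix A; the paper's actual conditions couple many such blocks across the network, so this is only the elementary building block, stated under assumed data.

```python
import cvxpy as cp
import numpy as np

# Example node dynamics matrix (assumed, stable) for the LMI sketch.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
n = A.shape[0]

# Find P = P^T > 0 such that A^T P + P A < 0 (Lyapunov LMI).
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, np.round(P.value, 3))   # 'optimal' and a valid certificate
```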
Provably Secure Password-based Authentication in TLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdalla, Michel; Emmanuel, Bresson; Chevassut, Olivier
2005-12-20
In this paper, we show how to design an efficient, provably secure password-based authenticated key exchange mechanism specifically for the TLS (Transport Layer Security) protocol. The goal is to provide a technique that allows users to employ (short) passwords to securely identify themselves to servers. As our main contribution, we describe a new password-based technique for user authentication in TLS, called Simple Open Key Exchange (SOKE). Loosely speaking, the SOKE ciphersuites are unauthenticated Diffie-Hellman ciphersuites in which the client's Diffie-Hellman ephemeral public value is encrypted using a simple mask generation function. The mask is simply a constant value raised to the power of (a hash of) the password. The SOKE ciphersuites improve on previous password-based authentication ciphersuites for TLS by combining the following features. First, SOKE has formal security arguments; the proof of security, based on the computational Diffie-Hellman assumption, is in the random oracle model, and holds for concurrent executions and for arbitrarily large password dictionaries. Second, SOKE is computationally efficient; in particular, it only needs operations in a sufficiently large prime-order subgroup for its Diffie-Hellman computations (no safe primes). Third, SOKE provides good protocol flexibility because the user identity and password are only required once a SOKE ciphersuite has actually been negotiated, and after the server has sent a server identity.
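The masking idea is compact enough to render in a few lines. The toy Python sketch below masks the client's Diffie-Hellman ephemeral value with a public constant raised to a hash of the password, and shows the server recovering it before the usual shared-secret computation. The group parameters are toy-sized and the rest of the protocol (identities, key confirmation, TLS framing) is omitted, so this is strictly illustrative.

```python
import hashlib
import secrets

# Toy rendering of the SOKE masking step. The prime below (2**64 - 59) is
# NOT cryptographically sized; g and U are illustrative public constants.
p = 2**64 - 59
g, U = 5, 7

def H(pw: str) -> int:
    """Hash of the password, reduced to an exponent (illustrative)."""
    return int.from_bytes(hashlib.sha256(pw.encode()).digest(), "big") % (p - 1)

pw = "correct horse"

# Client: ephemeral Diffie-Hellman value, masked by U**H(pw).
x = secrets.randbelow(p - 2) + 1
X_star = pow(g, x, p) * pow(U, H(pw), p) % p

# Server (knowing pw) strips the mask; negative exponents take the modular
# inverse (Python 3.8+), valid here since gcd(U, p) = 1.
X = X_star * pow(U, -H(pw), p) % p
y = secrets.randbelow(p - 2) + 1

# Both sides now derive the same Diffie-Hellman secret g**(x*y) mod p.
print(pow(X, y, p) == pow(pow(g, y, p), x, p))   # True
```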
Reliability and maintainability assessment factors for reliable fault-tolerant systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1984-01-01
A long-term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10-year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. Discussed are the numerous factors that potentially degrade system reliability, and the ways in which those factors peculiar to highly reliable fault-tolerant systems are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.
Exploiting Identical Generators in Unit Commitment
Knueven, Ben; Ostrowski, Jim; Watson, Jean -Paul
2017-12-14
Here, we present sufficient conditions under which thermal generators can be aggregated in mixed-integer linear programming (MILP) formulations of the unit commitment (UC) problem while maintaining feasibility and optimality for the original disaggregated problem. Aggregating thermal generators with identical characteristics (e.g., minimum/maximum power output, minimum up/down-time, and cost curves) into a single unit reduces redundancy in the search space induced by both exact symmetry (permutations of generator schedules) and certain classes of mutually non-dominated solutions. We study the impact of aggregation on two large-scale UC instances, one from the academic literature and another based on real-world operator data. Our computational tests demonstrate that, when present, identical generators can negatively affect the performance of modern MILP solvers on UC formulations. Further, we show that our reformulation of the UC MILP through aggregation is an effective method for mitigating this source of computational difficulty.
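The precondition for aggregation is identity of the characteristics named above. A minimal Python sketch of the grouping step follows, with illustrative field names and data; a real UC formulation would then replace each group by a single unit with appropriately scaled capacity and commitment bounds.

```python
from collections import defaultdict

# Illustrative generator data; a real UC model also carries cost curves,
# ramp rates, start-up costs, etc., all of which must match to aggregate.
generators = [
    {"name": "G1", "pmin": 50, "pmax": 200, "min_up": 4, "min_down": 4},
    {"name": "G2", "pmin": 50, "pmax": 200, "min_up": 4, "min_down": 4},
    {"name": "G3", "pmin": 20, "pmax": 80,  "min_up": 2, "min_down": 2},
]

# Group generators whose non-name characteristics are identical.
groups = defaultdict(list)
for gen in generators:
    key = tuple(sorted((k, v) for k, v in gen.items() if k != "name"))
    groups[key].append(gen["name"])

for names in groups.values():
    print(len(names), "unit(s):", names)   # G1 and G2 collapse into one group
```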
Characterizing Conformational Dynamics of Proteins Using Evolutionary Couplings.
Feng, Jiangyan; Shukla, Diwakar
2018-01-25
Understanding protein conformational dynamics is essential for elucidating the molecular origins of the protein structure-function relationship. Traditionally, reaction coordinates, i.e., functions of protein atom positions and velocities, have been used to interpret the complex dynamics of proteins obtained from experimental and computational approaches such as molecular dynamics simulations. However, it is nontrivial to identify reaction coordinates a priori, even for small proteins. Here, we evaluate the power of evolutionary couplings (ECs) to capture protein dynamics by exploring their use as reaction coordinates, which can efficiently guide the sampling of a conformational free energy landscape. We have analyzed 10 diverse proteins and shown that a few ECs are sufficient to characterize the complex conformational dynamics of proteins involved in folding and conformational change processes. With the rapid strides in sequencing technology, we expect that ECs could help identify reaction coordinates a priori and enhance the sampling of the slow dynamical processes associated with protein folding and conformational change.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babendreier, Justin E.; Castleton, Karl J.
2005-08-01
Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a comparative approach using several techniques, coupled with sufficient computational power. The Framework for Risk Analysis in Multimedia Environmental Systems - Multimedia, Multipathway, and Multireceptor Risk Assessment (FRAMES-3MRA) is an important software model being developed by the United States Environmental Protection Agency for use in risk assessment of hazardous waste management facilities. The 3MRA modeling system includes a set of 17 science modules that collectively simulate release, fate and transport, exposure, and risk associated with hazardous contaminants disposed of in land-based waste management units (WMUs).
Automated medication reconciliation and complexity of care transitions.
Silva, Pamela A Bozzo; Bernstam, Elmer V; Markowitz, Eliz; Johnson, Todd R; Zhang, Jiajie; Herskovic, Jorge R
2011-01-01
Medication reconciliation is a National Patient Safety Goal (NPSG) from The Joint Commission (TJC) that entails reviewing all medications a patient takes after a health care transition. Medication reconciliation is a resource-intensive, error-prone task, and the resources to accomplish it may not be routinely available. Computer-based methods have the potential to overcome these barriers. We designed and explored a rule-based medication reconciliation algorithm to accomplish this task across different healthcare transitions. We tested our algorithm on a random sample of 94 transitions from the Clinical Data Warehouse at the University of Texas Health Science Center at Houston. We found that the algorithm reconciled, on average, 23.4% of the potentially reconcilable medications. Our study did not have sufficient statistical power to establish whether the kind of transition affects reconcilability. We conclude that automated reconciliation is possible and will help accomplish the NPSG.
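A rule-based matcher of the general kind studied here can be sketched in a few lines of Python. The normalization rule below (keep the first token as the active ingredient) is a deliberately crude stand-in for the authors' rule set, which is not reproduced in the abstract.

```python
# Toy rule-based medication reconciliation across a care transition:
# normalise drug strings, then compare the before/after ingredient sets.
def normalize(med: str) -> str:
    """Crude normalization rule: lowercase and keep the ingredient token."""
    return med.lower().split()[0]

def reconcile(before: list[str], after: list[str]) -> dict:
    b = {normalize(m) for m in before}
    a = {normalize(m) for m in after}
    return {"continued": sorted(b & a),
            "stopped":   sorted(b - a),
            "started":   sorted(a - b)}

print(reconcile(["Metformin 500mg PO", "Lisinopril 10mg"],
                ["metformin 1000mg PO", "Atorvastatin 20mg"]))
```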
Intercalated water layers promote thermal dissipation at bio-nano interfaces.
Wang, Yanlei; Qin, Zhao; Buehler, Markus J; Xu, Zhiping
2016-09-23
The increasing interest in developing nanodevices for biophysical and biomedical applications raises concerns about thermal management at interfaces between tissues and electronic devices. However, there is neither sufficient knowledge nor suitable tools for characterizing thermal properties at interfaces between materials of contrasting mechanics, which are essential for reliable design. Here we use computational simulations to quantify thermal transfer across the cell membrane-graphene interface. We find that the intercalated water displays a layered order below a critical nanoconfinement of ∼1 nm, mediating the interfacial thermal coupling and efficiently enhancing the thermal dissipation. We thereafter develop an analytical model to evaluate the critical value of power generation in graphene before significant heat accumulates and disturbs living tissues. These findings may provide a basis for the rational design of wearable and implantable nanodevices in biosensing and thermotherapeutic treatments where thermal dissipation and transport processes are crucial.
Prediction of binary nanoparticle superlattices from soft potentials
Horst, Nathan; Travesset, Alex
2016-01-07
Driven by the hypothesis that a sufficiently continuous short-ranged potential is able to account for shell flexibility and phonon modes and therefore provides a more realistic description of nanoparticle interactions than a hard sphere model, we compute the solid phase diagram of particles of different radii interacting with an inverse power law potential. From a pool of 24 candidate lattices, the free energy is optimized with respect to additional internal parameters, and the p-exponent, determining the short-range properties of the potential, is varied between p = 12 and p = 6. The phase diagrams contain the phases found in ongoing self-assembly experiments, including DNA programmable self-assembly and nanoparticles with capping ligands assembled by evaporation from an organic solvent. Thus, the resulting phase diagrams can be mapped quantitatively to existing experiments as a function of only two parameters: nanoparticle radius ratio (γ) and softness asymmetry.
Numerical simulations of unsteady transonic flow in diffusers
NASA Technical Reports Server (NTRS)
Liou, M.-S.; Coakley, T. J.
1982-01-01
Forced and naturally occurring, self-sustaining oscillations of transonic flows in two-dimensional diffusers were computed using MacCormack's hybrid method. Depending upon the shock strengths and the area ratios, the flow was fully attached or separated by either the shock or the adverse pressure gradient associated with the enlarging diffuser area. In the case of forced oscillations, a sinusoidal plane pressure wave at a frequency of 300 Hz was prescribed at the exit. A sufficiently large amount of data was acquired and Fourier analyzed. The distributions of time-mean pressures, the power spectral density, and the amplitude with phase angle along the top wall and in the core region were determined. Comparison with experimental results for the forced oscillation generally gave very good agreement; some success was achieved for the case of self-sustaining oscillation despite substantial three-dimensionality in the test.
Prediction of Binary Nanoparticle Superlattices from Soft Potentials
NASA Astrophysics Data System (ADS)
Horst, Nathan; Travesset, Alex
Driven by the hypothesis that a sufficiently continuous short-ranged potential is able to account for shell flexibility and phonon modes and therefore provides a more realistic description of nanoparticle interactions than a hard sphere model, we compute the solid phase diagram of particles of different radii interacting with an inverse power law potential. We explore 24 candidate lattices where the p-exponent, determining the short-range properties of the potential, is varied between p=12 and p=6, and optimize the free energy with respect to additional internal parameters. The phase diagrams contain the phases found in ongoing self-assembly experiments, including DNA programmable self-assembly and nanoparticles with capping ligands assembled by evaporation from an organic solvent. The resulting phase diagrams can be mapped quantitatively to existing experiments as a function of only two parameters: nanoparticle radius ratio (γ) and softness asymmetry (SA). Supported by DOE under Contract Number DE-AC02-07CH11358.
Spiricheva, T V; Vrezhesinskaia, O A; Beketova, N A; Pereverzeva, O G; Kosheleva, O V; Kharitonchik, L A; Kodentsova, V M; Iudina, A V; Spirichev, V B
2010-01-01
The influence of vitamin complexes, consumed as a drink or kissel, on the vitamin status of working persons has been studied. Long-term inclusion (6.5 months) in the diet of vitamin drinks containing about 80% of the recommended daily intake of vitamins was accompanied by a significant improvement in vitamin C and B6 status and prevented the seasonal deterioration of beta-carotene status. As the subjects were initially well supplied with vitamins A and E, no increase in the blood serum levels of these vitamins occurred.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., Ro-Ro operations, and § 1918.25). 9 [Reserved] (a) Traffic control system. An organized system of... simultaneous use of the ramp by vehicles and pedestrians. (d) Ramp maintenance. Ramps shall be properly...: (1) Sufficient power to ascend ramp inclines safely; and (2) Sufficient braking capacity to descend...
Autonomous mobile robot research using the HERMIES-III robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.; Beckerman, M.; Spelt, P.F.
1989-01-01
This paper reports on the status and future directions of the research, development, and experimental validation of intelligent control techniques for autonomous mobile robots using the HERMIES-III robot at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory (ORNL). HERMIES-III is the fourth robot in a series of increasingly sophisticated and capable experimental test beds developed at CESAR. HERMIES-III comprises a battery-powered, omni-directional wheeled platform with a seven-degree-of-freedom manipulator arm, video cameras, sonar range sensors, a laser imaging scanner, and a dual computer system containing up to 128 NCUBE nodes in a hypercube configuration. All electronics, sensors, computers, and communication equipment required for autonomous operation of HERMIES-III are located on board, along with sufficient battery power for three to four hours of operation. The paper first provides a more detailed description of the HERMIES-III characteristics, focusing on the new areas of research and demonstration now possible at CESAR with this new test bed. The initial experimental program is then described, with emphasis placed on autonomous performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The paper concludes with a discussion of the integration problems and safety considerations that necessarily arise in setting up an experimental program involving human-scale tasks performed by multiple autonomous mobile robots. 10 refs., 3 figs.
18 CFR 33.5 - Proposed accounting entries.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Proposed accounting entries. 33.5 Section 33.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... present proposed accounting entries showing the effect of the transaction with sufficient detail to...
18 CFR 33.5 - Proposed accounting entries.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Proposed accounting entries. 33.5 Section 33.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... present proposed accounting entries showing the effect of the transaction with sufficient detail to...
18 CFR 33.5 - Proposed accounting entries.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Proposed accounting entries. 33.5 Section 33.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... present proposed accounting entries showing the effect of the transaction with sufficient detail to...
18 CFR 33.5 - Proposed accounting entries.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Proposed accounting entries. 33.5 Section 33.5 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION... present proposed accounting entries showing the effect of the transaction with sufficient detail to...
76 FR 42567 - Reporting Requirements for U.S. Providers of International Telecommunications Services
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
... or transfers, unjust enrichment issues are implicated. 25. Wireless Telecommunications Carriers... they: (1) Have sufficient market power at the foreign end of an international route to affect... concerns that overseas incumbent or monopoly telecommunications providers might use their market power to...
14 CFR 29.1357 - Circuit protective devices.
Code of Federal Regulations, 2013 CFR
2013-01-01
... devices in the generating system must be designed to de-energize and disconnect faulty power sources and power transmission equipment from their associated buses with sufficient rapidity to provide protection... be designed so that, when an overload or circuit fault exists, it will open the circuit regardless of...
14 CFR 29.1357 - Circuit protective devices.
Code of Federal Regulations, 2014 CFR
2014-01-01
... devices in the generating system must be designed to de-energize and disconnect faulty power sources and power transmission equipment from their associated buses with sufficient rapidity to provide protection... be designed so that, when an overload or circuit fault exists, it will open the circuit regardless of...
14 CFR 29.1357 - Circuit protective devices.
Code of Federal Regulations, 2011 CFR
2011-01-01
... devices in the generating system must be designed to de-energize and disconnect faulty power sources and power transmission equipment from their associated buses with sufficient rapidity to provide protection... be designed so that, when an overload or circuit fault exists, it will open the circuit regardless of...
14 CFR 29.1357 - Circuit protective devices.
Code of Federal Regulations, 2012 CFR
2012-01-01
... devices in the generating system must be designed to de-energize and disconnect faulty power sources and power transmission equipment from their associated buses with sufficient rapidity to provide protection... be designed so that, when an overload or circuit fault exists, it will open the circuit regardless of...
Solar powered actuator with continuously variable auxiliary power control
NASA Technical Reports Server (NTRS)
Nola, F. J. (Inventor)
1984-01-01
A solar powered system is disclosed in which a load such as a compressor is driven by a main induction motor powered by a solar array. An auxiliary motor, which shares the load with the solar powered motor in proportion to the amount of sunlight available, is provided with a power factor controller for controlling the voltage applied to the auxiliary motor in accordance with the loading on that motor. In one embodiment, when sufficient power is available from the solar array, the auxiliary motor is driven as a generator by excess power from the main motor so as to return electrical energy to the power company utility lines.
Optoelectronic Devices Based on Novel Semiconductor Structures
2006-06-14
superlattices 4. TEM study and band-filling effects in quantum-well dots 5. Improvements on tuning ranges and output powers for widely-tunable THz sources...the pump power increases the relative strength for the QW emission in the QWD sample also increases. Eventually at the sufficiently high pump power...Ahopelto, Appl. Phys. Lett. 66, 2364 (1995). 5. A monochromatic and high-power THz source tunable in the ranges of 2.7-38.4 μm and 58.2-3540 μm for
Active Computer Network Defense: An Assessment
2001-04-01
sufficient base of knowledge in information technology can be assumed to be working on some form of computer network warfare, even if only defensive in...the Defense Information Infrastructure (DII) to attack. Transmission Control Protocol/Internet Protocol (TCP/IP) networks are inherently resistant to...aims to create this part of information superiority, and computer network defense is one of its fundamental components. Most of these efforts center
NASA Technical Reports Server (NTRS)
1980-01-01
The requirements implementation strategy for first-level development of the Integrated Programs for Aerospace Vehicle Design (IPAD) computing system is presented. The capabilities of first-level IPAD are sufficient to demonstrate management of engineering data on two computers (CDC CYBER 170/720 and DEC VAX 11/780) using the IPAD system in a distributed network environment.
Self-Powered Wireless Carbohydrate/Oxygen Sensitive Biodevice Based on Radio Signal Transmission
Falk, Magnus; Alcalde, Miguel; Bartlett, Philip N.; De Lacey, Antonio L.; Gorton, Lo; Gutierrez-Sanchez, Cristina; Haddad, Raoudha; Kilburn, Jeremy; Leech, Dónal; Ludwig, Roland; Magner, Edmond; Mate, Diana M.; Conghaile, Peter Ó.; Ortiz, Roberto; Pita, Marcos; Pöller, Sascha; Ruzgas, Tautgirdas; Salaj-Kosla, Urszula; Schuhmann, Wolfgang; Sebelius, Fredrik; Shao, Minling; Stoica, Leonard; Sygmund, Cristoph; Tilly, Jonas; Toscano, Miguel D.; Vivekananthan, Jeevanthi; Wright, Emma; Shleev, Sergey
2014-01-01
Here, for the first time, we detail self-contained (wireless and self-powered) biodevices with wireless signal transmission. Specifically, we demonstrate the operation of self-sustained carbohydrate and oxygen sensitive biodevices, consisting of a wireless electronic unit, radio transmitter, and separate sensing bioelectrodes, supplied with electrical energy from a combined multi-enzyme fuel cell generating sufficient current at the required voltage to power the electronics. A carbohydrate/oxygen enzymatic fuel cell was assembled by comparing the performance of a range of different bioelectrodes, followed by selection of the most suitable, stable combination. Carbohydrates (viz. lactose for the demonstration) and oxygen were also chosen as bioanalytes, being important biomarkers, to demonstrate the operation of the self-contained biosensing device, employing enzyme-modified bioelectrodes to enable the actual sensing. A wireless electronic unit, consisting of a micropotentiostat, an energy harvesting module (voltage amplifier together with a capacitor), and a radio microchip, was designed to enable the biofuel cell to be used as a power supply for managing the sensing devices and for wireless data transmission. The electronic system required a current and voltage greater than 44 µA and 0.57 V, respectively, to operate, which the biofuel cell was capable of providing when placed in a carbohydrate- and oxygen-containing buffer. In addition, a USB-based receiver and computer software were employed for proof-of-concept tests of the developed biodevices. Operation of bench-top prototypes was demonstrated in buffers containing different concentrations of the analytes, showing that the variation in response of both carbohydrate and oxygen biosensors could be monitored wirelessly in real time as analyte concentrations in the buffers were changed, using only an enzymatic fuel cell as a power supply. PMID:25310190
Lossy Wavefield Compression for Full-Waveform Inversion
NASA Astrophysics Data System (ADS)
Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.
2015-12-01
We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool to solve tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements caused by the opposite directions of forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to check-pointing, our approach has only a negligible computational overhead, utilizing the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. Furthermore, we use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean prove the high potential of this approach with an effective compression factor of 500-1000. Furthermore, it is computationally cheap and easy to integrate in both, finite-differences and finite-element wave propagation codes.
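Two of the ingredients named above, temporal re-interpolation and re-quantization of residuals with reduced floating-point accuracy, can be illustrated compactly. The Python sketch below compresses a stand-in wavefield trace by a factor of 10 in time and stores a residual correction with a 6-bit mantissa; the subsampling factor and bit width are arbitrary choices for the demonstration, not the adaptive scheme described in the talk.

```python
import numpy as np

def quantize(x, bits):
    """Round the floating-point mantissa of x to `bits` bits, keeping the
    exponent: a simple reduced-accuracy re-quantization."""
    scale = 2.0 ** bits
    m, e = np.frexp(x)
    return np.ldexp(np.round(m * scale) / scale, e)

t = np.linspace(0.0, 1.0, 1001)
wavefield = np.sin(40 * t) * np.exp(-2 * t)      # stand-in forward trace

coarse = wavefield[::10]                          # temporal compression x10
recon = np.interp(t, t[::10], coarse)             # re-interpolation in time
residual = quantize(wavefield - recon, bits=6)    # cheap residual correction

err = np.max(np.abs(wavefield - (recon + residual)))
print(f"max error after compression: {err:.2e}")
```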
HIGH-FIDELITY SIMULATION-DRIVEN MODEL DEVELOPMENT FOR COARSE-GRAINED COMPUTATIONAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.
Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide high-fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for simulation of long transient scenarios in nuclear accidents, despite extraordinary advances in high-performance scientific computing over the past decades. The major issue is the inability to parallelize the transient computation, which makes the number of time steps required by high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high-fidelity simulation-driven approach to model sub-grid scale (SGS) effects in Coarse-Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as to containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation achieves a significant improvement in the prediction of the steady-state temperature distribution through the fluid layer.
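The surrogate idea, learning a statistical correction to the coarse-grid solution from high-fidelity reference data, might be sketched as a simple regression. The features and the synthetic "truth" below are placeholders for illustration only, not the actual CG-CFD correction model.

```python
import numpy as np

# Hedged sketch: fit a statistical surrogate that maps coarse-grid features
# to a correction term, as high-fidelity data would supply in practice.

rng = np.random.default_rng(1)
n = 500
grad_T = rng.standard_normal(n)        # coarse-grid temperature gradient (synthetic)
cell_size = rng.uniform(0.5, 2.0, n)   # local grid spacing (synthetic)
# "high-fidelity" correction the surrogate should reproduce (synthetic truth)
correction = 0.3 * grad_T * cell_size + 0.05 * rng.standard_normal(n)

# linear surrogate: correction ~ a*grad_T + b*cell_size + c*grad_T*cell_size + d
X = np.column_stack([grad_T, cell_size, grad_T * cell_size, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, correction, rcond=None)

predicted = X @ coef
print("RMS residual:", np.sqrt(np.mean((correction - predicted) ** 2)))
```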
Gross, Markus; Magar, Vanesa
2016-01-01
In previous work, the authors demonstrated how data from climate simulations can be utilized to estimate regional wind power densities. In particular, it was shown that the quality of wind power densities estimated from the UPSCALE global dataset in offshore regions of Mexico compared well with regional high-resolution studies. Additionally, a link between surface temperature and moist air density in the estimates was presented. UPSCALE is an acronym for UK on PRACE (the Partnership for Advanced Computing in Europe)—weather-resolving Simulations of Climate for globAL Environmental risk. The UPSCALE experiment was performed in 2012 by NCAS (National Centre for Atmospheric Science)-Climate, at the University of Reading and the UK Met Office Hadley Centre. The study included a 25.6-year, five-member ensemble simulation of the HadGEM3 global atmosphere, at 25 km resolution, for present climate conditions. The initial conditions for the ensemble runs were taken from consecutive days of a test configuration. In the present paper, the emphasis is placed on the single climate run for a potential future climate scenario in the UPSCALE experiment dataset, using the Representative Concentration Pathway (RCP) 8.5 climate change scenario. Firstly, some tests were performed to ensure that the results using only one instantiation of the current climate dataset are as robust as possible within the constraints of the available data. To achieve this, an artificial time series over a longer sampling period was created. It was then shown that these longer time series provided almost the same results as the short ones, leading to the argument that the short time series is sufficient to capture the climate. Finally, with the confidence that one instantiation is sufficient, the future climate dataset was analysed to provide, for the first time, a projection of future changes in wind power resources using the UPSCALE dataset. It is hoped that this, in turn, will provide some guidance for wind power developers and policy makers to prepare and adapt for climate change impacts on wind energy production. Although offshore locations around Mexico were used as a case study, the dataset is global and hence the methodology presented can be readily applied at any desired location. PMID:27788208
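The basic quantity behind these estimates, wind power density with a temperature-dependent air density, can be written down directly. The sketch below uses the dry-air ideal-gas approximation and illustrative conditions rather than the UPSCALE data or the paper's moist-air treatment.

```python
import numpy as np

# Wind power density as used in resource studies: P = 0.5 * rho * v^3 (W/m^2),
# with air density from the ideal-gas law so that warmer surface temperatures
# lower rho. Dry-air approximation; all numbers are illustrative.

R_DRY = 287.05  # specific gas constant for dry air, J/(kg K)

def air_density(pressure_pa, temp_k):
    return pressure_pa / (R_DRY * temp_k)

def wind_power_density(speed_ms, pressure_pa=101325.0, temp_k=288.15):
    rho = air_density(pressure_pa, temp_k)
    return 0.5 * rho * speed_ms ** 3

speeds = np.array([5.0, 8.0, 12.0])            # m/s
print(wind_power_density(speeds))              # W/m^2 at 15 C
print(wind_power_density(speeds, temp_k=303))  # warmer air -> lower density
```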
29 CFR 1910.66 - Powered platforms for building maintenance.
Code of Federal Regulations, 2013 CFR
2013-07-01
... used to supply electrical power and/or control current for equipment or to provide voice communication... access to, and egress from, the equipment and sufficient space to conduct necessary maintenance of the... in use; and (vi) An effective two-way voice communication system shall be provided between the...
29 CFR 1910.66 - Powered platforms for building maintenance.
Code of Federal Regulations, 2014 CFR
2014-07-01
... used to supply electrical power and/or control current for equipment or to provide voice communication... access to, and egress from, the equipment and sufficient space to conduct necessary maintenance of the... in use; and (vi) An effective two-way voice communication system shall be provided between the...
46 CFR 197.332 - PVHO-Decompression chambers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... dogs, from both sides of a closed hatch; (e) Have interior illumination sufficient to allow visual... (m) Have a sound-powered headset or telephone as a backup to the communications system required by § 197.328(c) (5) and (6), except when that communications system is a sound-powered system. ...
Power Politics of Family Psychotherapy.
ERIC Educational Resources Information Center
Whitaker, Carl A.
It is postulated that the standard framework for psychotherapy, a cooperative transference neurosis, does not validly carry over to the successful psychotherapy of a two-generation family group. In many disturbed families, the necessary and sufficient dynamics for change must be initiated, controlled, and augmented by a group dynamic power-play,…
29 CFR 1926.303 - Abrasive wheels and tools.
Code of Federal Regulations, 2013 CFR
2013-07-01
... and tools. (a) Power. All grinding machines shall be supplied with sufficient power to maintain the spindle speed at safe levels under all conditions of normal operation. (b) Guarding. (1) Grinding machines..., nut, and outer flange may be exposed on machines designed as portable saws. (c) Use of abrasive wheels...
29 CFR 1926.303 - Abrasive wheels and tools.
Code of Federal Regulations, 2014 CFR
2014-07-01
... and tools. (a) Power. All grinding machines shall be supplied with sufficient power to maintain the spindle speed at safe levels under all conditions of normal operation. (b) Guarding. (1) Grinding machines..., nut, and outer flange may be exposed on machines designed as portable saws. (c) Use of abrasive wheels...
29 CFR 1926.303 - Abrasive wheels and tools.
Code of Federal Regulations, 2012 CFR
2012-07-01
... and tools. (a) Power. All grinding machines shall be supplied with sufficient power to maintain the spindle speed at safe levels under all conditions of normal operation. (b) Guarding. (1) Grinding machines..., nut, and outer flange may be exposed on machines designed as portable saws. (c) Use of abrasive wheels...
Low Power Switching for Antenna Reconfiguration
NASA Technical Reports Server (NTRS)
Bauhahn, Paul E. (Inventor); Becker, Robert C. (Inventor); Meyers, David W. (Inventor); Muldoon, Kelly P. (Inventor)
2008-01-01
Methods and systems for low power switching are provided. In one embodiment, an optical switching system is provided. The system comprises at least one optically controlled switch adapted to maintain one of an open state and a closed state based on an associated light signal; and at least one light source adapted to output the associated light signal to the at least one switch, wherein the at least one light source cycles the light signal on and off, wherein the at least one light source is cycled on for a sufficient duration of time and with a sufficient periodicity to maintain the optically controlled switch in one of an open state and a closed state.
ERIC Educational Resources Information Center
Hunt, Graham
This report discusses the impact of and presents guidelines for developing a computer-aided instructional (CAI) system. The first section discusses CAI in terms of the need for the countries of Asia to increase their economic self-sufficiency. The second section examines various theories on the nature of learning with special attention to the role…
Statistical Learning of Phonetic Categories: Insights from a Computational Approach
ERIC Educational Resources Information Center
McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.
2009-01-01
Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…
Arishima, Hidetaka; Tsunetoshi, Kenzo; Kodera, Toshiaki; Kitai, Ryuhei; Takeuchi, Hiroaki; Kikuta, Ken-ichiro
2013-01-01
The authors report two cases of cervicomedullary decompression for foramen magnum (FM) stenosis in children with achondroplasia using intraoperative computed tomography (iCT): a 14-month-old girl with myelopathy and retarded motor development, and a 10-year-old girl who presented with myelopathy after previous incomplete FM decompression. Both patients underwent decompressive suboccipital craniectomy and C1 laminectomy without duraplasty using iCT, which clearly showed the extent of FM decompression during surgery and finally enabled sufficient decompression. After the operation, their myelopathy improved. We think that iCT can provide useful information and guidance for sufficient decompression of FM stenosis in children with achondroplasia. PMID:24140778
Back-side hydrogenation technique for defect passivation in silicon solar cells
Sopori, Bhushan L.
1994-01-01
A two-step back-side hydrogenation process includes the steps of first bombarding the back side of the silicon substrate with hydrogen ions with intensities and for a time sufficient to implant enough hydrogen atoms into the silicon substrate to potentially passivate substantially all of the defects and impurities in the silicon substrate, and then illuminating the silicon substrate with electromagnetic radiation to activate the implanted hydrogen, so that it can passivate the defects and impurities in the substrate. The illumination step also annihilates the hydrogen-induced defects. The illumination step is carried out according to a two-stage illumination schedule, the first or low-power stage of which subjects the substrate to electromagnetic radiation that has sufficient intensity to activate the implanted hydrogen, yet not drive the hydrogen from the substrate. The second or high-power illumination stage subjects the substrate to higher intensity electromagnetic radiation, which is sufficient to annihilate the hydrogen-induced defects and sinter/alloy the metal contacts.
GPS synchronized power system phase angle measurements
NASA Astrophysics Data System (ADS)
Wilson, Robert E.; Sterlina, Patrick S.
1994-09-01
This paper discusses the use of Global Positioning System (GPS) synchronized equipment for the measurement and analysis of key power system quantities. Two GPS-synchronized phasor measurement units (PMUs) were installed before testing. The tests indicated that the PMUs recorded the dynamic response of the power system phase angles when the northern California power grid was excited by artificial short circuits. Power system planning engineers perform detailed computer-generated simulations of the dynamic response of the power system to naturally occurring short circuits. The computer simulations use models of transmission lines, transformers, circuit breakers, and other high-voltage components. This work will compare computer simulations of the same event with field measurements.
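The core PMU operation, estimating a phase angle from GPS-time-stamped waveform samples, can be sketched with a single-bin DFT over one cycle. The sampling rate and synthetic bus voltages below are assumptions for illustration, not the instruments used in the field tests.

```python
import numpy as np

# Sketch of phasor angle estimation: correlate one cycle of a sampled
# waveform with a complex exponential at the system frequency.

F_SYS = 60.0          # system frequency, Hz
FS = 1920.0           # assumed sampling rate, Hz (32 samples/cycle)
N = int(FS / F_SYS)   # samples per cycle

def phasor_angle(samples):
    n = np.arange(N)
    ref = np.exp(-2j * np.pi * F_SYS * n / FS)
    return np.angle(np.dot(samples[:N], ref))   # radians

t = np.arange(N) / FS
bus_a = np.cos(2 * np.pi * F_SYS * t + 0.10)    # synthetic bus voltages
bus_b = np.cos(2 * np.pi * F_SYS * t - 0.15)
# with GPS time alignment, the angle difference is directly comparable
print(np.degrees(phasor_angle(bus_a) - phasor_angle(bus_b)))   # ~14.3 degrees
```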
Mansourian, Robert; Mutch, David M; Antille, Nicolas; Aubert, Jerome; Fogel, Paul; Le Goff, Jean-Marc; Moulin, Julie; Petrov, Anton; Rytz, Andreas; Voegel, Johannes J; Roberts, Matthew-Alan
2004-11-01
Microarray technology has become a powerful research tool in many fields of study; however, the cost of microarrays often results in the use of a low number of replicates (k). Under circumstances where k is low, it becomes difficult to perform standard statistical tests to extract the most biologically significant experimental results. Other, more advanced statistical tests have been developed; however, they often remain difficult to use and interpret in routine biological research. The present work outlines a method that achieves sufficient statistical power for selecting differentially expressed genes under conditions of low k, while remaining an intuitive and computationally efficient procedure. The present study describes a Global Error Assessment (GEA) methodology for selecting differentially expressed genes in microarray datasets, which was developed using an in vitro experiment comparing control and interferon-gamma-treated skin cells. In this experiment, up to nine replicates were used to confidently estimate error, thereby enabling methods of different statistical power to be compared. Genes with similar absolute expression levels are binned, so as to enable a highly accurate local estimate of the mean squared error within conditions. The model then relates the variability of gene expression in each bin to absolute expression levels and uses this in a test derived from the classical ANOVA. The GEA selection method is compared with both the classical and permutational ANOVA tests, and demonstrates increased stability, robustness, and confidence in gene selection. A subset of the selected genes was validated by real-time reverse transcription-polymerase chain reaction (RT-PCR). All these results suggest that the GEA methodology is (i) suitable for selection of differentially expressed genes in microarray data, (ii) intuitive and computationally efficient and (iii) especially advantageous under conditions of low k. The GEA code for R software is freely available upon request to the authors.
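A minimal sketch of the binning idea, pooling within-condition variance across genes of similar absolute expression and using it in a t-like statistic, is shown below on synthetic data. It illustrates the approach only; it is not the authors' GEA code (which is in R and derives its test from the classical ANOVA).

```python
import numpy as np

rng = np.random.default_rng(2)
genes, k = 2000, 3                       # low replicate number
base = rng.uniform(4, 14, genes)         # log2 expression levels
ctrl = base[:, None] + rng.normal(0, 0.3, (genes, k))
trt = base[:, None] + rng.normal(0, 0.3, (genes, k))
trt[:50] += 1.0                          # 50 truly changed genes

mean_all = (ctrl.mean(1) + trt.mean(1)) / 2
within_var = (ctrl.var(1, ddof=1) + trt.var(1, ddof=1)) / 2

# bin genes by absolute expression; pool the error estimate within each bin
bins = np.quantile(mean_all, np.linspace(0, 1, 21))
which = np.clip(np.digitize(mean_all, bins) - 1, 0, 19)
local_var = np.array([within_var[which == b].mean() for b in range(20)])

# t-like statistic using the binned (pooled) error estimate
stat = (trt.mean(1) - ctrl.mean(1)) / np.sqrt(2 * local_var[which] / k)
print("true positives among top 50 hits:",
      int((np.argsort(-np.abs(stat))[:50] < 50).sum()))
```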
Laser beamed power - Satellite demonstration applications
NASA Technical Reports Server (NTRS)
Landis, Geoffrey A.; Westerlund, Larry H.
1992-01-01
Feasibility of using a ground-based laser to beam light to the solar arrays of orbiting satellites to a level sufficient to provide the operating power required is discussed. An example case of a GEO communications satellite near the end of life due to radiation damage of the solar arrays or battery failure is considered. It is concluded that the commercial satellite industry should be able to reap significant economic benefits through the use of power beaming which is capable of providing supplemental power for satellites with failing arrays, or primary power for failed batteries.
Development of an hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1993-01-01
The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.
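The h- versus p-refinement trade-off discussed above can be illustrated numerically by approximating a smooth function either with more piecewise-linear elements (h-version) or with a single element of higher polynomial degree (p-version). The toy comparison below is only an illustration of the trade-off, not the GENCODE solver.

```python
import numpy as np

f = lambda x: np.exp(np.sin(3.0 * x))
xs = np.linspace(0.0, 1.0, 2001)

def h_error(n_elem):          # h-version: more piecewise-linear elements
    nodes = np.linspace(0.0, 1.0, n_elem + 1)
    return np.abs(f(xs) - np.interp(xs, nodes, f(nodes))).max()

def p_error(degree):          # p-version: one element, higher degree
    nodes = np.linspace(0.0, 1.0, degree + 1)
    coeffs = np.polyfit(nodes, f(nodes), degree)
    return np.abs(f(xs) - np.polyval(coeffs, xs)).max()

for n in (2, 4, 8):           # comparable degrees of freedom
    print(f"{n:2d} dof   h-error: {h_error(n):.2e}   p-error: {p_error(n):.2e}")
```

For a smooth solution, the p-version error drops far faster per degree of freedom, which is the motivation for carrying extra interior unknowns per element.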
Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing
2006-11-01
in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more...complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased...the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and
Challenges for future space power systems
NASA Technical Reports Server (NTRS)
Brandhorst, Henry W., Jr.
1989-01-01
Forecasts of space power needs are presented. The needs fall into three broad categories: survival, self-sufficiency, and industrialization. The costs of delivering payloads to orbital locations and from Low Earth Orbit (LEO) to Mars are determined. Future launch cost reductions are predicted. From these projections the performance levels necessary for future solar and nuclear space power options are identified. The availability of plentiful, cost-effective electric power and of low-cost access to space are identified as crucial factors in the future extension of human presence in space.
Computer Power. Part 2: Electrical Power Problems and Their Amelioration.
ERIC Educational Resources Information Center
Price, Bennett J.
1989-01-01
Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…
Flame-Vortex Studies to Quantify Markstein Numbers Needed to Model Flame Extinction Limits
NASA Technical Reports Server (NTRS)
Driscoll, James F.; Feikema, Douglas A.
2003-01-01
This work has quantified a database of Markstein numbers for unsteady flames; future work will quantify a database of flame extinction limits for unsteady conditions. Unsteady extinction limits have not been documented previously; both a stretch rate and a residence time must be measured, since extinction requires that the stretch rate be sufficiently large for a sufficiently long residence time. The Markstein number (Ma) was measured for a negatively stretched, inwardly-propagating flame (IPF) under microgravity conditions. Computations were also performed using RUN-1DL to explain the measurements. The Markstein number of an inwardly-propagating flame, for both the microgravity experiment and the computations, is significantly larger than that of an outwardly-propagating flame (OPF). The computed profiles of the various species within the flame suggest reasons: computed hydrogen concentrations build up ahead of the IPF but not the OPF. Understanding was gained by running the computations for both simplified and full-chemistry conditions. To explain the experimental findings, numerical simulations of both inwardly and outwardly propagating spherical flames (with complex chemistry) were generated using the RUN-1DL code, which includes 16 species and 46 reactions.
Tools and techniques for computational reproducibility.
Piccolo, Stephen R; Frampton, Michael B
2016-07-11
When reporting research findings, scientists document the steps they followed so that others can verify and build upon the research. When those steps have been described in sufficient detail that others can retrace the steps and obtain similar results, the research is said to be reproducible. Computers play a vital role in many research disciplines and present both opportunities and challenges for reproducibility. Computers can be programmed to execute analysis tasks, and those programs can be repeated and shared with others. The deterministic nature of most computer programs means that the same analysis tasks, applied to the same data, will often produce the same outputs. However, in practice, computational findings often cannot be reproduced because of complexities in how software is packaged, installed, and executed, and because of limitations associated with how scientists document analysis steps. Many tools and techniques are available to help overcome these challenges; here we describe seven such strategies. With a broad scientific audience in mind, we describe the strengths and limitations of each approach, as well as the circumstances under which each might be applied. No single strategy is sufficient for every scenario; thus we emphasize that it is often useful to combine approaches.
Recent progress of quantum annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, Sei
2015-03-10
We review the recent progress of quantum annealing. Quantum annealing was proposed as a method to solve generic optimization problems. Recently a Canadian company has drawn a great deal of attention, as it has commercialized a quantum computer based on quantum annealing. Although the performance of quantum annealing is not sufficiently understood, it is likely that quantum annealing will be a practical method both on a conventional computer and on a quantum computer.
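For readers unfamiliar with annealing-style optimization, the classical analogue, simulated annealing, is easy to sketch; quantum annealing replaces the thermal fluctuations below with quantum tunneling. The couplings, cooling schedule, and problem size here are arbitrary illustrative choices.

```python
import math, random

# Classical simulated annealing on a toy Ising-style optimization problem,
# as a rough classical analogue of the annealing idea in the abstract.

random.seed(0)
n = 30
J = [[0] * n for _ in range(n)]          # random symmetric couplings
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = random.choice([-1, 1])

def local_field(s, i):
    return sum(J[i][j] * s[j] for j in range(n))

s = [random.choice([-1, 1]) for _ in range(n)]
T = 5.0
for step in range(20000):
    i = random.randrange(n)
    dE = 2 * s[i] * local_field(s, i)    # energy change of flipping spin i
    if dE <= 0 or random.random() < math.exp(-dE / T):
        s[i] = -s[i]
    T = max(0.01, T * 0.9997)            # slowly lower the "temperature"

E = -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
print("final energy:", E)
```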
Computational Power of Symmetry-Protected Topological Phases.
Stephen, David T; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert
2017-07-07
We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.
Emulating a million machines to investigate botnets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudish, Donald W.
2010-06-01
Researchers at Sandia National Laboratories in Livermore, California are creating what is in effect a vast digital petri dish able to hold one million operating systems at once in an effort to study the behavior of rogue programs known as botnets. Botnets are used extensively by malicious computer hackers to steal computing power from Internet-connected computers. The hackers harness the stolen resources into a scattered but powerful computer that can be used to send spam, execute phishing scams, or steal digital information. These remote-controlled 'distributed computers' are difficult to observe and track. Botnets may take over parts of tens of thousands or in some cases even millions of computers, making them among the world's most powerful computers for some applications.
NASA Astrophysics Data System (ADS)
Shi, Wenhui; Feng, Changyou; Qu, Jixian; Zha, Hao; Ke, Dan
2018-02-01
Most existing studies on wind power output focus on the fluctuation of wind farms, while the spatial self-complementarity of wind power output time series has been ignored. Therefore, existing probability models cannot reflect the features of power systems incorporating wind farms. This paper analyzed the spatial self-complementarity of wind power and proposed a probability model that can reflect the temporal characteristics of wind power on seasonal and diurnal timescales, based on sufficient measured data and an improved clustering method. This model could provide an important reference for power system simulation incorporating wind farms.
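The clustering step, grouping days into a few representative diurnal output profiles whose frequencies define a simple probability model, might look like the following plain k-means sketch on synthetic data. It is not the paper's improved clustering method; the profile shapes and cluster count are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
days = 365
hours = np.arange(24)
base = np.array([np.sin(2 * np.pi * (hours - 14) / 24),    # evening peak
                 np.sin(2 * np.pi * (hours - 4) / 24)])    # morning peak
labels_true = rng.integers(0, 2, days)
profiles = 0.5 + 0.3 * base[labels_true] + 0.1 * rng.standard_normal((days, 24))

k = 2
centers = profiles[rng.choice(days, k, replace=False)]
for _ in range(50):                                        # Lloyd's iterations
    d = ((profiles[:, None, :] - centers[None]) ** 2).sum(-1)
    assign = d.argmin(1)
    centers = np.array([profiles[assign == c].mean(0) if np.any(assign == c)
                        else centers[c] for c in range(k)])

# cluster frequencies act as a crude probability model over diurnal profiles
freq = np.bincount(assign, minlength=k) / days
print("cluster probabilities:", freq)
```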
Power Efficient Hardware Architecture of SHA-1 Algorithm for Trusted Mobile Computing
NASA Astrophysics Data System (ADS)
Kim, Mooseop; Ryou, Jaecheol
The Trusted Mobile Platform (TMP) is developed and promoted by the Trusted Computing Group (TCG), an industry standards body working to enhance the security of the mobile computing environment. The built-in SHA-1 engine in TMP is one of the most important circuit blocks and contributes to the performance of the whole platform because it is used as a key primitive supporting platform integrity and command authentication. Mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore, special architectures and design methods for a low-power SHA-1 circuit are required. In this paper, we present a novel and efficient hardware architecture of a low-power SHA-1 design for TMP. Our low-power SHA-1 hardware can compute a 512-bit data block using fewer than 7,000 gates and draws about 1.1 mA on a 0.25 μm CMOS process.
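As a software reference point for the block size mentioned above, the snippet below hashes one 512-bit message block with Python's built-in SHA-1; the hardware architecture itself is of course not reproduced here.

```python
import hashlib

# SHA-1 processes 512-bit (64-byte) message blocks and produces a
# 160-bit digest; this just demonstrates the unit of work.

block = bytes(64)                          # one 512-bit block of zero bytes
digest = hashlib.sha1(block).hexdigest()
print(len(block) * 8, "bits ->", digest)   # 40 hex chars = 160 bits
```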
NASA Technical Reports Server (NTRS)
Mckee, James W.
1990-01-01
This volume (2 of 4) contains the specification, structured flow charts, and code listing for the protocol. The purpose of an autonomous power system on a spacecraft is to relieve humans from having to continuously monitor and control the generation, storage, and distribution of power in the craft. This implies that algorithms will have been developed to monitor and control the power system. The power system will contain computers on which the algorithms run. There should be one control computer system that makes the high-level decisions and sends commands to and receives data from the other distributed computers. This requires a communications network and an efficient protocol by which the computers will communicate. One of the major requirements on the protocol is that it be real-time, because of the need to control the power elements.
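A fixed binary frame is one simple way such a command/telemetry protocol could be laid out. The field layout below is hypothetical, chosen only to illustrate the idea; it is not the protocol specified in this report.

```python
import struct

# Illustrative only: a fixed-size binary frame such as a spacecraft power
# bus protocol might use (hypothetical layout, not the report's spec).

FRAME = struct.Struct(">BBHf")   # dest id, command, sequence no., payload value

def encode(dest, command, seq, value):
    return FRAME.pack(dest, command, seq, value)

def decode(frame):
    return FRAME.unpack(frame)

msg = encode(dest=3, command=0x10, seq=42, value=28.5)   # e.g. set bus voltage
print(len(msg), "bytes:", msg.hex())
print(decode(msg))
```

A fixed, small frame keeps parsing deterministic, which matters for the real-time requirement noted above.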
Changing computing paradigms towards power efficiency
Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro
2014-01-01
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
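For the linear-system kernel mentioned above, the classic way to combine precisions is iterative refinement: solve cheaply in low precision, then correct with residuals computed in high precision. A minimal sketch under an assumed well-conditioned test matrix follows; it illustrates the precision-mixing idea rather than the authors' implementation (a production code would also reuse one LU factorization across refinement steps).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

A32, b32 = A.astype(np.float32), b.astype(np.float32)
x = np.linalg.solve(A32, b32).astype(np.float64)  # cheap low-precision solve

for _ in range(3):                                # refine in high precision
    r = b - A @ x                                 # float64 residual
    x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)

print("residual norm:", np.linalg.norm(b - A @ x))
```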
46 CFR 38.20-10 - Ventilation-T/ALL.
Code of Federal Regulations, 2010 CFR
2010-10-01
... equipped with power ventilation of the exhaust type having capacity sufficient to effect a complete change of air in not more than 3 minutes equal to the volume of the compartment and associated trunks. (b) The power ventilation units shall not produce a source of vapor ignition in either the compartment or...
78 FR 42323 - Pilot Certification and Qualification Requirements for Air Carrier Operations
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-15
... sufficient. \\4\\ In addition, military PIC time (up to 500 hours) in a multiengine turbine-powered, fixed-wing... aerodynamic stall (insufficient airflow over the wings). The flightcrew's response to the stall warning system.... Military PIC time in a multiengine turbine-powered, fixed-wing airplane in an operation requiring more than...
10 CFR 431.325 - Units to be tested.
Code of Federal Regulations, 2011 CFR
2011-01-01
... EQUIPMENT Metal Halide Lamp Ballasts and Fixtures Test Procedures § 431.325 Units to be tested. For each basic model of metal halide lamp ballast selected for testing, a sample of sufficient size, no less than... energy efficiency calculated as the measured output power to the lamp divided by the measured input power...
Power in urban social-ecological systems: Processes and practices of governance and marginalization
Lindsay K. Campbell; Nate Gabriel
2016-01-01
Historically, the urban forestry literature, including the work featured in Urban Forestry and Urban Greening, has focused primarily on either quantitative, positivistic analyses of human-environment dynamics, or applied research to inform the management of natural resources, without sufficiently problematizing the effects of power within these processes (Bentsen et al...
USDA-ARS?s Scientific Manuscript database
The comprehensive identification of genes underlying phenotypic variation of complex traits such as disease resistance remains one of the greatest challenges in biology despite having genome sequences and more powerful tools. Most genome-wide screens lack sufficient resolving power as they typically...
Improved computer programs for calculating potential flow in propulsion system inlets
NASA Technical Reports Server (NTRS)
Stockman, N. O.; Farrell, C. A., Jr.
1977-01-01
Computer programs to calculate the incompressible potential flow corrected for compressibility in axisymmetric inlets at arbitrary operating conditions are presented. Included are a statement of the problem to be solved, a description of each of the programs and sufficient documentation, including a test case, to enable a user to run the programs.
ERIC Educational Resources Information Center
Rich, Sara E. House; Duhon, Gary J.; Reynolds, James
2017-01-01
Computers have become an important piece of technology in classrooms for implementing academic interventions. Often, students' responses to these interventions are used to help make important educational decisions. Therefore, it is important to consider the effect of these interventions across multiple contexts. For example, previous research has…
DOT National Transportation Integrated Search
1995-01-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...
DOT National Transportation Integrated Search
1995-09-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...
The Time-Sharing Computer In Introductory Earth Science.
ERIC Educational Resources Information Center
MacDonald, William D.; MacDonald, Geraldine E.
Time-sharing computer-assisted instructional (CAI) programs employing the APL language are being used in support of introductory earth science laboratory exercises at the State University of New York at Binghamton. Three examples are sufficient to illustrate the variety of applications to which these programs are put. The BRACH program is used in…
ERIC Educational Resources Information Center
Schwan, Stephan; Straub, Daniela; Hesse, Friedrich W.
2002-01-01
Describes a study of computer conferencing where learners interacted over the course of four log-in sessions to acquire the knowledge sufficient to pass a learning test. Studied the number of messages irrelevant to the topic, explicit threading of messages, reading times of relevant messages, and learning outcomes. (LRW)
Using Computers in Distance Study: Results of a Survey amongst Disabled Distance Students.
ERIC Educational Resources Information Center
Ommerborn, Rainer; Schuemer, Rudolf
2002-01-01
In the euphoria about new technologies in distance education, there exists the danger of not sufficiently considering how ever-increasing "virtualization" may exclude some student groups. An explorative study was conducted that asked disabled students about their experiences with using computers and the Internet. Overall, those questioned…
Teach Graphic Design Basics with PowerPoint
ERIC Educational Resources Information Center
Lazaros, Edward J.; Spotts, Thomas H.
2007-01-01
While PowerPoint is generally regarded as simply software for creating slide presentations, it includes often overlooked--but powerful--drawing tools. Because it is part of the Microsoft Office package, PowerPoint comes preloaded on many computers and thus is already available in many classrooms. Since most computers are not preloaded with good…
NASA Technical Reports Server (NTRS)
Fegley, K. A.; Hayden, J. H.; Rehmann, D. W.
1974-01-01
The feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems is investigated. It is shown that a digital computer may be used in an interactive mode for the design, modeling, analysis, and comparison of power processing systems.
Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz
2015-03-01
FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
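The kind of a priori calculation such a tool performs can be reproduced with standard power-analysis routines. The sketch below solves for the per-group sample size of a two-sample t-test at the paper's power = 0.8 and α = 0.05; the standardized effect sizes are illustrative stand-ins, not the paper's regional estimates.

```python
from statsmodels.stats.power import TTestIndPower

# A priori power analysis: per-group n needed to detect a given
# standardized effect (Cohen's d) in a between-subject comparison.

analysis = TTestIndPower()
for d in (0.5, 0.65, 0.9):            # illustrative effect sizes
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d:.2f} -> n = {n:.1f} per group")
```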
Physics Leads to Free Elections in the Nuclear Age.
NASA Astrophysics Data System (ADS)
Synek, Miroslav
2001-10-01
Complex historical development on our planet, utilizing the knowledge of physics, has led to a powerful technology of nuclear intercontinental missiles, conceivably controllable through a computerized "push-button". Whenever this technology falls under the control of an irresponsible, miscalculating, or insane dictator, with the sufficiently powerful means of a huge, mass-produced nuclear-missile build-up, anywhere on our planet, the very survival of all humanity could be threatened. Therefore, it is a historical urgency that this technology be under the control of a government of the people, by the people, and for the people, based on a sufficiently reliable system of free elections, in any country on our planet, wherever and whenever a total nuclear holocaust could originate.
Performance of an 8 kW Hall Thruster
2000-01-12
For the purpose of either orbit raising and/or repositioning, the Hall thruster must be capable of delivering sufficient thrust to minimize transfer...time. This, coupled with the increasing on-board electric power capacity of military and commercial satellites, requires a high-power Hall thruster that...development of a novel, high-power Hall thruster, capable of efficient operation over a broad range of Isp and thrust. We call such a thruster the bi
Solid-state resistor for pulsed power machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoltzfus, Brian; Savage, Mark E.; Hutsel, Brian Thomas
2016-12-06
A flexible solid-state resistor comprises a string of ceramic resistors that can be used to charge the capacitors of a linear transformer driver (LTD) used in a pulsed power machine. The solid-state resistor is able to absorb the energy of a switch prefire, thereby limiting LTD cavity damage, yet has a sufficiently low RC charge time to allow the capacitor to be recharged without disrupting the operation of the pulsed power machine.
Translations from the Soviet Journal of Atomic Energy
1962-02-15
constructing a new communist society. Atomic energy, in its role of a new and powerful source of highly concentrated energy, can effect a considerable...problem have provided sufficient evidence of the pernicious effects of radioactive contamination on human beings and require the development of special...be necessary to effect a considerable decrease in the cost of electrical power produced at atomic electric power stations. One of the most
Heterotic computing: exploiting hybrid computational devices.
Kendon, Viv; Sebald, Angelika; Stepney, Susan
2015-07-28
Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.
Expression Templates for Truncated Power Series
NASA Astrophysics Data System (ADS)
Cary, John R.; Shasharina, Svetlana G.
1997-05-01
Truncated power series are used extensively in accelerator transport modeling for rapid tracking and analysis of nonlinearity. Such mathematical objects are naturally represented computationally as objects in C++. This is more intuitive and produces more transparent code through operator overloading. However, C++ object use often comes with a computational speed loss due, e.g., to the creation of temporaries. We have developed a subset of truncated power series expression templates (http://monet.uwaterloo.ca/blitz/). Such expression templates use the powerful template-processing facility of C++ to combine complicated expressions into series operations that execute more rapidly. We compare computational speeds with existing truncated power series libraries.
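The underlying mathematical object, a power series kept only up to a fixed order, is easy to sketch. The naive Python version below creates exactly the intermediate temporaries that the paper's C++ expression templates are designed to avoid; it illustrates the arithmetic, not the paper's library.

```python
import numpy as np

ORDER = 5  # keep terms up to x^5

def tps_mul(a, b):
    """Multiply two truncated power series (coefficient arrays), dropping
    all terms above ORDER."""
    c = np.zeros(ORDER + 1)
    for i in range(ORDER + 1):
        for j in range(ORDER + 1 - i):
            c[i + j] += a[i] * b[j]
    return c

# sin(x) and cos(x) truncated at order 5
sin_x = np.array([0, 1, 0, -1/6, 0, 1/120])
cos_x = np.array([1, 0, -1/2, 0, 1/24, 0])
# product should match the series of sin(x)cos(x) = sin(2x)/2:
# x - (2/3) x^3 + (2/15) x^5
print(tps_mul(sin_x, cos_x))
```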
Systems and methods for rapid processing and storage of data
Stalzer, Mark A.
2017-01-24
Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.
Quantum Dynamics of Helium Clusters
1993-03-01
the structure of both these and the HeN clusters in the body-fixed frame by computing principal moments of inertia, thereby avoiding the...of helium clusters, with the modification that we subtract 0.96 K from the computed values so that for sufficiently large clusters we recover the...phonon spectrum of liquid He. To get a picture of these spectra one needs to compute the structure functions. Monte Carlo random walk simulations
A dc model for power switching transistors suitable for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Wilson, P. M.; George, R. T., Jr.; Owen, H. A., Jr.; Wilson, T. G.
1979-01-01
The proposed dc model for bipolar junction power switching transistors is based on measurements that may be made with standard laboratory equipment. Those nonlinearities which are of importance to power electronics design are emphasized. Measurement procedures are discussed in detail. A model formulation adapted for use with a computer program is presented, and a comparison between actual and computer-generated results is made.
Microwave Treatment for Cardiac Arrhythmias
NASA Technical Reports Server (NTRS)
Hernandez-Moya, Sonia
2009-01-01
NASA seeks to transfer the NASA-developed microwave ablation technology, designed for the treatment of ventricular tachycardia (irregular heartbeat), to industry. After a heart attack, many cells surrounding the resulting scar continue to live but are abnormal electrically; they may conduct impulses unusually slowly or fire when they would typically be silent. These diseased areas might disturb smooth signaling by forming a reentrant circuit in the muscle. The objective of microwave ablation is to heat and kill these diseased cells to restore appropriate electrical activity in the heart. This technology is a method and apparatus that provides for propagating microwave energy into heart tissues to produce a desired temperature profile therein at tissue depths sufficient for thermally ablating arrhythmogenic cardiac tissue while preventing excessive heating of surrounding tissues, organs, and blood. A wide-bandwidth double-disk antenna is effective for this purpose over a bandwidth of about six gigahertz. A computer simulation provides initial screening of parameters such as antenna design, frequency, power level, and power application duration. The simulation also allows optimization of techniques for specific patients or conditions. In comparison with other methods that involve direct-current pulses or radio frequencies below 1 GHz, this method may prove more effective in treating ventricular tachycardia. This is because the present method provides for greater control of the location, cross-sectional area, and depth of a lesion via selection of the location and design of the antenna and the choice of microwave power and frequency.
NASA Astrophysics Data System (ADS)
Hoekstra, Robert J.; Kushner, Mark J.
1996-03-01
Inductively coupled plasma (ICP) reactors are being developed for low gas pressure (< tens of mTorr) and high plasma density ([e] ≳ 10^11 cm^-3) microelectronics fabrication. In these reactors, the plasma is generated by the inductively coupled electric field while an additional radio frequency (rf) bias is applied to the substrate. One of the goals of these systems is to independently control the magnitude of the ion flux by the inductively coupled power deposition, and the acceleration of ions into the substrate by the rf bias. In high plasma density reactors the sheath above the wafer may be sufficiently thin that ions are able to traverse it in approximately 1 rf cycle, even at 13.56 MHz. As a consequence, the ion energy distribution (IED) may have a shape typically associated with lower frequency operation in conventional reactive ion etching tools. In this paper, we present results from a computer model for the IED incident on the wafer in ICP etching reactors. We find that in the parameter space of interest, the shape of the IED depends both on the amplitude of the rf bias and on the ICP power. The former quantity determines the average energy of the IED. The latter quantity controls the width of the sheath, the transit time of ions across the sheath and hence the width of the IED. In general, high ICP powers (thinner sheaths) produce wider IEDs.
NASA Astrophysics Data System (ADS)
Leow, Alex D.; Zhu, Siwei
2008-03-01
Diffusion-weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the Orientation Diffusion Function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single-fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.
Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment
ERIC Educational Resources Information Center
Lin, Jing-Wen
2016-01-01
This study adopted a quasi-experimental design with follow-up interview to develop a computer-based two-tier assessment (CBA) regarding the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using…
47 CFR 43.51 - Contracts and concessions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... market power on the foreign end of one or more of the U.S.-international routes included in the contract... International Bureau's World Wide Web site at http://www.fcc.gov/ib. The Commission will include on the list of... markets on the foreign end of the route or that it nevertheless lacks sufficient market power on the...
47 CFR 43.51 - Contracts and concessions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... market power on the foreign end of one or more of the U.S.-international routes included in the contract... International Bureau's World Wide Web site at http://www.fcc.gov/ib. The Commission will include on the list of... markets on the foreign end of the route or that it nevertheless lacks sufficient market power on the...
47 CFR 43.51 - Contracts and concessions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... market power on the foreign end of one or more of the U.S.-international routes included in the contract... International Bureau's World Wide Web site at http://www.fcc.gov/ib. The Commission will include on the list of... markets on the foreign end of the route or that it nevertheless lacks sufficient market power on the...
USDA-ARS?s Scientific Manuscript database
The identification of specific genes underlying phenotypic variation of complex traits remains one of the greatest challenges in biology despite having genome sequences and more powerful tools. Most genome-wide screens lack sufficient resolving power as they typically depend on linkage. One altern...
The impact of technological change on census taking.
Brackstone, G J
1984-01-01
The increasing costs of traditional census collection methods have forced census administrators to look at the possibility of using administrative record systems in order to obtain population data. This article looks at the technological developments which have taken place in the last decade, and how they may affect data collection for the 1990 census. Because it is important to allow sufficient development and testing time for potential automated methods and technologies, it is not too soon to look at the trends resulting from technological advances and their implications for census data collection. These trends are: 1) the declining ratio of computing costs to manpower costs; 2) the increasing ratio of power and capacity of computers to their physical size; 3) declining data storage costs; 4) the increasing public acceptance of computers; 5) the increasing workforce familiarity with computers; and 6) the growing interactive computing capacity. Traditional use of computers for government data-gathering operations was primarily for the processing stage. Now the possibility of applying these trends to census material may influence all aspects of the process, from questionnaire design and production to data analysis. Examples include: the production of high-quality maps for geographic frameworks, optical readers for data entry, the ability to provide users with a final data base as well as printed output, and quicker dissemination of data results. Although these options exist, just like the use of administrative records for statistical purposes, they must be carefully analysed in the context of the purposes for which they were created. The limitations of using administrative records must be kept in view: definition, coverage, and quality limitations could bias statistical data derived from them. Perhaps they should be used as potential complementary sources of data, and not as replacements for census data. Influencing the evolution of these administrative records will help increase their chances of being used for future census information.
Multifunctional Inflatable Structure Being Developed for the PowerSphere Concept
NASA Technical Reports Server (NTRS)
Peterson, Todd T.
2003-01-01
The continuing development of microsatellites and nanosatellites for low Earth orbits requires the collection of sufficient power for instruments onboard a low-weight, low-volume spacecraft. Because the overall surface area of a microsatellite or nanosatellite is small, body-mounted solar cells cannot provide enough power. The deployment of traditional, rigid, solar arrays necessitates larger satellite volumes and weights, and also requires extra apparatus for pointing. One solution to this power choke problem is the deployment of a spherical, inflatable power system. This power system, termed the "PowerSphere," has several advantages, including a high collection area, low weight and stowage volume, and the elimination of solar array pointing mechanisms.
Electricity market design for generator revenue sufficiency with increased variable generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levin, Todd; Botterud, Audun
2015-10-01
Here, we present a computationally efficient mixed-integer program (MIP) that determines optimal generator expansion decisions, and hourly unit commitment and dispatch in a power system. The impact of increasing wind power capacity on the optimal generation mix and generator profitability is analyzed for a test case that approximates the electricity market in Texas (ERCOT). We analyze three market policies that may support resource adequacy: Operating Reserve Demand Curves (ORDC), Fixed Reserve Scarcity Prices (FRSP) and fixed capacity payments (CP). Optimal expansion plans are comparable between the ORDC and FRSP implementations, while capacity payments may result in additional new capacity. The FRSP policy leads to frequent reserves scarcity events and corresponding price spikes, while the ORDC implementation results in more continuous energy prices. Average energy prices decrease with increasing wind penetration under all policies, as do revenues for baseload and wind generators. Intermediate and peak load plants benefit from higher reserve prices and are less exposed to reduced energy prices. All else equal, an ORDC approach may be preferred to FRSP as it results in similar expansion and revenues with less extreme energy prices. A fixed CP leads to additional new flexible NGCT units, but lower profits for other technologies.
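To make the structure of such a model concrete, here is a minimal unit-commitment sketch in the same spirit, written with the PuLP MIP library. The two-generator system, costs, and four-hour demand profile are invented for illustration and are not the paper's ERCOT test case, which additionally includes expansion decisions and reserve products.

```python
# Minimal sketch of an hourly unit-commitment MIP; generator data and the
# 4-hour horizon are invented for illustration only.
import pulp

demand = [70, 90, 120, 80]                      # MW per hour (hypothetical)
gens = {"baseload": dict(pmax=100, cost=20.0),  # $/MWh marginal cost
        "peaker":   dict(pmax=60,  cost=80.0)}
T = range(len(demand))

prob = pulp.LpProblem("toy_unit_commitment", pulp.LpMinimize)
u = {(g, t): pulp.LpVariable(f"u_{g}_{t}", cat="Binary") for g in gens for t in T}
p = {(g, t): pulp.LpVariable(f"p_{g}_{t}", lowBound=0) for g in gens for t in T}

# Objective: total dispatch cost over the horizon.
prob += pulp.lpSum(gens[g]["cost"] * p[g, t] for g in gens for t in T)

for t in T:
    prob += pulp.lpSum(p[g, t] for g in gens) == demand[t]   # energy balance
    for g in gens:
        prob += p[g, t] <= gens[g]["pmax"] * u[g, t]         # capacity if committed

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (g, t), var in sorted(p.items()):
    print(g, t, var.value())
```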
Kim, Ki-Wook; Han, Youn-Hee; Min, Sung-Gi
2017-09-21
Many Internet of Things (IoT) services utilize an IoT access network to connect small devices with remote servers. They can share an access network with standard communication technology, such as IEEE 802.11ah. However, an authentication and key management (AKM) mechanism for resource constrained IoT devices using IEEE 802.11ah has not been proposed as yet. We therefore propose a new AKM mechanism for an IoT access network, which is based on IEEE 802.11 key management with the IEEE 802.1X authentication mechanism. The proposed AKM mechanism does not require any pre-configured security information between the access network domain and the IoT service domain. It considers the resource constraints of IoT devices, allowing IoT devices to delegate the burden of AKM processes to a powerful agent. The agent has sufficient power to support various authentication methods for the access point, and it performs cryptographic functions for the IoT devices. Performance analysis shows that the proposed mechanism greatly reduces computation costs, network costs, and memory usage of the resource-constrained IoT device as compared to the existing IEEE 802.11 Key Management with the IEEE 802.1X authentication mechanism.
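As a rough illustration of the delegation idea, the sketch below shows an 802.11-style pairwise-key expansion that a powerful agent could compute on behalf of a constrained device once 802.1X authentication has produced a master key. The function name, label, and field layout are illustrative assumptions, not the paper's protocol.

```python
# Hedged sketch: an agent derives session keying material from a master key
# plus nonces, 802.11-4-way-handshake style, on behalf of a constrained
# device. Names and field layout are illustrative, not the paper's AKM.
import hmac, hashlib, os

def derive_ptk(pmk: bytes, a_mac: bytes, s_mac: bytes,
               a_nonce: bytes, s_nonce: bytes, length: int = 48) -> bytes:
    """PRF-like expansion of a pairwise master key (illustrative only)."""
    label = b"Pairwise key expansion"
    data = min(a_mac, s_mac) + max(a_mac, s_mac) + \
           min(a_nonce, s_nonce) + max(a_nonce, s_nonce)
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(pmk, label + b"\x00" + data + bytes([counter]),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

pmk = os.urandom(32)   # would be established via 802.1X authentication
ptk = derive_ptk(pmk, b"\xaa" * 6, b"\xbb" * 6, os.urandom(32), os.urandom(32))
print(len(ptk), "bytes of session keying material")
```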
A simple modern correctness condition for a space-based high-performance multiprocessor
NASA Technical Reports Server (NTRS)
Probst, David K.; Li, Hon F.
1992-01-01
A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.
Optimizing space constellations for mobile satellite systems
NASA Technical Reports Server (NTRS)
Roussel, T.; Taisant, J.-P.
1993-01-01
Designing a mobile satellite system entails many complex trade-offs between a great number of parameters, including capacity, complexity of the payload, constellation geometry, number of satellites, and quality of coverage. This paper aims at defining a methodology which separates the variables to rapidly give some first results. The major input considered is the traffic assumption which would be offered by the system. A first key step is the choice of the best Rider or Walker constellation geometries - with different numbers of satellites - to ensure a good quality of coverage over a selected service area. Another aspect to be addressed is the possible altitude location of the constellation, since it is limited by many constraints. The altitude ranges that seem appropriate considering the space environment, the launch and orbit-keeping policy, and the feasibility of an antenna allowing sufficient frequency reuse are briefly analyzed. To support these first considerations, some 'reference constellations' with similar coverage quality are chosen. The in-orbit capacity needed to support the assumed traffic is computed versus altitude. Finally, the exact number of satellites is determined. It comes as an optimum between a small number of satellites offering a high (and costly) power margin in bad propagation situations and a great number of less powerful satellites granting the same quality of service.
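The flavor of the altitude-versus-count trade can be shown with a textbook single-coverage estimate: compute the Earth central half-angle covered by one satellite at altitude h and minimum elevation angle, then divide the sphere by one coverage cap. This is a hedged back-of-the-envelope sketch, not the Rider/Walker analysis used in the paper.

```python
# Rough single-coverage estimate: Earth central half-angle seen by one
# satellite at altitude h (km) with minimum elevation eps (deg), then a
# spherical-cap count. A textbook approximation, not the paper's trade study.
import math

RE = 6378.0  # Earth radius, km

def coverage_half_angle(h_km: float, eps_deg: float) -> float:
    eps = math.radians(eps_deg)
    return math.acos(RE * math.cos(eps) / (RE + h_km)) - eps  # radians

def rough_satellite_count(h_km: float, eps_deg: float) -> int:
    lam = coverage_half_angle(h_km, eps_deg)
    # Sphere area / one coverage cap area (ignores overlap efficiency).
    return math.ceil(2.0 / (1.0 - math.cos(lam)))

for h in (780, 1400, 10000):
    print(h, "km ->", rough_satellite_count(h, 10.0), "satellites (order of magnitude)")
```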
A Rethink for Computing Education for Sustainability
ERIC Educational Resources Information Center
Mann, Samuel
2016-01-01
The premise of Computing Education for Sustainability (CEfS) is examined. CEfS is described as a leverage discipline, where the handprint is much larger than the footprint. The potential of this leverage is described and the development of the field explored. Unfortunately CEfS is found not to be making sufficient impact in terms of a contribution…
NASA Technical Reports Server (NTRS)
Hawk, J. D.; Stockman, N. O.; Farrell, C. A., Jr.
1978-01-01
Incompressible potential flow calculations, corrected for compressibility, are presented for two-dimensional inlets at arbitrary operating conditions. Included are a statement of the problem to be solved, a description of each of the computer programs, and sufficient documentation, including a test case, to enable a user to run the program.
Schellenberg, Florian; Oberhofer, Katja; Taylor, William R.
2015-01-01
Background. Knowledge of the musculoskeletal loading conditions during strength training is essential for performance monitoring, injury prevention, rehabilitation, and training design. However, measuring muscle forces during exercise performance as a primary determinant of training efficacy and safety has remained challenging. Methods. In this paper we review existing computational techniques to determine muscle forces in the lower limbs during strength exercises in vivo and discuss their potential for uptake into sports training and rehabilitation. Results. Muscle forces during exercise performance have almost exclusively been analysed using so-called forward dynamics simulations, inverse dynamics techniques, or alternative methods. Musculoskeletal models based on forward dynamics analyses have led to considerable new insights into muscular coordination, strength, and power during dynamic ballistic movement activities, resulting in, for example, improved techniques for optimal performance of the squat jump, while quasi-static inverse dynamics optimisation and EMG-driven modelling have helped to provide an understanding of low-speed exercises. Conclusion. The present review introduces the different computational techniques and outlines their advantages and disadvantages for the informed usage by nonexperts. With sufficient validation and widespread application, muscle force calculations during strength exercises in vivo are expected to provide biomechanically based evidence for clinicians and therapists to evaluate and improve training guidelines. PMID:26417378
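The quasi-static inverse dynamics optimisation mentioned above reduces, in its simplest form, to distributing a known joint moment across muscles by minimising a cost such as summed squared activation. The sketch below shows that form with scipy; the moment arms, muscle strengths, and 100 N·m target are hypothetical values for illustration.

```python
# Minimal static-optimisation sketch: distribute a known joint moment
# across muscles by minimising summed squared activation. All numbers
# (moment arms, strengths, target moment) are hypothetical.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.05, 0.04, 0.02])            # moment arms (m)
f_max = np.array([3000.0, 2500.0, 1500.0])  # max isometric forces (N)
m_target = 100.0                            # required joint moment (N*m)

def cost(a):                                # a = activations in [0, 1]
    return np.sum(a ** 2)

cons = {"type": "eq",
        "fun": lambda a: r @ (a * f_max) - m_target}  # moment balance
res = minimize(cost, x0=np.full(3, 0.2), bounds=[(0, 1)] * 3, constraints=[cons])
print("activations:", res.x, "muscle forces (N):", res.x * f_max)
```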
Till, Charlotte; Haverkamp, Jamie; White, Devin; ...
2016-11-22
Climate change has the potential to displace large populations in many parts of the developed and developing world. Understanding why, how, and when environmental migrants decide to move is critical to successful strategic planning within organizations tasked with helping the affected groups, and mitigating their systemic impacts. One way to support planning is through the employment of computational modeling techniques. Models can provide a window into possible futures, allowing planners and decision makers to test different scenarios in order to understand what might happen. While modeling is a powerful tool, it presents both opportunities and challenges. This paper builds a foundation for the broader community of model consumers and developers by: providing an overview of pertinent climate-induced migration research, describing some different types of models and how to select the most relevant one(s), highlighting three perspectives on obtaining data to use in said model(s), and the consequences associated with each. It concludes with two case studies based on recent research that illustrate what can happen when ambitious modeling efforts are undertaken without sufficient planning, oversight, and interdisciplinary collaboration. Lastly, we hope that the broader community can learn from our experiences and apply this knowledge to their own modeling research efforts.
Holographic three-dimensional telepresence using large-area photorefractive polymer.
Blanche, P-A; Bablumian, A; Voorakaranam, R; Christenson, C; Lin, W; Gu, T; Flores, D; Wang, P; Hsieh, W-Y; Kathaperumal, M; Rachwal, B; Siddiqui, O; Thomas, J; Norwood, R A; Yamamoto, M; Peyghambarian, N
2010-11-04
Holography is a technique that is used to display objects or scenes in three dimensions. Such three-dimensional (3D) images, or holograms, can be seen with the unassisted eye and are very similar to how humans see the actual environment surrounding them. The concept of 3D telepresence, a real-time dynamic hologram depicting a scene occurring in a different location, has attracted considerable public interest since it was depicted in the original Star Wars film in 1977. However, the lack of sufficient computational power to produce realistic computer-generated holograms and the absence of large-area and dynamically updatable holographic recording media have prevented realization of the concept. Here we use a holographic stereographic technique and a photorefractive polymer material as the recording medium to demonstrate a holographic display that can refresh images every two seconds. A 50 Hz nanosecond pulsed laser is used to write the holographic pixels. Multicoloured holographic 3D images are produced by using angular multiplexing, and the full parallax display employs spatial multiplexing. 3D telepresence is demonstrated by taking multiple images from one location and transmitting the information via Ethernet to another location where the hologram is printed with the quasi-real-time dynamic 3D display. Further improvements could bring applications in telemedicine, prototyping, advertising, updatable 3D maps and entertainment.
Image alignment for tomography reconstruction from synchrotron X-ray microscopic images.
Cheng, Chang-Chieh; Chien, Chia-Chi; Chen, Hsiang-Hsin; Hwu, Yeukuang; Ching, Yu-Tai
2014-01-01
A synchrotron X-ray microscope is a powerful imaging apparatus for taking high-resolution and high-contrast X-ray images of nanoscale objects. A sufficient number of X-ray projection images from different angles is required for constructing 3D volume images of an object. Because a synchrotron light source is immobile, a rotational object holder is required for tomography. At a resolution of 10 nm per pixel, the vibration of the holder caused by rotating the object cannot be disregarded if tomographic images are to be reconstructed accurately. This paper presents a computer method to compensate for the vibration of the rotational holder by aligning neighboring X-ray images. This alignment process involves two steps. The first step is to match the "projected feature points" in the sequence of images. The matched projected feature points in the x-θ plane should form a set of sine-shaped loci. The second step is to fit the loci to a set of sine waves to compute the parameters required for alignment. The experimental results show that the proposed method outperforms two previously proposed methods, Xradia and SPIDER. The developed software system can be downloaded from the URL, http://www.cs.nctu.edu.tw/~chengchc/SCTA or http://goo.gl/s4AMx.
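The second step — fitting the matched loci to sine waves — is linear least squares once x(θ) is written as A sin θ + B cos θ + C. A minimal sketch, with a synthetic locus standing in for matched feature points and added jitter standing in for holder vibration:

```python
# Sketch of the sine-fitting step: fit a feature locus
# x(theta) = A*sin(theta) + B*cos(theta) + C by linear least squares.
# The "observed" locus is synthetic; real input would be matched features.
import numpy as np

theta = np.radians(np.arange(0, 180, 1.0))           # projection angles
x_obs = 120 * np.sin(theta + 0.3) + 256 + np.random.normal(0, 2, theta.size)

M = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
(A, B, C), *_ = np.linalg.lstsq(M, x_obs, rcond=None)

amp, phase = np.hypot(A, B), np.arctan2(B, A)
print(f"amplitude={amp:.1f} px, phase={phase:.3f} rad, center={C:.1f} px")
# Residuals x_obs - M @ [A, B, C] estimate the per-angle shift to correct.
```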
Integrating Sensory/Actuation Systems in Agricultural Vehicles
Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo
2014-01-01
In recent years, there have been major advances in the development of new and more powerful perception systems for agriculture, such as computer-vision and global positioning systems. Due to these advances, the automation of agricultural tasks has received an important stimulus, especially in the area of selective weed control where high precision is essential for the proper use of resources and the implementation of more efficient treatments. Such autonomous agricultural systems incorporate and integrate perception systems for acquiring information from the environment, decision-making systems for interpreting and analyzing such information, and actuation systems that are responsible for performing the agricultural operations. These systems consist of different sensors, actuators, and computers that work synchronously in a specific architecture for the intended purpose. The main contribution of this paper is the selection, arrangement, integration, and synchronization of these systems to form a whole autonomous vehicle for agricultural applications. This type of vehicle has attracted growing interest, not only for researchers but also for manufacturers and farmers. The experimental results demonstrate the success and performance of the integrated system in guidance and weed control tasks in a maize field, indicating its utility and efficiency. The whole system is sufficiently flexible for use in other agricultural tasks with little effort and is another important contribution in the field of autonomous agricultural vehicles. PMID:24577525
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
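Method 1, the classical Monte Carlo p-value, can be sketched generically as follows; the permutation scheme and placeholder statistic below illustrate the recipe only and are not the density-based empirical likelihood statistic computed by vxdbel.

```python
# Generic Monte Carlo p-value for a two-sample statistic via label
# permutation. The statistic is a placeholder, not the DBEL statistic.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 40)
y = rng.normal(0.5, 1.0, 40)

def stat(a, b):                       # placeholder test statistic
    return abs(a.mean() - b.mean())

t_obs = stat(x, y)
pooled = np.concatenate([x, y])
B, count = 5000, 0
for _ in range(B):
    rng.shuffle(pooled)
    if stat(pooled[:40], pooled[40:]) >= t_obs:
        count += 1
p_value = (count + 1) / (B + 1)       # add-one correction keeps p > 0
print("Monte Carlo p-value:", p_value)
```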
Anderson, Donald D; Segal, Neil A; Kern, Andrew M; Nevitt, Michael C; Torner, James C; Lynch, John A
2012-01-01
Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93-0.99) and good inter-rater reliability (0.84-0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
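The Shrout-Fleiss ICC values quoted above come from standard two-way ANOVA mean squares. A minimal sketch of ICC(2,1), with invented ratings (rows are knees, columns are raters):

```python
# Shrout-Fleiss ICC(2,1) from two-way ANOVA mean squares -- the standard
# formula behind such reliability numbers. Ratings are invented.
import numpy as np

ratings = np.array([[10.1, 10.3, 10.2],   # rows: knees, cols: raters
                    [12.0, 11.8, 12.1],
                    [ 9.5,  9.9,  9.7],
                    [11.2, 11.1, 11.4]])
n, k = ratings.shape
grand = ratings.mean()
ms_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)
ms_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2) / (k - 1)
sse = np.sum((ratings - ratings.mean(axis=1, keepdims=True)
              - ratings.mean(axis=0) + grand) ** 2)
ms_err = sse / ((n - 1) * (k - 1))

icc_2_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                + k * (ms_cols - ms_err) / n)
print(f"ICC(2,1) = {icc_2_1:.3f}")
```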
Real-time image processing for passive mmW imagery
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Paolini, Aaron; Bonnett, James; Harrity, Charles; Mackrides, Daniel; Dillon, Thomas E.; Martin, Richard D.; Schuetz, Christopher A.; Kelmelis, Eric; Prather, Dennis W.
2015-05-01
The transmission characteristics of millimeter waves (mmWs) make them suitable for many applications in defense and security, from airport preflight scanning to penetrating degraded visual environments such as brownout or heavy fog. While the cold sky provides sufficient illumination for these images to be taken passively in outdoor scenarios, this utility comes at a cost; the diffraction limit of the longer wavelengths involved leads to lower resolution imagery compared to the visible or IR regimes, and the low power levels inherent to passive imagery allow the data to be more easily degraded by noise. Recent techniques leveraging optical upconversion have shown significant promise, but are still subject to fundamental limits in resolution and signal-to-noise ratio. To address these issues we have applied techniques developed for visible and IR imagery to decrease noise and increase resolution in mmW imagery. We have developed these techniques into fieldable software, making use of GPU platforms for real-time operation of computationally complex image processing algorithms. We present data from a passive, 77 GHz, distributed aperture, video-rate imaging platform captured during field tests at full video rate. These videos demonstrate the increase in situational awareness that can be gained through applying computational techniques in real-time without needing changes in detection hardware.
FUSION WELDING METHOD AND APPARATUS
Wyman, W.L.; Steinkamp, W.I.
1961-01-17
An apparatus for the fusion welding of metal pieces at a joint is described. The apparatus comprises a high-vacuum chamber enclosing the metal pieces and a thermionic filament emitter. Sufficient power is applied to the emitter so that when the electron emission therefrom is focused on the joint it has sufficient energy to melt the metal pieces, ionize the metallic vapor above the molten metal, and establish an arc discharge between the joint and the emitter.
Preliminary study of the use of the STAR-100 computer for transonic flow calculations
NASA Technical Reports Server (NTRS)
Keller, J. D.; Jameson, A.
1977-01-01
An explicit method for solving the transonic small-disturbance potential equation is presented. This algorithm, which is suitable for the new vector-processor computers such as the CDC STAR-100, is compared to successive line over-relaxation (SLOR) on a simple test problem. The convergence rate of the explicit scheme is slower than that of SLOR; however, the efficiency of the explicit scheme on the STAR-100 computer is sufficient to overcome the slower convergence rate and allow an overall speedup compared to SLOR on the CYBER 175 computer.
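The trade-off the abstract describes — slower convergence but vector-friendly arithmetic — is visible even in a toy explicit scheme: every interior point is updated at once as a whole-array operation, which is exactly what a vector processor pipelines well. A sketch on Laplace's equation (not the transonic solver itself):

```python
# Explicit Jacobi sweeps for Laplace's equation: slower to converge than
# line over-relaxation, but each sweep is one pure whole-array update,
# the kind of operation vector processors execute efficiently.
import numpy as np

u = np.zeros((128, 128))
u[0, :] = 1.0                          # a fixed boundary condition

for it in range(5000):
    unew = u.copy()
    # One fully vectorized update of all interior points at once.
    unew[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                               + u[1:-1, 2:] + u[1:-1, :-2])
    if np.max(np.abs(unew - u)) < 1e-6:
        break
    u = unew
print("converged after", it, "Jacobi sweeps")
```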
NASA Astrophysics Data System (ADS)
Stockton, Gregory R.
2011-05-01
Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to cool data centers optimally, as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of computation in a data center is becoming limited by the available power, space, and cooling capacity. Tens of millions of dollars and megawatts of power are spent annually to keep data centers cool. The cooling and air flows dynamically drift away from any 3-D computational fluid dynamic model predicted during construction, and as time goes by, the efficiency and effectiveness of the actual cooling depart even farther from the predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and to make appropriate corrections and repairs, the required power for data centers can be dramatically reduced, which lowers costs and also improves reliability.
2014-11-01
Kullback, S., & Leibler, R. (1951). On information and sufficiency. Annals of Mathematical Statistics, 22, 79... cognitive challenges of sensemaking only informally, using conceptual notions like "framing" and "re-framing", which are not sufficient to support T&E in... appropriate frame(s) from memory. Assess the Frame: Evaluate the quality of fit between data and frame. Generate Hypotheses: Use the current
Computer memory power control for the Galileo spacecraft
NASA Technical Reports Server (NTRS)
Detwiler, R. C.
1983-01-01
The developmental history, major design drivers, and final topology of the computer memory power system on the Galileo spacecraft are described. A unique method of generating memory backup power directly from the fault current drawn during a spacecraft power overload or fault condition allows this system to provide continuous memory power. This concept provides a unique solution to the problem of volatile memory loss without the use of a battery or other large energy storage elements usually associated with uninterruptible power supply designs.
Evolution of coalitionary killing.
Wrangham, R W
1999-01-01
Warfare has traditionally been considered unique to humans. It has, therefore, often been explained as deriving from features that are unique to humans, such as the possession of weapons or the adoption of a patriarchal ideology. Mounting evidence suggests, however, that coalitional killing of adults in neighboring groups also occurs regularly in other species, including wolves and chimpanzees. This implies that selection can favor components of intergroup aggression important to human warfare, including lethal raiding. Here I present the principal adaptive hypothesis for explaining the species distribution of intergroup coalitional killing. This is the "imbalance-of-power hypothesis," which suggests that coalitional killing is the expression of a drive for dominance over neighbors. Two conditions are proposed to be both necessary and sufficient to account for coalitional killing of neighbors: (1) a state of intergroup hostility; (2) sufficient imbalances of power between parties that one party can attack the other with impunity. Under these conditions, it is suggested, selection favors the tendency to hunt and kill rivals when the costs are sufficiently low. The imbalance-of-power hypothesis has been criticized on a variety of empirical and theoretical grounds which are discussed. To be further tested, studies of the proximate determinants of aggression are needed. However, current evidence supports the hypothesis that selection has favored a hunt-and-kill propensity in chimpanzees and humans, and that coalitional killing has a long history in the evolution of both species.
Revenue Sufficiency and Reliability in a Zero Marginal Cost Future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A.
Features of existing wholesale electricity markets, such as administrative pricing rules and policy-based reliability standards, can distort market incentives from allowing generators sufficient opportunities to recover both fixed and variable costs. Moreover, these challenges can be amplified by other factors, including (1) inelastic demand resulting from a lack of price signal clarity, (2) low- or near-zero marginal cost generation, particularly arising from low natural gas fuel prices and variable generation (VG), such as wind and solar, and (3) the variability and uncertainty of this VG. As power systems begin to incorporate higher shares of VG, many questions arise about the suitability of the existing marginal-cost-based price formation, primarily within an energy-only market structure, to ensure the economic viability of resources that might be needed to provide system reliability. This article discusses these questions and provides a summary of completed and ongoing modelling-based work at the National Renewable Energy Laboratory to better understand the impacts of evolving power systems on reliability and revenue sufficiency.
Anomalous Fluctuations in Autoregressive Models with Long-Term Memory
NASA Astrophysics Data System (ADS)
Sakaguchi, Hidetsugu; Honjo, Haruo
2015-10-01
An autoregressive model with a power-law type memory kernel is studied as a stochastic process that exhibits a self-affine-fractal-like behavior on small time scales. We find numerically that the root-mean-square displacement Δ(m) for the time interval m increases as a power law m^α with α < 1/2 for small m but saturates at sufficiently large m. The exponent α changes with the power exponent of the memory kernel.
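A minimal simulation of this process class, assuming a normalised kernel proportional to k^(−γ) truncated at K lags; the normalisation and exponents are illustrative choices, not the paper's parameters.

```python
# AR model with a power-law memory kernel, plus the RMS-displacement
# diagnostic described above. Kernel scaling and exponents are illustrative.
import numpy as np

rng = np.random.default_rng(1)
T, K, gamma = 20000, 200, 1.5
kernel = np.arange(1, K + 1, dtype=float) ** (-gamma)
kernel *= 0.95 / kernel.sum()          # keep the process stable

x = np.zeros(T)
for t in range(K, T):
    window = x[t - K:t][::-1]          # x[t-1], x[t-2], ..., x[t-K]
    x[t] = kernel @ window + rng.normal()

def rms_displacement(x, m):
    d = x[m:] - x[:-m]
    return np.sqrt(np.mean(d ** 2))

for m in (1, 4, 16, 64, 256, 1024):
    print(m, rms_displacement(x, m))   # grows ~ m**alpha, then saturates
```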
The Army Communications Objectives Measurement System (ACOMS): Survey Design
1988-04-01
monthly basis so that the annual sample includes sufficient Hispanics to detect at the .80 power level: (1) Year-to-year changes of 3% in item...Hispanics. The requirements are listed in terms of power level and must be translated into requisite sample sizes. The requirements are expressed as the...annual samples needed to detect certain differences at the 80% power level. Differences in both directions are to be examined, so that a two-tailed
Grid Computing in K-12 Schools. Soapbox Digest. Volume 3, Number 2, Fall 2004
ERIC Educational Resources Information Center
AEL, 2004
2004-01-01
Grid computing allows large groups of computers (either in a lab, or remote and connected only by the Internet) to extend extra processing power to each individual computer to work on components of a complex request. Grid middleware, recognizing priorities set by systems administrators, allows the grid to identify and use this power without…
Computing the Feasible Spaces of Optimal Power Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molzahn, Daniel K.
2017-03-15
The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
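On a two-bus toy system the discretize-and-solve idea can be carried out exactly: sweep the discretized load and solve the resulting power-flow relation, which collapses to a quadratic in the squared load voltage. This sketch substitutes that closed form for the NPHC solver and omits the bound-tightening and grid-pruning steps; the line data are illustrative.

```python
# Toy discretize-and-solve: for a lossless 2-bus system (slack at V=1,
# line reactance X), sweep the discretized load P and solve the power
# flow exactly for the load voltage. Data are illustrative.
import numpy as np

X = 0.2                                # line reactance (p.u.)
tan_phi = 0.2                          # fixed load power-factor ratio Q/P

for P in np.linspace(0.0, 3.0, 13):    # discretized load active power
    Q = P * tan_phi
    # V2^4 + (2*Q*X - 1)*V2^2 + X^2*(P^2 + Q^2) = 0, quadratic in u = V2^2
    a, b, c = 1.0, 2 * Q * X - 1.0, X ** 2 * (P ** 2 + Q ** 2)
    disc = b * b - 4 * a * c
    if disc < 0:
        print(f"P={P:.2f}: infeasible (outside the feasible space)")
        continue
    roots = [(-b + s * np.sqrt(disc)) / 2 for s in (+1, -1)]
    volts = ["%.3f" % np.sqrt(u) for u in roots if u > 0]
    print(f"P={P:.2f}: load-bus voltage solutions {volts}")
```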
Evaluation of the Lattice-Boltzmann Equation Solver PowerFLOW for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Lockard, David P.; Luo, Li-Shi; Singer, Bart A.; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
A careful comparison of the performance of a commercially available Lattice-Boltzmann Equation solver (PowerFLOW) was made with a conventional, block-structured computational fluid-dynamics code (CFL3D) for the flow over a two-dimensional NACA-0012 airfoil. The results suggest that the version of PowerFLOW used in the investigation produced solutions with large errors in the computed flow field; these errors are attributed to inadequate resolution of the boundary layer for reasons related to grid resolution and primitive turbulence modeling. The requirement of square grid cells in the PowerFLOW calculations limited the number of points that could be used to span the boundary layer on the wing and still keep the computation size small enough to fit on the available computers. Although not discussed in detail, disappointing results were also obtained with PowerFLOW for a cavity flow and for the flow around a generic helicopter configuration.
Quantum computing with incoherent resources and quantum jumps.
Santos, M F; Cunha, M Terra; Chaves, R; Carvalho, A R R
2012-04-27
Spontaneous emission and the inelastic scattering of photons are two natural processes usually associated with decoherence and the reduction in the capacity to process quantum information. Here we show that, when suitably detected, these photons are sufficient to build all the fundamental blocks needed to perform quantum computation in the emitting qubits while protecting them from deleterious dissipative effects. We exemplify this by showing how to efficiently prepare graph states for the implementation of measurement-based quantum computation.
High-precision arithmetic in mathematical physics
Bailey, David H.; Borwein, Jonathan M.
2015-05-12
For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation, in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
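A compact illustration of why extended precision matters, using the mpmath package: a catastrophic cancellation that wipes out every significant digit in 64-bit arithmetic. The expression is chosen for illustration and is not from the article.

```python
# 64-bit arithmetic loses this result to catastrophic cancellation;
# extended precision recovers it. Illustrative example only.
import math
import mpmath

x = 1e-9
print((1 - math.cos(x)) / x**2)        # double precision: 0.0 (all digits lost)

mpmath.mp.dps = 50                     # 50 significant decimal digits
xm = mpmath.mpf("1e-9")
print((1 - mpmath.cos(xm)) / xm**2)    # ~0.5, the correct limiting value
```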
Collaborative Autonomous Unmanned Aerial - Ground Vehicle Systems for Field Operations
2007-08-31
very limited payload capabilities of small UVs, sacrificing minimal computational power and run time, adhering at the same time to the low cost...configuration has been chosen because of its high computational capabilities, low power consumption, multiple I/O ports, size, low heat emission and cost. This...due to their high power to weight ratio, small packaging, and wide operating temperatures. Power distribution is controlled by the 120 Watt ATX power
Kim, Mary S.; Tsutsui, Kenta; Stern, Michael D.; Lakatta, Edward G.; Maltsev, Victor A.
2017-01-01
Local Ca2+ Releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed in live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D-camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two dimensions and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for affine transform, producing a transformed image series where the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR was detected when its signal changes sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced-Ca2+-release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves), sparks and embers in muscle cells and Ca2+ puffs and syntillas in neurons. PMID:28683095
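The core birth rule — a sufficiently fast signal change over a sufficiently large connected area — can be sketched as pixelwise frame differencing plus connected-component labeling. The thresholds and three-frame synthetic movie below are illustrative assumptions; the published algorithm additionally removes cell motion via the affine transform described above.

```python
# Hedged sketch of the detection rule: an event is born when the
# frame-to-frame change exceeds a threshold over a large enough connected
# area. Thresholds and the synthetic movie are illustrative.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
movie = rng.normal(0, 0.05, (3, 64, 64))       # stand-in for a Ca2+ recording
movie[2, 20:26, 30:36] += 1.0                  # a synthetic local release

AMP_THRESH = 0.5       # minimum frame-to-frame increase
MIN_AREA = 16          # minimum connected area in pixels

for t in range(1, movie.shape[0]):
    diff = movie[t] - movie[t - 1]
    labels, n = ndimage.label(diff > AMP_THRESH)
    for lab in range(1, n + 1):
        area = int(np.sum(labels == lab))
        if area >= MIN_AREA:
            cy, cx = ndimage.center_of_mass(labels == lab)
            print(f"frame {t}: LCR-like event, area={area}px at ({cy:.0f},{cx:.0f})")
```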
Synthetic Elucidation of Design Principles for Molecular Qubits
NASA Astrophysics Data System (ADS)
Graham, Michael James
Quantum information processing (QIP) is an emerging computational paradigm with the potential to enable a vast increase in computational power, fundamentally transforming fields from structural biology to finance. QIP employs qubits, or quantum bits, as its fundamental units of information, which can exist not just in the classical states of 0 or 1, but in a superposition of the two. In order to successfully perform QIP, this superposition state must be sufficiently long-lived. One promising paradigm for the implementation of QIP involves employing unpaired electrons in coordination complexes as qubits. This architecture is highly tunable and scalable; however, coordination complexes frequently suffer from short superposition lifetimes, or T2. In order to capitalize on the promise of molecular qubits, it is necessary to develop a set of design principles that allow the rational synthesis of complexes with sufficiently long values of T2. In this dissertation, I report efforts to use the synthesis of series of complexes to elucidate design principles for molecular qubits. Chapter 1 details previous work by our group and others in the field. Chapter 2 details the first efforts of our group to determine the impact of varying spin and spin-orbit coupling on T2. Chapter 3 examines the effect of removing nuclear spins on coherence time, and reports a series of vanadyl bis(dithiolene) complexes which exhibit extremely long coherence lifetimes, in excess of the 100 μs threshold for qubit viability. Chapters 4 and 5 form two complementary halves of a study to determine the exact relationship between electronic spin-nuclear spin distance and the effect of the nuclear spins on T2. Finally, chapter 6 suggests next directions for the field as a whole, including the potential for work in this field to impact the development of other technologies as diverse as quantum sensors and magnetic resonance imaging contrast agents.
Raith, Stefan; Vogel, Eric Per; Anees, Naeema; Keul, Christine; Güth, Jan-Frederik; Edelhoff, Daniel; Fischer, Horst
2017-01-01
Chairside manufacturing based on digital image acquisition is gaining increasing importance in dentistry. For the standardized application of these methods, it is paramount to have highly automated digital workflows that can process acquired 3D image data of dental surfaces. Artificial Neural Networks (ANNs) are numerical methods primarily used to mimic the complex networks of neural connections in the natural brain. Our hypothesis is that an ANN can be developed that is capable of classifying dental cusps with sufficient accuracy. This bears enormous potential for application in chairside manufacturing workflows in the dental field, as it closes the gap between digital acquisition of dental geometries and modern computer-aided manufacturing techniques. Three-dimensional surface scans of dental casts representing natural full dental arches were transformed to range image data. These data were processed using an automated algorithm to detect candidates for tooth cusps according to salient geometrical features. These candidates were classified following common dental terminology and used as training data for a tailored ANN. For the actual cusp feature description, two different approaches were developed and applied to the available data: the first uses the relative location of the detected cusps as input data, and the second directly takes the image information given in the range images. In addition, a combination of both was implemented and investigated. Both approaches showed high performance, with correct classifications of 93.3% and 93.5%, respectively; improvements from the combination were shown to be minor. This article presents for the first time a fully automated method for the classification of teeth that could be confirmed to work with sufficient precision to exhibit the potential for its use in clinical practice, which is a prerequisite for automated computer-aided planning of prosthetic treatments with subsequent automated chairside manufacturing. Copyright © 2016 Elsevier Ltd. All rights reserved.
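A minimal sketch of the classification step, assuming per-cusp feature vectors have already been extracted; scikit-learn's MLPClassifier stands in for the tailored ANN, and the four-class synthetic data replace the real range-image descriptors.

```python
# Small feed-forward ANN mapping per-cusp feature vectors to cusp classes.
# Features and labels are synthetic stand-ins for the paper's descriptors.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_per_class, n_feat, n_classes = 200, 8, 4
X = np.vstack([rng.normal(c, 1.0, (n_per_class, n_feat)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```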
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brun, B.
1997-07-01
Computer technology has improved tremendously during the last years, with larger media capacity, more memory, and more computational power. Visual computing, with high-performance graphic interfaces and desktop computational power, has changed the way engineers accomplish everyday tasks, development work, and safety studies analysis. The emergence of parallel computing will permit simulation over a larger domain. In addition, new development methods, languages, and tools have appeared in the last several years.
Experimental investigation of fan-folded piezoelectric energy harvesters for powering pacemakers
Ansari, M H; Karami, M Amin
2018-01-01
This paper studies the fabrication and testing of a magnet-free piezoelectric energy harvester (EH) for powering biomedical devices and sensors inside the body. The design for the EH is a fan-folded structure consisting of bimorph piezoelectric beams folding on top of each other. An actual size experimental prototype is fabricated to verify the developed analytical models. The model is verified by matching the analytical results of the tip acceleration frequency response functions (FRF) and voltage FRF with the experimental results. The generated electricity is measured when the EH is excited by the heartbeat. A closed loop shaker system is utilized to reproduce the heartbeat vibrations. Achieving low fundamental natural frequency is a key factor to generate sufficient energy for pacemakers using heartbeat vibrations. It is shown that the natural frequency of the small-scale device is less than 20 Hz due to its unique fan-folded design. The experimental results show that the small-scale EH generates sufficient power for state-of-the-art pacemakers. The 1 cm³ EH with 18.4 gr tip mass generates more than 16 μW of power from a normal heartbeat waveform. The robustness of the device to the heart rate is also studied by measuring the relation between the power output and the heart rate. PMID:29674807
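The reported microwatt figure is, in essence, the time average of V²/R over the measured voltage across a resistive load. A sketch with a synthetic heartbeat-driven waveform and an assumed 5 kΩ load (both illustrative, not the paper's measurements):

```python
# Average harvested power P = mean(V^2 / R) from a voltage waveform across
# a resistive load. Waveform and load resistance are illustrative only.
import numpy as np

fs = 1000.0                            # sample rate, Hz
t = np.arange(0, 10, 1 / fs)           # 10 s record
# Synthetic stand-in: ~1.2 Hz heartbeat-driven decaying oscillation.
v = 0.4 * np.sin(2 * np.pi * 1.2 * t) * np.exp(-((t % (1 / 1.2)) * 8))
R = 5e3                                # load resistance, ohms

p_avg = np.mean(v ** 2 / R)
print(f"average harvested power: {p_avg * 1e6:.1f} uW")
```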
40 CFR 92.127 - Emission measurement accuracy.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Emission measurement accuracy. (a) Good engineering practice dictates that exhaust emission sample analyzer... resolution read-out systems such as computers, data loggers, etc., can provide sufficient accuracy and...
NASA Astrophysics Data System (ADS)
Jeevargi, Chetankumar; Lodhi, Anuj; Sateeshkumar, Allu; Elangovan, D.; Arunkumar, G.
2017-11-01
The need for Renewable Energy Sources (RES) is increasing because of the growing demand for power, and RES are also environmentally friendly. In recent years, the cost of generating power from RES has decreased. This paper aims to design the front-end power converter required to integrate fuel-cell and solar power sources into the microgrid. The simulation of the designed front-end converter is carried out in the PSIM 9.1.1 software. The results show that the designed front-end power converter is sufficient for integrating the microgrid with fuel-cell and solar power sources.
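For a sense of the design arithmetic behind such a front-end stage, the sketch below runs the standard continuous-conduction boost-converter equations (ideal duty cycle and inductor sizing from a ripple target). The 48 V input, 400 V bus, and 2 kW rating are assumed values, not the paper's specification.

```python
# Standard continuous-conduction boost-converter sizing arithmetic for a
# front-end stage lifting a fuel-cell or PV string voltage to a DC bus.
def boost_design(v_in, v_out, f_sw, p_out, ripple_frac=0.3):
    d = 1.0 - v_in / v_out                    # ideal CCM duty cycle
    i_in = p_out / v_in                       # average inductor current
    di = ripple_frac * i_in                   # allowed peak-to-peak ripple
    L = v_in * d / (f_sw * di)                # inductor for that ripple
    return d, i_in, L

d, i_in, L = boost_design(v_in=48.0, v_out=400.0, f_sw=50e3, p_out=2000.0)
print(f"duty={d:.2f}, input current={i_in:.1f} A, L={L * 1e6:.0f} uH")
```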
2011-09-01
... 15 V power supply for the IMU; switching 5 V/12 V ATX power supply for the computer and hard drive; an L1/L2 active antenna on a small backplane; USB-to-serial... (Figure 4. UAS Target Location Technology for Ground Based Observers (TLGBO))
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-12-04
The software serves two purposes. The first is to prototype the Sandia High Performance Computing Power Application Programming Interface Specification effort; the specification can be found at http://powerapi.sandia.gov. Prototypes of the specification were developed in parallel with the development of the specification, and release of the prototype will be instructive to anyone who intends to implement it; more specifically, our vendor collaborators will benefit from its availability. The second is direct support of the PowerInsight power measurement device, which was co-developed with Penguin Computing. The software provides a cluster-wide measurement capability enabled by the PowerInsight device and can be used by anyone who purchases one. It allows the user to easily collect power and energy information for a node instrumented with PowerInsight, and it can also serve as an example prototype implementation of the High Performance Computing Power Application Programming Interface Specification.
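As a sketch of the kind of per-node measurement described, the snippet below integrates sampled power readings into energy; the `read_power_watts` function is a hypothetical stand-in, since the actual PowerInsight and Power API calls are not reproduced here.

```python
import time

def read_power_watts():
    """Hypothetical stand-in for a PowerInsight-style sensor read.
    Replace with the actual device/API call."""
    return 95.0  # placeholder value, W

def measure_energy_joules(duration_s, interval_s=0.1):
    """Accumulate energy by trapezoidal integration of power samples."""
    energy = 0.0
    prev = read_power_watts()
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        time.sleep(interval_s)
        cur = read_power_watts()
        energy += 0.5 * (prev + cur) * interval_s  # trapezoid, J
        prev = cur
    return energy

print(f"{measure_energy_joules(1.0):.1f} J")
```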
Haidar, Azzam; Jagode, Heike; Vaccaro, Phil; ...
2018-03-22
The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale- or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis enables us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.
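A power-cap experiment of this kind can be run on commodity Linux hardware through the powercap sysfs exposed by the intel_rapl driver; the sketch below is an assumption-laden illustration (the paper's actual capping mechanism is not specified here) and requires root.

```python
# Sketch of a power-cap experiment using the Linux powercap sysfs
# (intel_rapl driver), capping one CPU package and measuring the
# time and package energy of a compute kernel under that cap.
import time

RAPL = "/sys/class/powercap/intel-rapl:0"

def set_cap_watts(watts):
    with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
        f.write(str(int(watts * 1e6)))  # microwatts; requires root

def energy_joules():
    # Counter wraps eventually; ignored in this short sketch.
    with open(f"{RAPL}/energy_uj") as f:
        return int(f.read()) / 1e6

def run_under_cap(watts, kernel):
    """Return (runtime_s, energy_J) for `kernel` under a power cap."""
    set_cap_watts(watts)
    e0, t0 = energy_joules(), time.monotonic()
    kernel()
    return time.monotonic() - t0, energy_joules() - e0

# e.g. run_under_cap(60, lambda: sum(i * i for i in range(10**7)))
```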
A Bidirectional Brain-Machine Interface Featuring a Neuromorphic Hardware Decoder.
Boi, Fabio; Moraitis, Timoleon; De Feo, Vito; Diotalevi, Francesco; Bartolozzi, Chiara; Indiveri, Giacomo; Vato, Alessandro
2016-01-01
Bidirectional brain-machine interfaces (BMIs) establish a two-way direct communication link between the brain and the external world. A decoder translates recorded neural activity into motor commands and an encoder delivers sensory information collected from the environment directly to the brain creating a closed-loop system. These two modules are typically integrated in bulky external devices. However, the clinical support of patients with severe motor and sensory deficits requires compact, low-power, and fully implantable systems that can decode neural signals to control external devices. As a first step toward this goal, we developed a modular bidirectional BMI setup that uses a compact neuromorphic processor as a decoder. On this chip we implemented a network of spiking neurons built using its ultra-low-power mixed-signal analog/digital circuits. On-chip on-line spike-timing-dependent plasticity synapse circuits enabled the network to learn to decode neural signals recorded from the brain into motor outputs controlling the movements of an external device. The modularity of the BMI allowed us to tune the individual components of the setup without modifying the whole system. In this paper, we present the features of this modular BMI and describe how we configured the network of spiking neuron circuits to implement the decoder and to coordinate it with the encoder in an experimental BMI paradigm that connects bidirectionally the brain of an anesthetized rat with an external object. We show that the chip learned the decoding task correctly, allowing the interfaced brain to control the object's trajectories robustly. Based on our demonstration, we propose that neuromorphic technology is mature enough for the development of BMI modules that are sufficiently low-power and compact, while being highly computationally powerful and adaptive.
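The learning mechanism named here, spike-timing-dependent plasticity, is commonly modeled with an exponential pair-based rule; a minimal sketch follows, with illustrative constants (the chip's actual circuit parameters are not given in the abstract).

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012   # learning rates (illustrative)
TAU = 20e-3                      # plasticity time constant, s

def stdp_dw(t_pre, t_post):
    """Pair-based STDP: potentiate if pre precedes post, else depress."""
    dt = t_post - t_pre
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

print(stdp_dw(0.000, 0.005))   # pre before post -> positive dw
print(stdp_dw(0.005, 0.000))   # post before pre -> negative dw
```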
NASA Technical Reports Server (NTRS)
Kahn, R. D.; Thurman, S.; Edwards, C.
1994-01-01
Doppler and ranging measurements between spacecraft can be obtained only when the ratio of total received signal power to noise power density (P_t/N_0) at the receiving spacecraft is sufficiently large that reliable signal detection can be achieved within a reasonable time period. In this article, the requirement on P_t/N_0 for reliable carrier signal detection is calculated as a function of various system parameters, including characteristics of the spacecraft computing hardware and a priori uncertainty in spacecraft-spacecraft relative velocity and acceleration. Also calculated is the P_t/N_0 requirement for reliable detection of a ranging signal, consisting of a carrier with pseudonoise (PN) phase modulation. Once the P_t/N_0 requirement is determined, then for a given set of assumed spacecraft telecommunication characteristics (transmitted signal power, antenna gains, and receiver noise temperatures) it is possible to calculate the maximum range at which a carrier or ranging signal may be acquired. For example, if a Mars lander and a spacecraft approaching Mars are each equipped with 1-m-diameter antennas, the transmitted power is 5 W, and the receiver noise temperatures are 350 K, then S-band carrier signal acquisition can be achieved at ranges exceeding 10 million km. An error covariance analysis illustrates the utility of in situ Doppler and ranging measurements for Mars approach navigation. Covariance analysis results indicate that navigation accuracies of a few kilometers can be achieved with either data type. The analysis also illustrates the dependency of the achievable accuracy on the approach trajectory velocity.
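The worked example in the abstract is a free-space link budget. The sketch below reproduces the arithmetic under my own assumptions for the unstated parameters (S-band carrier near 2.3 GHz and 55% antenna efficiency); the 5 W power, 1-m dishes, 350 K noise temperature, and 10^7 km range are from the abstract.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def pt_over_n0_dbhz(p_tx_w, d_m, freq_hz, t_sys_k, r_m, eff=0.55):
    """Received carrier-power-to-noise-density ratio, dB-Hz, for two
    identical parabolic antennas of diameter d_m on a free-space link."""
    lam = 3e8 / freq_hz
    gain = eff * (math.pi * d_m / lam) ** 2        # antenna gain (each end)
    path = (lam / (4.0 * math.pi * r_m)) ** 2      # free-space loss factor
    p_rx = p_tx_w * gain * gain * path
    return 10.0 * math.log10(p_rx / (K_B * t_sys_k))

# Abstract's example: 5 W, 1-m dishes, 350 K, S-band (~2.3 GHz), 10^7 km.
print(pt_over_n0_dbhz(5.0, 1.0, 2.3e9, 350.0, 1e10))  # ~20 dB-Hz
```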
Development of combined low-emissions burner devices for low-power boilers
NASA Astrophysics Data System (ADS)
Roslyakov, P. V.; Proskurin, Yu. V.; Khokhlov, D. A.
2017-08-01
Low-power water boilers are widely used for autonomous heat supply in various industries. Firetube and water-tube boilers from domestic and foreign manufacturers are widely represented on the Russian market. However, even Russian boilers are supplied with licensed foreign burner devices, which reduces their competitiveness and complicates operating conditions. The task of developing efficient domestic low-emissions burner devices for low-power boilers is therefore quite pressing. A characteristic property of ignition and fuel combustion in such boilers is that they occur in constrained conditions due to the small dimensions of combustion chambers and flame tubes. These processes differ significantly from those in the open combustion chambers of high-duty power boilers, and they have not yet been studied sufficiently. The goals of this paper are to study the processes of ignition and combustion of gaseous and liquid fuels, heat and mass transfer, and NOx emissions in constrained conditions, and to develop a modern combined low-emissions 2.2 MW burner device that provides efficient fuel combustion. A computer model of the burner device is developed, and numerical studies of its operation on different types of fuel are carried out in a working load range from 40 to 100% of nominal. The main features of ignition and combustion of gaseous and liquid fuels in the constrained conditions of the flame tube at nominal and reduced loads are determined; these differ fundamentally from the corresponding processes in steam boiler furnaces. The influence of the burner device's design and operating conditions on fuel underburning and NOx formation is determined. Based on the results of the design studies, a design for a new combined low-emissions burner device is proposed that has several advantages over the prototype.
Dual-purpose self-deliverable lunar surface PV electrical power system
NASA Technical Reports Server (NTRS)
Arnold, Jack H.; Harris, David W.; Cross, Eldon R.; Flood, Dennis J.
1991-01-01
Safe-haven and work-support PV power systems on the lunar surface will likely be required by NASA in support of the manned outpost scheduled for the post-2000 lunar/Mars exploration and colonization initiative. Initial system modeling and computer analysis show that the concept is workable and contains no major high-risk technology issues that cannot be resolved in the 2000 to 2025 timeframe. A specific selection of the best-suited type of electric thruster has not been made; the initial modeling used an ion thruster, but Rocketdyne must also evaluate arcjets and resistojets before a final design can be formulated. As a general observation, it appears that such a system can deliver itself to the Moon using many system elements that would otherwise have to be transported as dead payload mass in more conventional delivery modes. It further appears that a larger power system providing a much higher safe-haven power level is feasible if this delivery system is implemented, perhaps even sufficient to permit resource prospecting and/or laboratory experimentation. The concept permits growth and can be expanded to include cargo transport such as habitat and working modules. In short, the combined payload could be manned soon after landing and checkout. NASA has expended substantial resources in the development of electric propulsion concepts and hardware that can be applied to a lunar transport system such as described herein; the paper may thus represent a viable mission in which previous investments play an invaluable role. A more comprehensive technical paper embodying second-generation analysis and system sizing will be prepared for near-term presentation.
2013-01-01
Background: Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods: Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of the RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results: The statistical significance of the RV increased as the magnitude of the denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions: The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures with a very high correlation (r > 0.9). PMID:23721463
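A minimal sketch of the described procedure follows, resampling patients to form a 95% bootstrap confidence interval for the RV; the function and variable names are mine, not from the paper, and the sketch assumes every clinical group survives each resample with at least two members.

```python
import numpy as np
from scipy.stats import f_oneway

def rel_validity(groups_ref, groups_cmp):
    """RV = F(comparator) / F(reference) across clinical groups."""
    return f_oneway(*groups_cmp).statistic / f_oneway(*groups_ref).statistic

def rv_bootstrap_ci(ref, cmp_, labels, n_boot=500, seed=0):
    """95% bootstrap CI for RV; ref/cmp_ are per-patient scores,
    labels assigns each patient to a clinical group."""
    rng = np.random.default_rng(seed)
    n, rvs = len(labels), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)             # resample patients
        r, c, l = ref[idx], cmp_[idx], labels[idx]
        g_r = [r[l == g] for g in np.unique(l)]
        g_c = [c[l == g] for g in np.unique(l)]
        rvs.append(rel_validity(g_r, g_c))
    return np.percentile(rvs, [2.5, 97.5])
```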
Human dynamics scaling characteristics for aerial inbound logistics operation
NASA Astrophysics Data System (ADS)
Wang, Qing; Guo, Jin-Li
2010-05-01
In recent years, the power-law scaling characteristics of real-life networks, which deviate from the Poisson process, have attracted much interest from scholars. In this paper, we take the whole process of aerial inbound operation in a logistics company as the empirical object. The main aim of this work is to study the statistical scaling characteristics of task-restricted work patterns. We found that the statistical variables in five statistical distributions follow a unimodal distribution with a power-law tail: each distribution exhibits a clear peak, the shape of the left part is close to a Poisson distribution, and the right part shows heavy-tailed scaling. Furthermore, to our surprise, in only one distribution can the right part be approximated by a power law with exponent α = 1.50; the others are larger than 1.50 (three are about 2.50, one is about 3.00). We draw two inferences from these empirical results. First, human behaviors are probably close to both Poisson statistics and power-law distributions at certain levels, and human-computer interaction behaviors may be the most common in logistics operational areas, and perhaps in task-restricted work-pattern areas generally. Second, the hypothesis in Vázquez et al. (2006) [A. Vázquez, J. G. Oliveira, Z. Dezsö, K.-I. Goh, I. Kondor, A.-L. Barabási, Modeling bursts and heavy tails in human dynamics, Phys. Rev. E 73 (2006) 036127] is probably not sufficient; it claimed that human dynamics can be classified into two discrete universality classes. There may be a new human dynamics mechanism that is different from the classical Barabási models.
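Tail exponents like the α ≈ 1.50-3.00 values reported here are commonly estimated with the maximum-likelihood (Hill) estimator over samples above a chosen x_min; a self-contained sketch with a synthetic check follows (the paper's actual fitting method is not stated in the abstract).

```python
import numpy as np

def tail_exponent(samples, x_min):
    """Maximum-likelihood (Hill) estimate of the exponent alpha for a
    power-law tail p(x) ~ x^(-alpha), fitted to samples >= x_min."""
    tail = np.asarray([x for x in samples if x >= x_min], dtype=float)
    return 1.0 + len(tail) / np.sum(np.log(tail / x_min))

# Synthetic check: inverse-CDF sampling of a pure power law, alpha = 2.5,
# i.e. x = x_min * U^(-1/(alpha-1)).
rng = np.random.default_rng(1)
x = (1.0 - rng.random(100_000)) ** (-1.0 / 1.5)
print(tail_exponent(x, x_min=1.0))  # ~2.5
```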
A simplified simulation model for a HPDC die with conformal cooling channels
NASA Astrophysics Data System (ADS)
Frings, Markus; Behr, Marek; Elgeti, Stefanie
2017-10-01
In general, the cooling phase of the high-pressure die casting (HPDC) process is based on complex physical phenomena: solidification of molten material; heat exchange between cast part, die, and cooling fluid; turbulent flow inside the cooling channels that must be considered when computing the heat flux; and interdependency of the properties and temperature of the cooling liquid. Intuitively understanding and analyzing all of these effects when designing HPDC dies is not feasible. A remedy that has become available is numerical design, based for example on shape optimization methods. However, current computing power is not sufficient to perform optimization while fully resolving all physical phenomena. But since suitable objective functions in HPDC very often lead to integral values, e.g., average die temperature, this paper identifies possible simplifications in the modeling of the cooling phase. As a consequence, the computational effort is reduced to an acceptable level. A further aspect that arises in the context of shape optimization is the evaluation of shape gradients. The challenge here is to allow for large shape deformations without remeshing. In our approach, the cooling channels are described by their center lines. The flow profile of the cooling fluid is then estimated from experimental data in the literature for turbulent pipe flows. In combination, the heat flux throughout cavity, die, and cooling channel can be described by one single advection-diffusion equation on a fixed mesh. The parameters in the equation are adjusted based on the positions of the cavity and cooling channel. Both results contribute toward a computationally efficient, yet accurate method that can be employed within the framework of shape optimization of cooling channels in HPDC dies.
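To make the single-equation idea concrete, here is a minimal explicit finite-difference step for a 1D advection-diffusion equation; it is a toy analogue under my own assumptions (periodic boundaries, constant coefficients), not the authors' solver.

```python
import numpy as np

def step_advection_diffusion(T, u, kappa, dx, dt):
    """One explicit FD step of dT/dt + u dT/dx = kappa d2T/dx2
    (upwind advection for u > 0, central diffusion, periodic BCs)."""
    adv = -u * (T - np.roll(T, 1)) / dx
    dif = kappa * (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    return T + dt * (adv + dif)

# Stability requires u*dt/dx <= 1 and 2*kappa*dt/dx**2 <= 1.
T = np.exp(-((np.linspace(0, 1, 200) - 0.3) / 0.05) ** 2)
for _ in range(400):
    T = step_advection_diffusion(T, u=0.5, kappa=1e-4, dx=1 / 200, dt=5e-4)
```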
OnGuard, a Computational Platform for Quantitative Kinetic Modeling of Guard Cell Physiology
Hills, Adrian; Chen, Zhong-Hua; Amtmann, Anna; Blatt, Michael R.; Lew, Virgilio L.
2012-01-01
Stomatal guard cells play a key role in gas exchange for photosynthesis while minimizing transpirational water loss from plants by opening and closing the stomatal pore. Foliar gas exchange has long been incorporated into mathematical models, several of which are robust enough to recapitulate transpirational characteristics at the whole-plant and community levels. Few models of stomata have been developed from the bottom up, however, and none are sufficiently generalized to be widely applicable in predicting stomatal behavior at a cellular level. We describe here the construction of computational models for the guard cell, building on the wealth of biophysical and kinetic knowledge available for guard cell transport, signaling, and homeostasis. The OnGuard software was constructed with the HoTSig library to incorporate explicitly all of the fundamental properties of transporters at the plasma membrane and tonoplast, the salient features of osmolyte metabolism, and the major controls of cytosolic free Ca2+ concentration and pH. The library engenders a structured approach to tier and interrelate computational elements, and the OnGuard software allows ready access to parameters and equations 'on the fly' while enabling the network of components within each model to interact computationally. We show that an OnGuard model readily achieves stability in a set of physiologically sensible baseline or Reference States; we also show the robustness of these Reference States in adjusting to changes in environmental parameters and the activities of major groups of transporters, both at the tonoplast and the plasma membrane. The following article addresses the predictive power of the OnGuard model to generate unexpected and counterintuitive outputs. PMID:22635116
Predicting solar radiation based on available weather indicators
NASA Astrophysics Data System (ADS)
Sauer, Frank Joseph
Solar radiation prediction models are complex and require software that is not available to the household investor, even though the processing power of a normal desktop or laptop computer is sufficient to calculate similar models. This barrier to entry for the average consumer can be removed by a model simple enough to be calculated by hand if necessary. Solar radiation has historically been difficult to predict, and accurate models carry significant assumptions and restrictions on their use. Previous methods have been limited to linear relationships, location restrictions, or input data limited to one atmospheric condition. This research takes a novel approach by combining two techniques within the computational limits of a household computer: clustering and hidden Markov models (HMMs). Clustering limits the large observation space that otherwise restricts the use of HMMs. Instead of using continuous data, which would require significantly more computation, the cluster can be used as a qualitative descriptor of each observation. HMMs incorporate a level of uncertainty and take into account the indirect relationship between meteorological indicators and solar radiation. This reduces the complexity of the model enough to be simply understood and accessible to the average household investor. The solar radiation is treated as an unobservable state that each household is unable to measure. The high temperature and the sky coverage are already available through the local or preferred source of weather information. Using the next day's predicted high temperature and sky coverage, the model groups the data and then predicts the most likely range of radiation. This model uses simple techniques and calculations to give a broad estimate of the solar radiation where no other universal model exists for the average household.
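A count-based version of this clustering-plus-HMM idea fits in a few lines; in the sketch below, weather observations and radiation levels are assumed to be pre-binned into small integer clusters (the dissertation's exact pipeline is not given in the abstract).

```python
import numpy as np

def fit_counts(hidden, obs, n_h, n_o):
    """Count-based estimates of HMM transition and emission matrices,
    with Laplace smoothing so unseen pairs keep nonzero probability."""
    A = np.ones((n_h, n_h))
    B = np.ones((n_h, n_o))
    for t in range(len(hidden) - 1):
        A[hidden[t], hidden[t + 1]] += 1
    for h, o in zip(hidden, obs):
        B[h, o] += 1
    return A / A.sum(1, keepdims=True), B / B.sum(1, keepdims=True)

def predict_next(A, B, h_today, obs_tomorrow):
    """Most likely radiation bin tomorrow given today's bin and
    tomorrow's forecast observation (temperature/sky-cover cluster)."""
    return int(np.argmax(A[h_today] * B[:, obs_tomorrow]))
```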
ERIC Educational Resources Information Center
Miller, Willis H.
1979-01-01
Surveys America's current energy situation and considers means of attaining domestic energy self-sufficiency. Information is presented on hazards of decreasing energy production, traditional energy sources, and exotic energy sources. (Author/DB)
Computer modeling and simulators as part of university training for NPP operating personnel
NASA Astrophysics Data System (ADS)
Volman, M.
2017-01-01
This paper considers aspects of a program for training future nuclear power plant personnel developed by the NPP Department of Ivanovo State Power Engineering University. Computer modeling is used for numerical experiments on the kinetics of nuclear reactors in Mathcad. Simulation modeling is carried out on computer-based and full-scale simulators of a water-cooled power reactor to reproduce neutron-physics reactor measurements and start-up/shutdown procedures.
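Reactor kinetics exercises of the sort described typically start from the point-kinetics equations; a one-delayed-group sketch, with illustrative constants rather than values from the paper, is shown below.

```python
import numpy as np
from scipy.integrate import solve_ivp

BETA, LAMBDA_GEN, DECAY = 0.0065, 1e-4, 0.08  # one delayed group (illustrative)

def point_kinetics(t, y, rho):
    """dn/dt = (rho - beta)/Lambda * n + lambda * C
       dC/dt = beta/Lambda * n - lambda * C"""
    n, c = y
    return [(rho - BETA) / LAMBDA_GEN * n + DECAY * c,
            BETA / LAMBDA_GEN * n - DECAY * c]

# Response of neutron density to a small positive reactivity step,
# starting from the equilibrium precursor concentration.
sol = solve_ivp(point_kinetics, (0, 10), [1.0, BETA / (LAMBDA_GEN * DECAY)],
                args=(0.001,), max_step=0.01)
print(sol.y[0, -1])  # n(10 s) relative to initial n = 1
```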
The Experimental Mathematician: The Pleasure of Discovery and the Role of Proof
ERIC Educational Resources Information Center
Borwein, Jonathan M.
2005-01-01
The emergence of powerful mathematical computing environments, the growing availability of correspondingly powerful (multi-processor) computers and the pervasive presence of the Internet allow for mathematicians, students and teachers, to proceed heuristically and "quasi-inductively." We may increasingly use symbolic and numeric computation,…
Wireless power charging using point of load controlled high frequency power converters
Miller, John M.; Campbell, Steven L.; Chambon, Paul H.; Seiber, Larry E.; White, Clifford P.
2015-10-13
An apparatus for wirelessly charging a battery of an electric vehicle is provided with point-of-load control. The apparatus includes a base unit for generating a direct-current (DC) voltage, regulated by a power level controller. One or more point-of-load converters can be connected to the base unit by a conductor, each comprising a control signal generator that transmits a signal to the power level controller. The output power level of the DC voltage provided by the base unit is controlled by the power level controller such that it is sufficient to power all active load converters when commanded to do so by any of the active controllers, without generating excess power that would otherwise be wasted.
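A sketch of the described control idea follows: the base unit sizes its output to the sum of what the currently active load converters request. The names, the headroom factor, and the request protocol are my assumptions, not details from the patent.

```python
def base_output_power(requests_w, active, headroom=1.05, p_max_w=6600.0):
    """requests_w: power requested by each point-of-load converter (W);
    active: matching booleans derived from each converter's control
    signal. Output covers all active loads without excess generation."""
    demand = sum(p for p, on in zip(requests_w, active) if on)
    return min(headroom * demand, p_max_w)

print(base_output_power([3300.0, 3300.0], [True, False]))  # one active load
```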
A cyclostationary multi-domain analysis of fluid instability in Kaplan turbines
NASA Astrophysics Data System (ADS)
Pennacchi, P.; Borghesani, P.; Chatterton, S.
2015-08-01
Hydraulic instabilities represent a critical problem for Francis and Kaplan turbines, reducing their useful life through increased fatigue on the components and cavitation phenomena. Whereas publications on computational fluid-dynamics models of hydraulic instability are plentiful, the possibility of applying diagnostic techniques based on vibration measurements has not been investigated sufficiently, partly because hydro turbine units are seldom equipped with the appropriate sensors. The aim of this study is to fill this knowledge gap by fully exploiting the potential of combining cyclostationary analysis tools, able to describe complex dynamics such as those of fluid-structure interactions, with order tracking procedures, which allow domain transformations and consequently the separation of synchronous and non-synchronous components. This paper focuses on experimental data obtained on a full-scale Kaplan turbine unit operating in a real power plant, tackling the issues of adapting such diagnostic tools to the analysis of hydraulic instabilities and proposing techniques and methodologies for a highly automated condition monitoring system.
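The order tracking step mentioned here amounts to resampling the vibration signal from uniform time to uniform shaft angle, so that synchronous components become stationary; a minimal sketch (my own construction, not the authors' implementation) follows.

```python
import numpy as np

def order_track(signal, t, shaft_angle):
    """Resample a time-domain vibration signal onto a uniform
    shaft-angle grid (shaft_angle must increase monotonically)."""
    theta_uniform = np.linspace(shaft_angle[0], shaft_angle[-1], len(t))
    t_of_theta = np.interp(theta_uniform, shaft_angle, t)  # invert theta(t)
    return np.interp(t_of_theta, t, signal), theta_uniform

# Example: a slightly varying shaft speed; the 3rd-order component lines
# up exactly at 3 cycles/rev after angular resampling.
t = np.linspace(0, 10, 20000)
speed = 10 + 0.5 * np.sin(0.5 * t)        # rev/s
theta = np.cumsum(speed) * (t[1] - t[0])  # revolutions
x = np.sin(2 * np.pi * 3 * theta)         # 3rd-order vibration
x_ang, th = order_track(x, t, theta)
```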
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozaki, N.; Nellis, W. J.; Mashimo, T.
Materials at high pressures and temperatures are of great current interest for warm dense matter physics, planetary sciences, and inertial fusion energy research. Shock-compression equation-of-state data and optical reflectivities of the fluid dense oxide Gd3Ga5O12 (GGG) were measured at extremely high pressures, up to 2.6 TPa (26 Mbar), generated by high-power laser irradiation and magnetically driven hypervelocity impacts. Above 0.75 TPa, the GGG Hugoniot data approach a universal linear line of fluid metals, and the optical reflectivity most likely reaches a constant value, indicating that GGG undergoes a crossover from fluid semiconductor to poor metal with minimum metallic conductivity (MMC). These results suggest that most fluid compounds, e.g., strong planetary oxides, reach a common state on the universal Hugoniot of fluid metals (UHFM) with MMC at sufficiently extreme pressures and temperatures. Lastly, the systematic behavior of warm dense fluids should provide useful benchmarks for developing theoretical equation-of-state and transport models, and for assessing computational predictions, in the warm dense matter regime.
Simulation of FRET dyes allows quantitative comparison against experimental data
NASA Astrophysics Data System (ADS)
Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander
2018-03-01
Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other leading to new insights into biomolecular dynamics and function.
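The FRET efficiencies mentioned follow the standard Förster relation E = 1/(1 + (r/R0)^6); the sketch below averages this over a dummy distance trajectory, with an illustrative Förster radius (the paper's dye parameters are not reproduced here).

```python
import numpy as np

def fret_efficiency(r_nm, r0_nm=5.4):
    """Instantaneous FRET efficiency for donor-acceptor distance r:
    E = 1 / (1 + (r/R0)^6). R0 here is an illustrative Foerster radius."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Mean efficiency over a simulated dye-distance trajectory (dummy data
# standing in for distances extracted from the coarse-grained run).
traj = np.random.default_rng(0).normal(5.0, 0.8, 10_000)  # distances, nm
print(fret_efficiency(traj).mean())
```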
Efficient model for low-energy transverse beam dynamics in a nine-cell 1.3 GHz cavity
NASA Astrophysics Data System (ADS)
Hellert, Thorsten; Dohlus, Martin; Decking, Winfried
2017-10-01
FLASH and the European XFEL are SASE-FEL user facilities, at which superconducting TESLA cavities are operated in a pulsed mode to accelerate long bunch-trains. Several cavities are powered by one klystron. While the low-level rf system is able to stabilize the vector sum of the accelerating gradient of one rf station sufficiently, the rf parameters of individual cavities vary within the bunch-train. In correlation with misalignments, intra-bunch-train trajectory variations are induced. An efficient model is developed to describe the effect at low beam energy, using numerically adjusted transfer matrices and discrete coupler kick coefficients, respectively. Comparison with start-to-end tracking and dedicated experiments at the FLASH injector will be shown. The short computation time of the derived model allows for comprehensive numerical studies on the impact of misalignments and variable rf parameters on the transverse intra-bunch-train beam stability at the injector module. Results from both statistical multibunch performance studies and the deduction of misalignments from multibunch experiments are presented.
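The model described propagates the beam through numerically adjusted transfer matrices with discrete coupler kicks; a schematic 2x2 version (illustrative numbers, not the fitted machine matrices) is sketched below.

```python
import numpy as np

def track_bunch(x0, xp0, cavity_matrices, kicks):
    """Propagate transverse phase space (x, x') through a chain of
    2x2 cavity transfer matrices, adding a discrete coupler kick
    (an x' offset) after each cavity."""
    state = np.array([x0, xp0])
    for M, dxp in zip(cavity_matrices, kicks):
        state = M @ state
        state[1] += dxp   # coupler kick coefficient, rad
    return state

# Illustrative: eight identical cavities with small, equal kicks.
M = np.array([[1.0, 0.5], [-0.02, 1.0]])
print(track_bunch(1e-4, 0.0, [M] * 8, [1e-6] * 8))
```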
The MST radar technique: Requirements for operational weather forecasting
NASA Technical Reports Server (NTRS)
Larsen, M. F.
1983-01-01
There is a widespread view that the accuracy of mesoscale forecasts, for spatial scales of less than 1000 km and time scales of less than 12 hours, can be improved significantly if resources are applied to the problem in an intensive effort over the next decade. Since the most dangerous and damaging types of weather occur at these scales, there are major advantages to be gained if such a program is successful. The interest in improving short-term forecasting is evident. The technology at the present time is sufficiently developed, both in terms of new observing systems and the computing power to handle the observations, to warrant an intensive effort to improve storm-scale forecasting. The extent to which the so-called MST radar technique fulfills the requirements for an operational mesoscale observing network is reviewed, and the improvements in various types of forecasting that could be expected if such a network were put into operation are delineated.
Study of a new cusp field for an 18 GHz ECR ion source
NASA Astrophysics Data System (ADS)
Rashid, M. H.; Nakagawa, T.; Goto, A.; Yano, Y.
2007-08-01
A feasibility study was performed to generate a new, sufficiently strong mirror cusp magnetic field (CMF) using the coils of the existing room-temperature traditional 18 GHz electron cyclotron resonance ion source (ECRIS) at RIKEN. The CMF configuration was chosen because it confines plasma superbly, needs no multipole magnet to keep the confined plasma quiescent and free of magneto-hydrodynamic (MHD) instability, and makes the system cost-effective. A minimum magnetic field of 13 kG is achieved at the interior wall of the plasma chamber, including the point cusps (PC) on the central axis and the ring cusp (RC) on the mid-plane. Mirror-ratio calculations and electron simulations were performed in the computed CMF, and the field was found to confine electrons for longer than a traditional field does. It is proposed that a powerful CMF ECRIS can be constructed that is capable of producing intense highly charged ion (HCI) beams of light and heavy elements.
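The mirror-ratio calculation referred to determines the loss cone of the confinement; a small sketch of that arithmetic follows (the 13 kG wall field is from the abstract; the central field minimum is my assumption).

```python
import math

def loss_cone_deg(b_wall_kg, b_min_kg):
    """Mirror ratio R = B_wall/B_min and loss-cone half-angle:
    particles with pitch angle below theta_c escape (sin^2 theta_c = 1/R)."""
    r = b_wall_kg / b_min_kg
    return r, math.degrees(math.asin(math.sqrt(1.0 / r)))

# 13 kG at the wall/cusps with an assumed ~4 kG central minimum.
print(loss_cone_deg(13.0, 4.0))  # R = 3.25, theta_c ~ 33.7 deg
```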
Real-time 3D adaptive filtering for portable imaging systems
NASA Astrophysics Data System (ADS)
Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark
2015-03-01
Portable imaging devices have proven valuable for emergency medical services, both in the field and in hospital environments, and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques for noise reduction and feature enhancement, but it is computationally very demanding and hence often unable to run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of 512x256x128 voxels at a throughput of 10 MVoxels/s.
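For reference, a simple form of adaptive filtering is the local-statistics (Lee-type) filter, which smooths flat regions while preserving high-variance structure; the 3D sketch below is a generic illustration, not the paper's DSP implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_filter_3d(vol, size=3, noise_var=None):
    """Lee-style adaptive smoothing: low-variance (flat) regions are
    averaged; high-variance regions (edges, features) are preserved."""
    mean = uniform_filter(vol, size)
    var = uniform_filter(vol * vol, size) - mean * mean
    if noise_var is None:
        noise_var = np.median(var)          # crude noise estimate
    gain = np.clip(1.0 - noise_var / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + gain * (vol - mean)

vol = np.random.default_rng(0).normal(0, 1, (64, 64, 32)).astype(np.float32)
out = adaptive_filter_3d(vol)
```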
Flexible, fast and accurate sequence alignment profiling on GPGPU with PaSWAS.
Warris, Sven; Yalcin, Feyruz; Jackson, Katherine J L; Nap, Jan Peter
2015-01-01
Obtaining large-scale sequence alignments in a fast and flexible way is an important step in the analysis of next-generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often either not fast enough, limited to dedicated tasks, or not sufficiently accurate due to statistical issues. Current SW implementations that run on graphics hardware do not report the alignment details necessary for further analysis. With the Parallel SW Alignment Software (PaSWAS) it is possible (a) to have easy access to the computational power of NVIDIA-based general-purpose graphics processing units (GPGPUs) to perform high-speed sequence alignments, and (b) to retrieve relevant information such as score, number of gaps, and mismatches. The software reports multiple hits per alignment. The added value of the new SW implementation is demonstrated with two test cases: (1) tag recovery in next-generation sequence data and (2) isotype assignment within an immunoglobulin 454 sequence data set. Both cases show the usability and versatility of the new parallel Smith-Waterman implementation.
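For orientation, the core recurrence that PaSWAS parallelizes on the GPU is the standard Smith-Waterman dynamic program; a deliberately naive CPU sketch (with illustrative scoring parameters) is given below.

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Minimal CPU Smith-Waterman: returns the best local-alignment
    score and the number of DP cells (the part a GPU fills in parallel
    along anti-diagonals)."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0, H[i - 1, j - 1] + s,
                          H[i - 1, j] + gap, H[i, j - 1] + gap)
    return H.max(), H.size

print(smith_waterman("ACACACTA", "AGCACACA"))
```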
Software Support for Transiently Powered Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Der Woude, Joel Matthew
With the continued reduction in the size and cost of computing, power becomes an increasingly heavy burden on system designers for embedded applications. While energy harvesting techniques are an increasingly desirable solution for many deeply embedded applications where size and lifetime are priorities, previous work has shown that energy harvesting provides insufficient power for long-running computation. We present Ratchet, which to the authors' knowledge is the first automatic, software-only checkpointing system for energy harvesting platforms. We show that Ratchet provides a means to extend computation across power cycles consistent with those experienced by energy harvesting devices. We demonstrate the correctness of our system under frequent failures and show that it has an average overhead of 58.9% across a suite of benchmarks representative of embedded applications.
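Ratchet itself is compiler-based, but the underlying idea of extending computation across power cycles can be illustrated with a hand-written checkpoint/restore loop; the sketch below uses a file as a stand-in for nonvolatile memory and is not the paper's mechanism.

```python
import os, pickle

CKPT = "state.ckpt"  # stands in for nonvolatile memory

def checkpoint(state):
    """Write-then-rename so a power failure never leaves a torn file."""
    with open(CKPT + ".tmp", "wb") as f:
        pickle.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(CKPT + ".tmp", CKPT)

def restore(default):
    try:
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return default

# Long-running loop that survives power cycles by resuming from i.
state = restore({"i": 0, "acc": 0})
for i in range(state["i"], 10**6):
    state = {"i": i + 1, "acc": state["acc"] + i}
    if i % 10_000 == 0:
        checkpoint(state)
```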