Balancing computation and communication power in power constrained clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piga, Leonardo; Paul, Indrani; Huang, Wei
Systems, apparatuses, and methods for balancing computation and communication power in power-constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power-constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
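The scheduling idea in this abstract can be sketched in a few lines. The following is an illustrative toy, not the patented implementation; the node names, wait threshold, and per-node power figure are all hypothetical.

```python
# Illustrative toy of the power-balancing idea (not the patented method).
# Node names, the wait threshold, and the per-node power budget are hypothetical.

def rebalance(predicted_wait_s, wait_threshold_s, power_per_node_w):
    """Gate nodes whose predicted wait exceeds the threshold and split the
    freed power budget evenly among the remaining active nodes."""
    gated = {n for n, wait in predicted_wait_s.items() if wait > wait_threshold_s}
    active = [n for n in predicted_wait_s if n not in gated]
    bonus_w = (len(gated) * power_per_node_w / len(active)) if active else 0.0
    return gated, bonus_w

gated, bonus_w = rebalance(
    {"n0": 0.1, "n1": 5.0, "n2": 0.2, "n3": 9.0},  # predicted waits (s)
    wait_threshold_s=1.0,
    power_per_node_w=40.0,  # power freed by gating one node (W)
)
```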
Cloud Computing and the Power to Choose
ERIC Educational Resources Information Center
Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo
2010-01-01
Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…
Agent-Based Multicellular Modeling for Predictive Toxicology
Biological modeling is a rapidly growing field that has benefited significantly from recent technological advances, expanding traditional methods with greater computing power, parameter-determination algorithms, and the development of novel computational approaches to modeling bi...
Voltage profile program for the Kennedy Space Center electric power distribution system
NASA Technical Reports Server (NTRS)
1976-01-01
The Kennedy Space Center voltage profile program computes voltages at all buses greater than 1 kV in the network under various conditions of load. The computation is based upon power flow principles and utilizes a Newton-Raphson iterative load flow algorithm. Power flow conditions throughout the network are also provided. The computer program is designed for both steady state and transient operation. In the steady state mode, automatic tap changing of primary distribution transformers is incorporated. Under transient conditions, such as motor starts, it is assumed that tap changing is not accomplished, so that transformer secondary voltage is allowed to sag.
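The Newton-Raphson load flow named in this abstract can be illustrated on the smallest possible case. The sketch below solves a hypothetical two-bus system with a single unknown voltage angle; it is not the KSC program, and all per-unit values are invented for illustration.

```python
import math

# Toy two-bus Newton-Raphson load flow (a sketch of the algorithm named in the
# abstract, not the KSC program). For line reactance X, the transferred power
# is P = (V1*V2/X)*sin(delta); we iterate on the voltage angle delta.
# All per-unit values below are hypothetical.

def solve_angle(p_target, v1, v2, x, tol=1e-10, max_iter=50):
    delta = 0.0  # flat start
    for _ in range(max_iter):
        mismatch = (v1 * v2 / x) * math.sin(delta) - p_target
        if abs(mismatch) < tol:
            break
        jacobian = (v1 * v2 / x) * math.cos(delta)  # d(mismatch)/d(delta)
        delta -= mismatch / jacobian                # Newton-Raphson update
    return delta

delta = solve_angle(p_target=0.5, v1=1.0, v2=1.0, x=0.5)
```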
Research on spacecraft electrical power conversion
NASA Technical Reports Server (NTRS)
Wilson, T. G.
1983-01-01
The history of spacecraft electrical power conversion in the literature, research, and practice is reviewed. It is noted that the design techniques, analyses, and understanding developed in that work underpin today's power systems for computers and communication installations. New applications which require more power, improved dynamic response, greater reliability, and lower cost are outlined. The switching mode approach in electronic power conditioning is discussed. Technical aspects of the research are summarized.
The transforming effect of handheld computers on nursing practice.
Thompson, Brent W
2005-01-01
Handheld computers have the power to transform nursing care. The roots of this power are the shift to decentralization of communication, electronic health records, and nurses' greater need for information at the point of care. This article discusses the effects of handheld resources, calculators, databases, electronic health records, and communication devices on nursing practice. The US government has articulated the necessity of implementing the use of handheld computers in healthcare. Nurse administrators need to encourage and promote the diffusion of this technology, which can reduce costs and improve care.
Measured energy savings and performance of power-managed personal computers and monitors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordman, B.; Piette, M.A.; Kinney, K.
1996-08-01
Personal computers and monitors are estimated to use 14 billion kWh/year of electricity, with power management potentially saving $600 million/year by the year 2000. The effort to capture these savings is led by the US Environmental Protection Agency's Energy Star program, which specifies a 30 W maximum demand for the computer and for the monitor when in a "sleep" or idle mode. In this paper the authors discuss measured energy use and estimated savings for power-managed (Energy Star compliant) PCs and monitors. They collected electricity use measurements of six power-managed PCs and monitors in their office and five from two other research projects. The devices are diverse in machine type, use patterns, and context. The analysis method estimates the time spent in each system operating mode (off, low-, and full-power) and combines these with real power measurements to derive hours of use per mode, energy use, and energy savings. Three schedules are explored in the "As-operated," "Standardized," and "Maximum" savings estimates. Energy savings are established by comparing the measurements to a baseline with power management disabled. As-operated energy savings for the eleven PCs and monitors ranged from zero to 75 kWh/year. Under the standard operating schedule (on 20% of nights and weekends), the savings are about 200 kWh/year. An audit of power management features and configurations for several dozen Energy Star machines found only 11% of CPUs fully enabled, while about two thirds of monitors were successfully power managed. The highest priority for greater power management savings is to enable monitors, as opposed to CPUs, since they are generally easier to configure, less likely to interfere with system operation, and offer greater savings.
The difficulty of properly configuring PCs and monitors is the largest current barrier to achieving the savings potential from power management.
NASA Technical Reports Server (NTRS)
Ponchak, George E.; Amadjikpe, Arnaud L.; Choudhury, Debabani; Papapolymerou, John
2011-01-01
In this paper, the first measurements of the received radiated power between antennas located on a conference table, simulating the environment of antennas embedded in laptop computers for 60 GHz Wireless Personal Area Network (WPAN) applications, are presented. A high gain horn antenna and a medium gain microstrip patch antenna are compared for two linear polarizations. It is shown that for a typical conference table arrangement with five computers, books, pens, and coffee cups, the antennas should be placed a minimum of 5 cm above the table, but that a height of greater than 20 cm may be required to maximize the received power in all cases.
Laserthermia: a new computer-controlled contact Nd:YAG system for interstitial local hyperthermia.
Daikuzono, N; Suzuki, S; Tajiri, H; Tsunekawa, H; Ohyama, M; Joffe, S N
1988-01-01
Contact Nd:YAG laser surgery is assuming a greater importance in endoscopic and open surgery, allowing coagulation, cutting, and vaporization with greater precision and safety. A new contact probe allows a wider angle of irradiation and diffusion of low-power laser energy (less than 5 watts), using the interstitial technique for producing local hyperthermia. Temperature sensors that monitor continuously can be placed directly into the surrounding tissue or tumor. Using a computer program interfaced with the laser and sensors, a controlled and stable temperature (e.g., 42 degrees C) can be produced in a known volume of tissue over a prolonged period of time (e.g., 20-40 min). This new laserthermia system, using a single low-power Nd:YAG laser for interstitial local hyperthermia, may offer many new advantages in the experimental treatment and clinical management of carcinoma. A multiple system is now being developed.
Tyrrell, Pascal N; Corey, Paul N; Feldman, Brian M; Silverman, Earl D
2013-06-01
Physicians often assess the effectiveness of treatments on a small number of patients. Multiple-baseline designs (MBDs), based on the Wampold-Worsham (WW) method of randomization and applied to four subjects, have relatively low power. Our objective was to propose another approach with greater power that does not suffer from the time requirements of the WW method applied to a greater number of subjects. The power of a design that involves the combination of two four-subject MBDs was estimated using computer simulation and compared with the four- and eight-subject designs. The effect of a delayed linear response to treatment on the power of the test was also investigated. Power was found to be adequate (>80%) for a standardized mean difference (SMD) greater than 0.8. The effect size associated with 80% power from combined tests was smaller than that of the single four-subject MBD (SMD = 1.3) and comparable with that of the eight-subject MBD (SMD = 0.6). A delayed linear response to the treatment resulted in important reductions in power (20-35%). By combining two four-subject MBD tests, an investigator can detect smaller effect sizes (SMD = 0.8) and complete a more timely and feasible study. Copyright © 2013 Elsevier Inc. All rights reserved.
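The simulation-based power estimation described above can be illustrated generically. The sketch below is a plain Monte Carlo power estimate for a two-sample comparison, not the authors' MBD randomization test; the sample sizes, effect size, and critical value are hypothetical.

```python
import random

# Illustrative Monte Carlo power estimate (not the authors' MBD randomization
# test): the fraction of simulated experiments in which a two-sample z-style
# comparison detects a standardized mean difference (SMD). The group size,
# simulation count, and critical value are hypothetical.

def estimate_power(smd, n_per_group=8, n_sims=2000, z_crit=1.96, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(smd, 1.0) for _ in range(n_per_group)]
        diff = sum(b) / n_per_group - sum(a) / n_per_group
        se = (2.0 / n_per_group) ** 0.5  # known unit variance, for simplicity
        if abs(diff / se) > z_crit:
            hits += 1
    return hits / n_sims

power_large = estimate_power(smd=1.5)  # large effect: high power expected
power_null = estimate_power(smd=0.0)   # no effect: ~alpha false-positive rate
```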
Device 2F112 (F-14A WST (Weapon System Trainers)) Instructor Console Review.
1983-12-01
Cockpit Section-Trainee Station, b. Instructor Operator Station (IOS), c. Computer System, d. Wide-Angle Visual System (WAVS), e. Auxiliary Systems. The relationship of the three stations can be seen in Figure 1 (Device 2F112 general layout: trainee area, hydraulic power room, electric power/air compressors, computer/peripheral area). The stations will be reviewed in greater detail in following sections.
Menzies, Kevin
2014-08-13
The growth in simulation capability over the past 20 years has led to remarkable changes in the design process for gas turbines. The availability of relatively cheap computational power coupled to improvements in numerical methods and physical modelling in simulation codes have enabled the development of aircraft propulsion systems that are more powerful and yet more efficient than ever before. However, the design challenges are correspondingly greater, especially to reduce environmental impact. The simulation requirements to achieve a reduced environmental impact are described along with the implications of continued growth in available computational power. It is concluded that achieving the environmental goals will demand large-scale multi-disciplinary simulations requiring significantly increased computational power, to enable optimization of the airframe and propulsion system over the entire operational envelope. However even with massive parallelization, the limits imposed by communications latency will constrain the time required to achieve a solution, and therefore the position of such large-scale calculations in the industrial design process. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y; Glascoe, L
The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool both for the field engineer/planner with limited computational resources and for the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users to fulfill this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information, with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
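The analytical-versus-numerical tradeoff described in this abstract can be shown on the simplest reactive-transport ingredient, first-order decay dC/dt = -kC. This is an illustrative sketch only; the rate constant and initial concentration are hypothetical, and real biodegradation models couple many such reactions to transport.

```python
import math

# First-order decay dC/dt = -k*C as a stand-in for the chapter's models:
# the analytical solution is a cheap screening tool, the Euler integration a
# (deliberately crude) numerical counterpart. k, C0, and t are hypothetical.

def analytical(c0, k, t):
    return c0 * math.exp(-k * t)  # closed form, essentially free to evaluate

def numerical(c0, k, t, steps):
    dt = t / steps
    c = c0
    for _ in range(steps):
        c += -k * c * dt  # explicit Euler; accuracy costs compute time
    return c

exact = analytical(1.0, 0.5, 4.0)
coarse = numerical(1.0, 0.5, 4.0, steps=10)    # fast, larger error
fine = numerical(1.0, 0.5, 4.0, steps=10000)   # slower, small error
```

More integration steps buy resolution at the cost of compute time, which is the screening-versus-detail tradeoff the chapter discusses.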
The potential benefits of photonics in the computing platform
NASA Astrophysics Data System (ADS)
Bautista, Jerry
2005-03-01
The increase in computational requirements for real-time image processing, complex computational fluid dynamics, very large scale data mining in the health industry and on the Internet, and predictive models for financial markets is driving computer architects to consider new paradigms that rely upon very high speed interconnects within and between computing elements. Further challenges result from reduced power requirements, reduced transmission latency, and greater interconnect density. Optical interconnects may solve many of these problems, with the added benefit of extended reach. In addition, photonic interconnects provide relative EMI immunity, which is becoming an increasing issue with greater dependence on wireless connectivity. However, to be truly functional, the optical interconnect mesh should be able to support arbitration, addressing, etc. completely in the optical domain, with a BER that is more stringent than "traditional" communication requirements. Outlined are challenges in the advanced computing environment, some possible optical architectures and relevant platform technologies, as well as a rough sizing of these opportunities, which are quite large relative to the more "traditional" optical markets.
Study of basic computer competence among public health nurses in Taiwan.
Yang, Kuei-Feng; Yu, Shu; Lin, Ming-Sheng; Hsu, Chia-Ling
2004-03-01
Rapid advances in information technology and media have made distance learning on the Internet possible. This new model of learning allows greater efficiency and flexibility in knowledge acquisition. Since basic computer competence is a prerequisite for this new learning model, this study was conducted to examine the basic computer competence of public health nurses in Taiwan and explore factors influencing computer competence. A national cross-sectional randomized study was conducted with 329 public health nurses. A questionnaire was used to collect data and was delivered by mail. Results indicate that the basic computer competence of public health nurses in Taiwan still needs to be improved (mean = 57.57 ± 2.83, total score range 26-130). Among the five most frequently used software programs, nurses were most knowledgeable about Word and least knowledgeable about PowerPoint. Stepwise multiple regression analysis revealed eight variables (weekly number of hours spent online at home, weekly amount of time spent online at work, weekly frequency of computer use at work, previous computer training, computer at workplace and Internet access, job position, education level, and age) that significantly influenced computer competence, which accounted for 39.0% of the variance. In conclusion, greater computer competence, broader educational programs regarding computer technology, and a greater emphasis on computers at work are necessary to increase the usefulness of distance learning via the Internet in Taiwan. Building a user-friendly environment is important in developing this new media model of learning for the future.
Randomized Trial of Desktop Humidifier for Dry Eye Relief in Computer Users.
Wang, Michael T M; Chan, Evon; Ea, Linda; Kam, Clifford; Lu, Yvonne; Misra, Stuti L; Craig, Jennifer P
2017-11-01
Dry eye is a frequently reported problem among computer users. Low relative humidity environments are recognized to exacerbate signs and symptoms of dry eye, yet are common in offices of computer operators. Desktop USB-powered humidifiers are available commercially, but their efficacy for dry eye relief has not been established. This study aims to evaluate the potential for a desktop USB-powered humidifier to improve tear-film parameters, ocular surface characteristics, and subjective comfort of computer users. Forty-four computer users were enrolled in a prospective, masked, randomized crossover study. On separate days, participants were randomized to 1 hour of continuous computer use, with and without exposure to a desktop humidifier. Lipid-layer grade, noninvasive tear-film breakup time, and tear meniscus height were measured before and after computer use. Following the 1-hour period, participants reported whether ocular comfort was greater than, equal to, or less than that at baseline. The desktop humidifier effected a relative difference in humidity between the two environments of +5.4 ± 5.0% (P < .001). Participants demonstrated no significant differences in lipid-layer grade and tear meniscus height between the two environments (all P > .05). However, a relative increase in the median noninvasive tear-film breakup time of +4.0 seconds was observed in the humidified environment (P < .001), which was associated with a higher proportion of subjects reporting greater comfort relative to baseline (36% vs. 5%, P < .001). Even with a modest increase in relative humidity locally, the desktop humidifier shows potential to improve tear-film stability and subjective comfort during computer use. Trial registration no.: ACTRN12617000326392.
Douglas, GP; Deula, RA; Connor, SE
2003-01-01
Computer-based order entry is a powerful tool for enhancing patient care. A pilot project in the pediatric department of the Lilongwe Central Hospital (LCH) in Malawi, Africa has demonstrated that computer-based order entry (COE): 1) can be successfully deployed and adopted in resource-poor settings, 2) can be built, deployed and sustained at relatively low cost and with local resources, and 3) has a greater potential to improve patient care in developing than in developed countries. PMID:14728338
Inertial effects on mechanically braked Wingate power calculations.
Reiser, R F; Broker, J P; Peterson, M L
2000-09-01
The standard procedure for determining subject power output from a 30-s Wingate test on a mechanically braked (friction-loaded) ergometer includes only the braking resistance and flywheel velocity in the computations. However, the inertial effects associated with accelerating and decelerating the crank and flywheel also require energy and, therefore, represent a component of the subject's power output. The present study was designed to determine the effects of drive-system inertia on power output calculations. Twenty-eight male recreational cyclists completed Wingate tests on a Monark 324E mechanically braked ergometer (resistance: 8.5% body mass (BM), starting cadence: 60 rpm). Power outputs were then compared using both standard (without inertial contribution) and corrected methods (with inertial contribution) of calculating power output. Relative 5-s peak power and 30-s average power for the corrected method (14.8 +/- 1.2 W x kg(-1) BM; 9.9 +/- 0.7 W x kg(-1) BM) were 20.3% and 3.1% greater than that of the standard method (12.3 +/- 0.7 W x kg(-1) BM; 9.6 +/- 0.7 W x kg(-1) BM), respectively. Relative 5-s minimum power for the corrected method (6.8 +/- 0.7 W x kg(-1) BM) was 6.8% less than that of the standard method (7.3 +/- 0.8 W x kg(-1) BM). The combined differences in the peak power and minimum power produced a fatigue index for the corrected method (54 +/- 5%) that was 31.7% greater than that of the standard method (41 +/- 6%). All parameter differences were significant (P < 0.01). The inertial contribution to power output was dominated by the flywheel; however, the contribution from the crank was evident. These results indicate that the inertial components of the ergometer drive system influence the power output characteristics, requiring care when computing, interpreting, and comparing Wingate results, particularly among different ergometer designs and test protocols.
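The inertial correction at issue can be sketched directly from the abstract: standard Wingate power uses only braking torque times flywheel speed, while the corrected power adds the flywheel's acceleration term. The inertia, braking torque, and speeds below are hypothetical, not Monark 324E specifications.

```python
# Sketch of the inertial correction (values are hypothetical, not Monark 324E
# specifications). Standard Wingate power counts only braking torque times
# flywheel speed; the corrected power adds the term I * omega * d(omega)/dt.

def wingate_powers(omega_prev, omega_now, dt, braking_torque, flywheel_inertia):
    omega_mid = 0.5 * (omega_prev + omega_now)       # mean speed (rad/s)
    standard = braking_torque * omega_mid            # friction component only
    alpha = (omega_now - omega_prev) / dt            # angular acceleration
    inertial = flywheel_inertia * alpha * omega_mid  # energy into the flywheel
    return standard, standard + inertial

std, corr = wingate_powers(omega_prev=10.0, omega_now=11.0, dt=0.5,
                           braking_torque=4.0, flywheel_inertia=0.9)
```

During the sprint's acceleration phase the corrected value exceeds the standard one, matching the higher corrected peak power reported above; during deceleration the sign of the inertial term reverses.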
Power spectrum, correlation function, and tests for luminosity bias in the CfA redshift survey
NASA Astrophysics Data System (ADS)
Park, Changbom; Vogeley, Michael S.; Geller, Margaret J.; Huchra, John P.
1994-08-01
We describe and apply a method for directly computing the power spectrum for the galaxy distribution in the extension of the Center for Astrophysics Redshift Survey. Tests show that our technique accurately reproduces the true power spectrum for k greater than 0.03 h Mpc^-1. The dense sampling and large spatial coverage of this survey allow accurate measurement of the redshift-space power spectrum on scales from 5 to approximately 200 h^-1 Mpc. The power spectrum has slope n approximately equal to -2.1 on small scales (lambda less than or equal to 25 h^-1 Mpc) and n approximately -1.1 on scales 30 less than lambda less than 120 h^-1 Mpc. On larger scales the power spectrum flattens somewhat, but we do not detect a turnover. Comparison with N-body simulations of cosmological models shows that an unbiased, open universe CDM model (OMEGA h = 0.2) and a nonzero cosmological constant (CDM) model (OMEGA h = 0.24, lambda_0 = 0.6, b = 1.3) match the CfA power spectrum over the wavelength range we explore. The standard biased CDM model (OMEGA h = 0.5, b = 1.5) fails (99% significance level) because it has insufficient power on scales lambda greater than 30 h^-1 Mpc. Biased CDM with a normalization that matches the Cosmic Microwave Background (CMB) anisotropy (OMEGA h = 0.5, b = 1.4, sigma_8 (mass) = 1) has too much power on small scales to match the observed galaxy power spectrum. This model with b = 1 matches both the Cosmic Background Explorer Satellite (COBE) measurements and the small-scale power spectrum but has insufficient power on scales lambda approximately 100 h^-1 Mpc. We derive a formula for the effect of small-scale peculiar velocities on the power spectrum and combine this formula with the linear-regime amplification described by Kaiser to compute an estimate of the real-space power spectrum.
Two tests reveal luminosity bias in the galaxy distribution: First, the amplitude of the power spectrum is approximately 40% larger for the brightest 50% of galaxies in volume-limited samples that have M_lim greater than M*. This bias in the power spectrum is independent of scale, consistent with the peaks-bias paradigm for galaxy formation. Second, the distribution of local density around galaxies shows that regions of moderate and high density contain both very bright (M less than M* = -19.2 + 5 log h) and fainter galaxies, but that voids preferentially harbor fainter galaxies (approximately 2 sigma significance level).
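The notion of "directly computing the power spectrum" can be illustrated in one dimension. The sketch below is a toy periodogram of a hypothetical sinusoidal density field, far simpler than the survey's three-dimensional analysis.

```python
import cmath, math

# Toy 1-D periodogram: "directly computing the power spectrum" of a density
# field via a discrete Fourier transform of the density contrast. The
# sinusoidal test field is hypothetical; the survey analysis itself is 3-D
# and far more involved.

def power_spectrum(field):
    n = len(field)
    mean = sum(field) / n
    delta = [x - mean for x in field]  # density contrast
    spec = []
    for k in range(n // 2 + 1):
        coeff = sum(d * cmath.exp(-2j * math.pi * k * j / n)
                    for j, d in enumerate(delta))
        spec.append(abs(coeff) ** 2 / n)  # P(k) ~ |delta_k|^2
    return spec

n = 64
# A pure k = 3 cosine mode: all the power should land in bin 3.
field = [1.0 + 0.5 * math.cos(2 * math.pi * 3 * j / n) for j in range(n)]
spec = power_spectrum(field)
```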
ERIC Educational Resources Information Center
O'Hanlon, Charlene; Schaffhauser, Dian
2011-01-01
It's a perfect storm out there, with powerful forces remaking the IT landscape in higher education. On one side, devastating budget cuts are pushing IT departments to identify ever-greater cost savings. On the other, the explosion in mobile devices is pressuring IT to provide anytime, anywhere computing with no downtime. And finally there's…
Efremov, A A; Bratseva, I I
1985-01-01
A new method for the optimized computation of thermoelectric coolers is proposed for the case of variable temperatures within the heat-transfer media. The operation of the device is analyzed when the temperature of the cooled medium is greater than that of the heated one, i.e., under conditions of negative temperature difference. A comparative analysis of the computed and experimental values of cooling capacity and electric power shows fully satisfactory agreement.
Nonlinearity in Social Service Evaluation: A Primer on Agent-Based Modeling
ERIC Educational Resources Information Center
Israel, Nathaniel; Wolf-Branigin, Michael
2011-01-01
Measurement of nonlinearity in social service research and evaluation relies primarily on spatial analysis and, to a lesser extent, social network analysis. Recent advances in geographic methods and computing power, however, allow for the greater use of simulation methods. These advances now enable evaluators and researchers to simulate complex…
NASA Technical Reports Server (NTRS)
1984-01-01
In a number of feasibility studies of turbine rotor designs, engineers of Cummins Engine Company, Inc.'s turbocharger group have utilized a computer program from COSMIC. Part of Cummins research effort is aimed toward introduction of advanced turbocharged engines that deliver extra power with greater fuel efficiency. Company claims use of COSMIC program substantially reduced software development costs.
NASA Astrophysics Data System (ADS)
Laughlin, R. B.
2012-02-01
Whether physics will contribute significantly to unraveling the secrets of life, the grandest challenge of them all, depends critically on whether proteins and other mesoscale objects exhibit emergent law. By this I mean quantitative relationships among their measured properties that are always true. The jury is still out on the matter, for there is evidence both for and against, but it is spotty, on account of the difficulty of measuring 100-1000 nm objects without damaging them quantum mechanically. It is therefore not clear that history will repeat itself. Physics contributed mightily to 20th century materials science through its identification and mastery of powerful macroscopic emergent laws such as crystalline rigidity, superconductivity and ferromagnetism, but it cannot do the same thing in biology, regardless of how powerful computers get, unless nature cooperates. The challenge before us as physicists is therefore not to amass more and more terabytes of data and computational output but rather to search for and, with luck, find operating principles at the scale of life greater than those of chemistry, which is to say, greater than a world ruled by nothing but miraculous accidents.
Robust and Imperceptible Watermarking of Video Streams for Low Power Devices
NASA Astrophysics Data System (ADS)
Ishtiaq, Muhammad; Jaffar, M. Arfan; Khan, Muhammad A.; Jan, Zahoor; Mirza, Anwar M.
With the advent of the Internet, every aspect of life is going online. From remote working to watching videos, everything is now available on the Internet. Alongside the greater business benefits, increased availability, and other advantages of doing business online, there is a major challenge of security and ownership of data. Videos downloaded from an online store can easily be shared among non-intended or unauthorized users. Invisible watermarking is used to hide copyright protection information in the videos. Existing watermarking methods lack robustness and imperceptibility, and their computational complexity does not suit low-power devices. In this paper, we propose a new method to address the problems of robustness and imperceptibility. Experiments have shown that our method has better robustness and imperceptibility and is more computationally efficient in practice than previous approaches. Hence our method can easily be applied on low-power devices.
Nonlinear power flow feedback control for improved stability and performance of airfoil sections
Wilson, David G.; Robinett, III, Rush D.
2013-09-03
A computer-implemented method of determining the pitch stability of an airfoil system, comprising using a computer to numerically integrate a differential equation of motion that includes terms describing PID controller action. In one model, the differential equation characterizes the time-dependent response of the airfoil's pitch angle, alpha. The computer model calculates limit cycles of the model, which represent the stability boundaries of the airfoil system. Once the stability boundary is known, feedback control can be implemented by using, for example, a PID controller to control a feedback actuator. The method allows the PID controller gain constants, K_I, K_p, and K_d, to be optimized. This permits operation closer to the stability boundaries, while preventing the physical apparatus from unintentionally crossing the stability boundaries. Operating closer to the stability boundaries permits greater power efficiencies to be extracted from the airfoil system.
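The patent's core step, numerically integrating an equation of motion containing PID terms, can be sketched as follows. The second-order pitch dynamics and the gain values here are hypothetical stand-ins, not the patented airfoil model.

```python
# Hypothetical stand-in for the patented model: integrate a second-order
# pitch equation of motion with PID feedback terms and observe the response.
# The stiffness, damping, and gains are invented for illustration only.

def simulate_pitch(kp, ki, kd, alpha0=0.2, setpoint=0.0, dt=0.001, steps=60000):
    alpha, alpha_dot, integral = alpha0, 0.0, 0.0
    for _ in range(steps):
        error = setpoint - alpha
        integral += error * dt
        control = kp * error + ki * integral - kd * alpha_dot  # PID action
        # Toy equation of motion: alpha'' = -stiffness*alpha - damping*alpha' + control
        alpha_ddot = -4.0 * alpha - 0.1 * alpha_dot + control
        alpha_dot += alpha_ddot * dt  # semi-implicit Euler step
        alpha += alpha_dot * dt
    return alpha

final_alpha = simulate_pitch(kp=8.0, ki=2.0, kd=3.0)  # settles toward setpoint
```

Sweeping the gains in such a simulation and watching for sustained oscillation is the limit-cycle search the abstract describes, in miniature.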
NASA Technical Reports Server (NTRS)
1983-01-01
An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft will be computable at a unit cost three orders of magnitude lower than presently possible. Over the same period, ground test facilities will improve through the application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.
Light weight portable operator control unit using an Android-enabled mobile phone
NASA Astrophysics Data System (ADS)
Fung, Nicholas
2011-05-01
There have been large gains in the field of robotics, both in hardware sophistication and technical capabilities. However, as more capable robots have been developed and introduced to battlefield environments, the problem of interfacing with human controllers has proven to be challenging. Particularly in the field of military applications, controller requirements can be stringent and can range from size and power consumption to durability and cost. Traditional operator control units (OCUs) tend to resemble laptop personal computers (PCs), as these devices are mobile and have ample computing power. However, laptop PCs are bulky and have greater power requirements. To approach this problem, a lightweight, inexpensive controller was created based on a mobile phone running the Android operating system. It was designed to control an iRobot PackBot through the Army Research Laboratory (ARL) in-house Agile Computing Infrastructure (ACI). The hardware capabilities of the mobile phone, such as Wi-Fi communications and a touch screen interface, together with the flexibility of the Android operating system, made it a compelling platform. The Android-based OCU offers a more portable package and can be easily carried by a Soldier along with normal gear requirements. In addition, one-handed operation of the Android OCU leaves the Soldier an unoccupied hand for greater flexibility. To validate the Android OCU as a capable controller, experimental data were collected evaluating use of the controller and a traditional, tablet-PC-based OCU. Initial analysis suggests that the Android OCU performed positively in qualitative data collected from participants.
Multichannel Phase and Power Detector
NASA Technical Reports Server (NTRS)
Li, Samuel; Lux, James; McMaster, Robert; Boas, Amy
2006-01-01
An electronic signal-processing system determines the phases of input signals arriving in multiple channels, relative to the phase of a reference signal with which the input signals are known to be coherent in both phase and frequency. The system also gives an estimate of the power levels of the input signals. A prototype of the system has four input channels that handle signals at a frequency of 9.5 MHz, but the basic principles of design and operation are extensible to other signal frequencies and greater numbers of channels. The prototype system consists mostly of three parts: an analog-to-digital converter (ADC) board, which coherently digitizes the input signals in synchronism with the reference signal and performs some simple processing; a digital signal processor (DSP) in the form of a field-programmable gate array (FPGA) board, which performs most of the phase- and power-measurement computations on the digital samples generated by the ADC board; and a carrier board, which allows a personal computer to retrieve the phase and power data. The DSP contains four independent phase-only tracking loops, each of which tracks the phase of one of the preprocessed input signals relative to that of the reference signal (see figure). The phase values computed by these loops are averaged over intervals, the length of which is chosen to obtain output from the DSP at a desired rate. In addition, a simple sum of squares is computed for each channel as an estimate of the power of the signal in that channel. The relative phases and the power level estimates computed by the DSP could be used for diverse purposes in different settings. For example, if the input signals come from different elements of a phased-array antenna, the phases could be used as indications of the direction of arrival of a received signal and/or as feedback for electronic or mechanical beam steering. The power levels could be used as feedback for automatic gain control in preprocessing of incoming signals.
For another example, the system could be used to measure the phases and power levels of outputs of multiple power amplifiers to enable adjustment of the amplifiers for optimal power combining.
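The relative-phase and sum-of-squares power measurements described above can be sketched numerically. This is an illustrative sketch only; the sample rate, amplitudes, and the one-shot mixing estimate are assumptions, not the FPGA tracking-loop implementation:

```python
import numpy as np

# Illustrative sketch (not the FPGA implementation): estimate the phase of an
# input tone relative to a coherent reference, plus a sum-of-squares power
# estimate, from coherently sampled data.
fs = 40e6                 # assumed sample rate
f0 = 9.5e6                # signal frequency from the article
n = np.arange(4096)
t = n / fs

true_phase = 0.7          # radians, chosen for demonstration
ref = np.exp(-1j * 2 * np.pi * f0 * t)            # complex reference
x = 2.0 * np.cos(2 * np.pi * f0 * t + true_phase)  # input channel samples

# Mix with the reference and average: the angle of the result is the relative
# phase (a one-shot version of what the tracking loop converges to).
phase_est = np.angle(np.mean(x * ref))

# Simple sum-of-squares power estimate for the channel.
power_est = np.sum(x**2) / len(x)

print(phase_est, power_est)
```

In a real multichannel system this estimate would be formed per channel and averaged over the chosen output interval.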
Power spectrum, correlation function, and tests for luminosity bias in the CfA redshift survey
NASA Technical Reports Server (NTRS)
Park, Changbom; Vogeley, Michael S.; Geller, Margaret J.; Huchra, John P.
1994-01-01
We describe and apply a method for directly computing the power spectrum for the galaxy distribution in the extension of the Center for Astrophysics Redshift Survey. Tests show that our technique accurately reproduces the true power spectrum for k greater than 0.03 h Mpc(exp -1). The dense sampling and large spatial coverage of this survey allow accurate measurement of the redshift-space power spectrum on scales from 5 to approximately 200 h(exp -1) Mpc. The power spectrum has slope n approximately equal to -2.1 on small scales (lambda less than or equal 25 h(exp -1) Mpc) and n approximately -1.1 on scales 30 less than lambda less than 120 h(exp -1) Mpc. On larger scales the power spectrum flattens somewhat, but we do not detect a turnover. Comparison with N-body simulations of cosmological models shows that an unbiased, open universe CDM model (OMEGA h = 0.2) and a nonzero cosmological constant (LCDM) model (OMEGA h = 0.24, lambda(sub zero) = 0.6, b = 1.3) match the CfA power spectrum over the wavelength range we explore. The standard biased CDM model (OMEGA h = 0.5, b = 1.5) fails (99% significance level) because it has insufficient power on scales lambda greater than 30 h(exp -1) Mpc. Biased CDM with a normalization that matches the Cosmic Microwave Background (CMB) anisotropy (OMEGA h = 0.5, b = 1.4, sigma(sub 8) (mass) = 1) has too much power on small scales to match the observed galaxy power spectrum. This model with b = 1 matches both Cosmic Background Explorer Satellite (COBE) and the small-scale power spectrum but has insufficient power on scales lambda approximately 100 h(exp -1) Mpc. We derive a formula for the effect of small-scale peculiar velocities on the power spectrum and combine this formula with the linear-regime amplification described by Kaiser to compute an estimate of the real-space power spectrum.
Two tests reveal luminosity bias in the galaxy distribution: First, the amplitude of the power spectrum is approximately 40% larger for the brightest 50% of galaxies in volume-limited samples that have M(sub lim) greater than M*. This bias in the power spectrum is independent of scale, consistent with the peaks-bias paradigm for galaxy formation. Second, the distribution of local density around galaxies shows that regions of moderate and high density contain both very bright (M less than M* = -19.2 + 5 log h) and fainter galaxies, but that voids preferentially harbor fainter galaxies (approximately 2 sigma significance level).
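The direct power-spectrum computation the survey analysis builds on can be illustrated in one dimension. The grid size, box length, and Fourier normalization below are assumptions for a minimal sketch; the actual analysis works on a 3D density field with survey-window corrections:

```python
import numpy as np

# Minimal sketch of a gridded power-spectrum estimate (1D for clarity).
rng = np.random.default_rng(0)
L = 200.0                       # box size, h^-1 Mpc (illustrative)
ngrid = 256
delta = rng.normal(size=ngrid)  # stand-in for the density contrast field

# Approximate the continuous Fourier transform of delta on the grid,
# then form P(k) = |delta_k|^2 / V.
dk = np.fft.rfft(delta) * (L / ngrid)
k = 2 * np.pi * np.fft.rfftfreq(ngrid, d=L / ngrid)
pk = np.abs(dk)**2 / L

print(k[1], pk.shape)
```

In the real analysis, P(k) would also be averaged over modes of similar |k| and corrected for shot noise and the survey selection function.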
Solid-state Isotopic Power Source for Computer Memory Chips
NASA Technical Reports Server (NTRS)
Brown, Paul M.
1993-01-01
Recent developments in materials technology now make it possible to fabricate nonthermal thin-film radioisotopic energy converters (REC) with a specific power of 24 W/kg and a 10-year working life at 5 to 10 watts. This creates applications never before possible, such as placing the power supply directly on integrated circuit chips. The efficiency of the REC is about 25 percent, which is two to three times greater than the 6 to 8 percent capabilities of current thermoelectric systems. Radioisotopic energy converters have the potential to meet many future space power requirements for a wide variety of applications with less mass, better efficiency, and less total area than other power conversion options. These benefits result in significant dollar savings over the projected mission lifetime.
Computer usage and national energy consumption: Results from a field-metering study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desroches, Louis-Benoit; Fuchs, Heidi; Greenblatt, Jeffery
The electricity consumption of miscellaneous electronic loads (MELs) in the home has grown in recent years, and is expected to continue rising. Consumer electronics, in particular, are characterized by swift technological innovation, with varying impacts on energy use. Desktop and laptop computers make up a significant share of MELs electricity consumption, but their national energy use is difficult to estimate, given uncertainties around shifting user behavior. This report analyzes usage data from 64 computers (45 desktop, 11 laptop, and 8 unknown) collected in 2012 as part of a larger field monitoring effort of 880 households in the San Francisco Bay Area, and compares our results to recent values from the literature. We find that desktop computers are used for an average of 7.3 hours per day (median = 4.2 h/d), while laptops are used for a mean 4.8 hours per day (median = 2.1 h/d). The results for laptops are likely underestimated since they can be charged in other, unmetered outlets. Average unit annual energy consumption (AEC) for desktops is estimated to be 194 kWh/yr (median = 125 kWh/yr), and for laptops 75 kWh/yr (median = 31 kWh/yr). We estimate national annual energy consumption for desktop computers to be 20 TWh. National annual energy use for laptops is estimated to be 11 TWh, markedly higher than previous estimates, likely reflective of laptops drawing more power in On mode in addition to greater market penetration. This result for laptops, however, carries relatively higher uncertainty compared to desktops. Different study methodologies and definitions, changing usage patterns, and uncertainty about how consumers use computers must be considered when interpreting our results with respect to existing analyses.
Finally, as energy consumption in On mode is predominant, we outline several energy savings opportunities: improved power management (defaulting to low-power modes after periods of inactivity as well as power scaling), matching the rated power of power supplies to computing needs, and improving the efficiency of individual components.
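The reported usage and AEC figures can be cross-checked with simple arithmetic. This is a back-of-envelope sketch that ignores sleep and off-mode draw, which the full metering study does account for:

```python
# Back-of-envelope check of the reported desktop figures: what average
# draw during use is implied by the mean AEC and mean daily usage?
hours_per_day = 7.3          # mean desktop usage from the study
aec_kwh = 194                # mean desktop annual energy consumption

implied_on_power_w = aec_kwh * 1000 / (hours_per_day * 365)
print(round(implied_on_power_w, 1))   # roughly 73 W average draw while in use
```

The implied average draw is plausible for a desktop in On mode, which is consistent with the report's conclusion that On-mode consumption dominates.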
Annual Industrial Capabilities Report to Congress
2013-10-01
platform concepts for airframe, propulsion, sensors, weapons integration, avionics, and active and passive survivability features will all be explored...for full integration into the National Airspace System. Greater computing power, combined with developments in miniaturization, sensors, and...the design engineering skills for missile propulsion systems is at risk. The Department relies on the viability of a small number of SRM and turbine
Fast Dynamic Simulation-Based Small Signal Stability Assessment and Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acharya, Naresh; Baone, Chaitanya; Veda, Santosh
2014-12-31
Power grid planning and operation decisions are made based on simulation of the dynamic behavior of the system. Enabling substantial energy savings while increasing the reliability of the aging North American power grid through improved utilization of existing transmission assets hinges on the adoption of wide-area measurement systems (WAMS) for power system stabilization. However, adoption of WAMS alone will not suffice if the power system is to reach its full entitlement in stability and reliability. It is necessary to enhance predictability with "faster than real-time" dynamic simulations that will enable assessment of dynamic stability margins, proactive real-time control, and improved grid resiliency to fast time-scale phenomena such as cascading network failures. Present-day dynamic simulations are performed only during offline planning studies, considering only worst case conditions such as summer peak, winter peak days, etc. With widespread deployment of renewable generation, controllable loads, energy storage devices and plug-in hybrid electric vehicles expected in the near future and greater integration of cyber infrastructure (communications, computation and control), monitoring and controlling the dynamic performance of the grid in real-time would become increasingly important. The state-of-the-art dynamic simulation tools have limited computational speed and are not suitable for real-time applications, given the large set of contingency conditions to be evaluated. These tools are optimized for best performance of single-processor computers, but the simulation is still several times slower than real-time due to its computational complexity. With recent significant advances in numerical methods and computational hardware, the expectations have been rising towards more efficient and faster techniques to be implemented in power system simulators.
This is a natural expectation, given that the core solution algorithms of most commercial simulators were developed decades ago, when High Performance Computing (HPC) resources were not commonly available.
Finding Minimum-Power Broadcast Trees for Wireless Networks
NASA Technical Reports Server (NTRS)
Arabshahi, Payman; Gray, Andrew; Das, Arindam; El-Sharkawi, Mohamed; Marks, Robert, II
2004-01-01
Some algorithms have been devised for use in a method of constructing tree graphs that represent connections among the nodes of a wireless communication network. These algorithms provide for determining the viability of any given candidate connection tree and for generating an initial set of viable trees that can be used in any of a variety of search algorithms (e.g., a genetic algorithm) to find a tree that enables the network to broadcast from a source node to all other nodes while consuming the minimum amount of total power. The method yields solutions better than those of a prior algorithm known as the broadcast incremental power algorithm, albeit at a slightly greater computational cost.
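The broadcast incremental power (BIP) heuristic that the article uses as its baseline can be sketched as follows. The coordinates, path-loss exponent alpha = 2, and function names are illustrative assumptions, not taken from the article:

```python
import math

# Sketch of the Broadcast Incremental Power (BIP) heuristic: grow the
# broadcast tree by whichever cheapest *increase* in some in-tree node's
# transmit power reaches a new node (transmit power ~ distance^alpha).
def bip(coords, source=0, alpha=2):
    n = len(coords)

    def cost(i, j):
        (x1, y1), (x2, y2) = coords[i], coords[j]
        return ((x1 - x2)**2 + (y1 - y2)**2) ** (alpha / 2)

    in_tree = {source}
    tx_power = [0.0] * n          # current transmit power of each node
    parent = [None] * n
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                # Incremental power: extra power node i needs to also reach j.
                inc = max(cost(i, j) - tx_power[i], 0.0)
                if best is None or inc < best[0]:
                    best = (inc, i, j)
        inc, i, j = best
        tx_power[i] = max(tx_power[i], cost(i, j))
        parent[j] = i
        in_tree.add(j)
    return parent, sum(tx_power)

# Three collinear nodes: BIP relays through the middle node (total power 2)
# rather than blasting from the source to the far node (power 4).
parent, total = bip([(0, 0), (1, 0), (2, 0)])
print(parent, total)
```

The article's search-based methods explore the space of such trees further, which is how they beat BIP at some extra computational cost.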
Analytical time-domain Green’s functions for power-law media
Kelly, James F.; McGough, Robert J.; Meerschaert, Mark M.
2008-01-01
Frequency-dependent loss and dispersion are typically modeled with a power-law attenuation coefficient, where the power-law exponent ranges from 0 to 2. To facilitate analytical solution, a fractional partial differential equation is derived that exactly describes power-law attenuation; the Szabo wave equation [“Time domain wave-equations for lossy media obeying a frequency power-law,” J. Acoust. Soc. Am. 96, 491–500 (1994)] is an approximation to this equation. This paper derives analytical time-domain Green’s functions in power-law media for exponents in this range. To construct solutions, stable law probability distributions are utilized. For exponents equal to 0, 1/3, 1/2, 2/3, 3/2, and 2, the Green’s function is expressed in terms of Dirac delta, exponential, Airy, hypergeometric, and Gaussian functions. For exponents strictly less than 1, the Green’s functions are expressed as Fox functions and are causal. For exponents greater than or equal to 1, the Green’s functions are expressed as Fox and Wright functions and are noncausal. However, numerical computations demonstrate that for observation points only one wavelength from the radiating source, the Green’s function is effectively causal for power-law exponents greater than or equal to 1. The analytical time-domain Green’s function is numerically verified against the material impulse response function, and the results demonstrate excellent agreement. PMID:19045774
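The power-law attenuation model underlying these Green’s functions has the standard form (notation assumed, with y the power-law exponent):

```latex
\alpha(\omega) = \alpha_0 \,\lvert \omega \rvert^{y}, \qquad 0 \le y \le 2,
```

where y = 2 recovers the classical viscous (Gaussian) case and y = 0 corresponds to frequency-independent loss; the intermediate exponents listed in the abstract are the special cases with closed-form Green’s functions.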
Rankine engine solar power generation. I - Performance and economic analysis
NASA Technical Reports Server (NTRS)
Gossler, A. A.; Orrock, J. E.
1981-01-01
Results of a computer simulation of the performance of a solar flat plate collector powered electrical generation system are presented. The simulation was configured to include locations in New Mexico, North Dakota, Tennessee, and Massachusetts, and considered a water-based heat-transfer fluid collector system with storage. The collectors also powered a Rankine-cycle boiler filled with a low-temperature working fluid. The generator was considered to be run only when excess solar heat and full storage would otherwise require heat purging through the collectors. All power was directed into the utility grid. The economics of adding the solar-powered generator unit were found to depend on site location and collector area; the addition reduced the effective solar cost for collector areas greater than 400-670 sq m. The sites were economically ranked, best to worst: New Mexico, North Dakota, Massachusetts, and Tennessee.
Besmann, Anna; Rios, Kimberly
2012-08-01
Previous research has demonstrated the tendency for humans to anthropomorphize computers; that is, to react to computers as social actors, despite knowing that the computers are mere machines. In the present research, we examined the attribution of both primary (non-uniquely human) and secondary (human-like) emotions to ingroup (teammate) and outgroup (opponent) computer-controlled characters in a video game. We found that participants perceived the teammate character as experiencing more secondary emotions than the opponent character, but that they perceived the teammate and opponent character as experiencing equal levels of primary emotions. Thus, participants anthropomorphized the ingroup character to a greater extent than the outgroup character. These results imply that computers' "emotions" are treated with a similar ingroup/outgroup social regard as the emotions of actual humans.
A.J. Tepley; E.A. Thomann
2012-01-01
Recent increases in computation power have prompted enormous growth in the use of simulation models in ecological research. These models are valued for their ability to account for much of the ecological complexity found in field studies, but this ability usually comes at the cost of losing transparency into how the models work. In order to foster greater understanding...
Estimation of the laser cutting operating cost by support vector regression methodology
NASA Astrophysics Data System (ADS)
Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam
2016-09-01
Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position as well as the workpiece material. In this article, the process factors investigated were: laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. An intelligent soft computing scheme, support vector regression (SVR), was implemented. The performance of the proposed estimator was confirmed with the simulation results. The SVR results are then compared with artificial neural network and genetic programming. According to the results, a greater improvement in estimation accuracy can be achieved through the SVR compared to other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.
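The kernel-regression idea behind SVR can be sketched compactly. SVR proper uses an epsilon-insensitive loss solver, so as a runnable stand-in this sketch uses RBF kernel ridge regression, a close cousin; the data and factor weights below are synthetic and hypothetical, purely to illustrate fitting cost from process factors:

```python
import numpy as np

# Stand-in for the paper's SVR fit: RBF kernel ridge regression mapping
# normalized process factors (laser power, speed, pressure, focal position,
# thickness -- all hypothetical here) to a synthetic "operating cost".
rng = np.random.default_rng(1)
X = rng.uniform(size=(40, 5))                          # synthetic factors
y = X @ np.array([0.5, -0.3, 0.2, 0.1, 0.4]) + 0.05 * rng.normal(size=40)

def rbf(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-3
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual coefficients

def predict(Xnew):
    return rbf(Xnew, X) @ alpha

err = np.abs(predict(X) - y).mean()
print(err)
```

A real SVR implementation would add the epsilon-insensitive loss and a sparsifying set of support vectors, but the kernel machinery is the same.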
DMG-α--a computational geometry library for multimolecular systems.
Szczelina, Robert; Murzyn, Krzysztof
2014-11-24
The DMG-α library grants researchers in the fields of computational biology, chemistry, and biophysics access to open-source, easy-to-use, and intuitive software for performing fine-grained geometric analysis of molecular systems. The library is capable of computing power diagrams (weighted Voronoi diagrams) in three dimensions with 3D periodic boundary conditions, computing approximate projective 2D Voronoi diagrams on arbitrarily defined surfaces, performing shape-property recognition using α-shape theory, and performing exact Solvent Accessible Surface Area (SASA) computation. The software is written mainly as a template-based C++ library for greater performance, but a rich Python interface (pydmga) is provided as a convenient way to manipulate the DMG-α routines. To illustrate possible applications of the DMG-α library, we present results of sample analyses which allowed us to determine nontrivial geometric properties of two Escherichia coli-specific lipids as emerging from molecular dynamics simulations of relevant model bilayers.
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
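The preconditioned iterative approach can be illustrated with a small sketch. Here a simple Jacobi (diagonal) preconditioner stands in for the thesis's low-stretch spanning-tree preconditioner, applied to a small symmetric diagonally dominant (SDD) test matrix standing in for a power-flow system; all names and sizes are illustrative:

```python
import numpy as np

# Preconditioned conjugate gradient (PCG) on an SDD system. The thesis
# preconditions with a low-stretch spanning tree; a Jacobi preconditioner
# is used here only to show the mechanics of the iteration.
def pcg(A, b, Minv_diag, tol=1e-10, maxit=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv_diag * r           # apply the (diagonal) preconditioner
    p = z.copy()
    it = 0
    while np.linalg.norm(r) > tol and it < maxit:
        Ap = A @ p
        a = (r @ z) / (p @ Ap)
        x += a * p
        r_new = r - a * Ap
        z_new = Minv_diag * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        it += 1
    return x, it

# 1D Laplacian plus a varying diagonal shift: SDD and positive definite,
# loosely resembling the structure of power-flow linear systems.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.diag(np.linspace(0.0, 1.0, n))
b = np.ones(n)
x, iters = pcg(A, b, 1.0 / np.diag(A))
resid = np.linalg.norm(A @ x - b)
print(iters, resid)
```

The point of the chain method is that a tree-based preconditioner keeps the iteration count low as the system grows, where LU factorization cost grows much faster.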
Capacity of a direct detection optical communication channel
NASA Technical Reports Server (NTRS)
Tan, H. H.
1980-01-01
The capacity of a free space optical channel using a direct detection receiver is derived under both peak and average signal power constraints and without a signal bandwidth constraint. The addition of instantaneous noiseless feedback from the receiver to the transmitter does not increase the channel capacity. In the absence of received background noise, an optimally coded PPM system is shown to achieve capacity in the limit as signal bandwidth approaches infinity. In the case of large peak to average signal power ratios, an interleaved coding scheme with PPM modulation is shown to have a computational cutoff rate far greater than ordinary coding schemes.
Farabi, Sarah S; Prasad, Bharati; Quinn, Lauretta; Carley, David W
2014-01-15
To determine the effects of dronabinol on quantitative electroencephalogram (EEG) markers of the sleep process, including power distribution and ultradian cycling in 15 patients with obstructive sleep apnea (OSA). EEG (C4-A1) relative power (% total) in the delta, theta, alpha, and sigma bands was quantified by fast Fourier transformation (FFT) over 28-second intervals. An activation ratio (AR = [alpha + sigma] / [delta + theta]) also was computed for each interval. To assess ultradian rhythms, the best-fitting cosine wave was determined for AR and each frequency band in each polysomnogram (PSG). Fifteen subjects were included in the analysis. Dronabinol was associated with significantly increased theta power (p = 0.002). During the first half of the night, dronabinol decreased sigma power (p = 0.03) and AR (p = 0.03), and increased theta power (p = 0.0006). At increasing dronabinol doses, ultradian rhythms accounted for a greater fraction of EEG power variance in the delta band (p = 0.04) and AR (p = 0.03). Females had higher amplitude ultradian rhythms than males (theta: p = 0.01; sigma: p = 0.01). Decreasing AHI was associated with increasing ultradian rhythm amplitudes (sigma: p < 0.001; AR: p = 0.02). At the end of treatment, lower relative power in the theta band (p = 0.02) and lower AHI (p = 0.05) correlated with a greater decrease in sleepiness from baseline. This exploratory study demonstrates that in individuals with OSA, dronabinol treatment may yield a shift in EEG power toward delta and theta frequencies and a strengthening of ultradian rhythms in the sleep EEG.
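The band-power and activation-ratio measures can be sketched with an FFT over a single 28-second interval. The sampling rate, band edges, and synthetic signal below are assumptions; the study's exact preprocessing is not specified here:

```python
import numpy as np

# Sketch of the paper's EEG measures: relative band power over a 28-second
# interval and the activation ratio AR = (alpha + sigma) / (delta + theta).
fs = 128                        # assumed sampling rate, Hz
t = np.arange(28 * fs) / fs
rng = np.random.default_rng(2)
# Synthetic EEG: strong 2 Hz (delta) plus weaker 10 Hz (alpha) and noise.
eeg = (np.sin(2 * np.pi * 2 * t)
       + 0.3 * np.sin(2 * np.pi * 10 * t)
       + 0.1 * rng.normal(size=t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(np.fft.rfft(eeg))**2

def band_power(lo, hi):
    return psd[(freqs >= lo) & (freqs < hi)].sum()

total = band_power(0.5, 16)     # assumed band edges
delta, theta = band_power(0.5, 4) / total, band_power(4, 8) / total
alpha, sigma = band_power(8, 12) / total, band_power(12, 16) / total
ar = (alpha + sigma) / (delta + theta)
print(round(ar, 3))
```

Repeating this per 28-second interval across the night yields the time series to which the study fits cosine waves when quantifying ultradian rhythms.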
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curto, Sergio; Taj-Eldin, Mohammed; Fairchild, Dillon
Purpose: The relationship between microwave ablation system operating frequency and ablation performance is not currently well understood. The objective of this study was to comparatively assess the differences in microwave ablation at 915 MHz and 2.45 GHz. Methods: Analytical expressions for electromagnetic radiation from point sources were used to compare power deposition at the two frequencies of interest. A 3D electromagnetic-thermal bioheat transfer solver was implemented with the finite element method to characterize power deposition and thermal ablation with asymmetrical insulated dipole antennas (single-antenna and dual-antenna synchronous arrays). Simulation results were validated against experiments in ex vivo tissue. Results: Theoretical, computational, and experimental results indicated greater power deposition and larger diameter ablation zones when using a single insulated microwave antenna at 2.45 GHz; experimentally, 32 ± 4.1 mm and 36.3 ± 1.0 mm for 5 and 10 min, respectively, at 2.45 GHz, compared to 24 ± 1.7 mm and 29.5 ± 0.6 mm at 915 MHz, with 30 W forward power at the antenna input port. In experiments, faster heating was observed at locations 5 mm (0.91 vs 0.49 °C/s) and 10 mm (0.28 vs 0.15 °C/s) from the antenna operating at 2.45 GHz. Larger ablation zones were observed with dual-antenna arrays at 2.45 GHz; however, the differences were less pronounced than for single antennas. Conclusions: Single- and dual-antenna array systems operating at 2.45 GHz yield larger ablation zones due to greater power deposition in proximity to the antenna, as well as a greater role of thermal conduction.
NASA Astrophysics Data System (ADS)
Leung, E. M. W.; Bailey, R. E.; Michels, P. H.
1989-03-01
The hybrid pulse power transformer (HPPT) is a unique concept utilizing the ultrafast superconducting-to-normal transition process of a superconductor. When used in the form of a hybrid transformer current-zero switch (HTCS), this creates an approach in which the large, high-power, high-current opening switch in a conventional railgun system can be eliminated. This represents an innovative application of superconductivity to the pulsed power conditioning required for the Strategic Defense Initiative (SDI). The authors explain the working principles of a 100-kJ unit capable of switching up to 500 kA at a frequency of 0.5 Hz and with a system efficiency of greater than 90 percent. Circuit analysis using a computer code called SPICE PLUS was used to verify the HTCS concept. This concept can be scaled up to applications at the several-mega-ampere level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. E. Lawson, R. Marsala, S. Ramakrishnan, X. Zhao, P. Sichta
In order to provide improved and expanded experimental capabilities, the existing Transrex power supplies at PPPL are to be upgraded and modernized. Each of the 39 power supplies consists of two six-pulse silicon controlled rectifier sections forming a twelve-pulse power supply. The first modification is to split each supply into two independent six-pulse supplies by replacing the existing obsolete twelve-pulse firing generator with two commercially available six-pulse firing generators. The second change replaces the existing control link with a faster system, with greater capacity, which will allow for independent control of all 78 power supply sections. The third change replaces the existing Computer Automated Measurement and Control (CAMAC) based fault detector with an Experimental Physics and Industrial Control System (EPICS) compatible unit, eliminating the obsolete CAMAC modules. Finally, the remaining relay logic and interfaces to the "Hardwired Control System" will be replaced with a Programmable Logic Controller (PLC).
Earth Science Informatics Comes of Age
NASA Technical Reports Server (NTRS)
Jodha, Siri; Khalsa, S.; Ramachandran, Rahul
2014-01-01
The volume and complexity of Earth science data have steadily increased, placing ever-greater demands on researchers, software developers and data managers tasked with handling such data. Additional demands arise from requirements being levied by funding agencies and governments to better manage, preserve and provide open access to data. Fortunately, over the past 10-15 years significant advances in information technology, such as increased processing power, advanced programming languages, more sophisticated and practical standards, and near-ubiquitous internet access, have made the jobs of those acquiring, processing, distributing and archiving data easier. These advances have also led to an increasing number of individuals entering the field of informatics as it applies to Geoscience and Remote Sensing. Informatics is the science and technology of applying computers and computational methods to the systematic analysis, management, interchange, and representation of data, information, and knowledge. Informatics also encompasses the use of computers and computational methods to support decision-making and other applications for societal benefits.
CFD Analysis of Emissions for a Candidate N+3 Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud
2015-01-01
An effort was undertaken to analyze the performance of a model Lean-Direct Injection (LDI) combustor designed to meet emissions and performance goals for NASA's N+3 program. Computational predictions of Emissions Index (EINOx) and combustor exit temperature were obtained for operation at typical power conditions expected of a small-core, high pressure-ratio (greater than 50), high T3 inlet temperature (greater than 950K) N+3 combustor. Reacting-flow computations were performed with the National Combustion Code (NCC) for a model N+3 LDI combustor, which consisted of a nine-element LDI flame-tube derived from a previous generation (N+2) thirteen-element LDI design. A consistent approach to mesh-optimization, spray-modeling and kinetics-modeling was used, in order to leverage the lessons learned from previous N+2 flame-tube analysis with the NCC. The NCC predictions for the current, non-optimized N+3 combustor indicated a 74% increase in NOx emissions as compared to that of the emissions-optimized, parent N+2 LDI combustor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paret, Paul P; DeVoto, Douglas J; Narumanchi, Sreekant V
Sintered silver has proven to be a promising candidate for use as a die-attach and substrate-attach material in automotive power electronics components. It holds promise of greater reliability than lead-based and lead-free solders, especially at higher temperatures (greater than 200 degrees Celsius). Accurate predictive lifetime models of sintered silver need to be developed and its failure mechanisms thoroughly characterized before it can be deployed as a die-attach or substrate-attach material in wide-bandgap device-based packages. We present a finite element method (FEM) modeling methodology that can offer greater accuracy in predicting the failure of sintered silver under accelerated thermal cycling. A fracture mechanics-based approach is adopted in the FEM model, and J-integral/thermal cycle values are computed. In this paper, we outline the procedures for obtaining the J-integral/thermal cycle values in a computational model and report on the possible advantage of using these values as modeling parameters in a predictive lifetime model.
Power throttling of collections of computing elements
Bellofatto, Ralph E [Ridgefield, CT]; Coteus, Paul W [Yorktown Heights, NY]; Crumley, Paul G [Yorktown Heights, NY]; Gara, Alan G [Mount Kisco, NY]; Giampapa, Mark E [Irvington, NY]; Gooding, Thomas M [Rochester, MN]; Haring, Rudolf A [Cortlandt Manor, NY]; Megerian, Mark G [Rochester, MN]; Ohmacht, Martin [Yorktown Heights, NY]; Reed, Don D [Mantorville, MN]; Swetz, Richard A [Mahopac, NY]; Takken, Todd [Brewster, NY]
2011-08-16
An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
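A minimal sketch of the control idea, with hypothetical names and a deliberately simple proportional policy; the patent's actual throttling logic is not specified here:

```python
# Hedged sketch: a local control device compares sensor-reported power draw
# from its computing elements against a budget and returns per-node throttle
# factors (1.0 = full speed). Names and policy are illustrative.
def throttle_plan(readings_w, budget_w):
    total = sum(readings_w)
    if total <= budget_w:
        return [1.0] * len(readings_w)
    # Scale every node down proportionally so the sum fits the budget.
    scale = budget_w / total
    return [scale] * len(readings_w)

print(throttle_plan([100, 150, 250], 400))
```

A real controller would apply such factors via frequency/voltage scaling and re-evaluate as the sensors report new readings.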
Ultrasonic power measurement system based on acousto-optic interaction.
He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan
2016-05-01
Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.
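The core of the image-processing chain (binarization followed by intensity extraction from the bright diffraction orders) can be illustrated with a numpy-only sketch; the image, threshold, and spot layout are synthetic stand-ins for a real diffraction image, and the contour-extraction step is omitted:

```python
import numpy as np

# Synthetic stand-in for a diffraction image: dim background plus two
# bright rectangular "diffraction order" spots.
rng = np.random.default_rng(3)
img = rng.uniform(0, 0.1, size=(64, 64))      # background
img[20:24, 30:34] += 1.0                      # synthetic zeroth-order spot
img[40:44, 30:34] += 0.5                      # synthetic first-order spot

mask = img > 0.3                              # binarization step
spot_intensity = img[mask].sum()              # light attributed to the orders

print(mask.sum(), round(spot_intensity, 1))
```

In the paper's software, contour extraction would then separate the individual orders so their relative intensities (which carry the ultrasonic power information) can be computed per spot.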
Davis, J P; Akella, S; Waddell, P H
2004-01-01
Having greater computational power on the desktop for processing taxa data sets has been a dream of biologists and statisticians involved in phylogenetics data analysis. Many existing algorithms have been highly optimized, one example being Felsenstein's PHYLIP code, written in C, for UPGMA and neighbor-joining algorithms. However, conventional computers have not delivered the speedup needed to process more than a few tens of taxa in a reasonable amount of time, making it difficult for phylogenetics practitioners to quickly explore data sets, such as might be done from a laptop computer. We discuss the application of custom computing techniques to phylogenetics. In particular, we apply this technology to speed up UPGMA algorithm execution by a factor of one hundred relative to PHYLIP code running on the same PC. We report on these experiments and discuss how custom computing techniques can be used to accelerate phylogenetics algorithm performance not only on the desktop but also on larger, high-performance computing engines, thus enabling the high-speed processing of data sets involving thousands of taxa.
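For reference, the UPGMA algorithm accelerated in this work can be sketched in a few lines of pure Python: repeatedly merge the two closest clusters and update distances with a size-weighted average. This is a naive O(n^3) sketch for illustration, not the PHYLIP or FPGA implementation.

```python
def upgma(D, labels):
    """Naive UPGMA: merge the closest pair of clusters until one remains,
    using average-linkage (size-weighted mean) distance updates.
    Returns the tree topology as nested tuples."""
    clusters = [((lab,), 1) for lab in labels]  # (members, size)
    D = [row[:] for row in D]                   # copy the distance matrix
    while len(clusters) > 1:
        # find the closest pair of current clusters
        i, j = min(((a, b) for a in range(len(D)) for b in range(a + 1, len(D))),
                   key=lambda p: D[p[0]][p[1]])
        (mi, si), (mj, sj) = clusters[i], clusters[j]
        keep = [k for k in range(len(D)) if k not in (i, j)]
        # distance from the merged cluster to every remaining cluster
        new_row = [(si * D[i][k] + sj * D[j][k]) / (si + sj) for k in keep]
        sub = [[D[a][b] for b in keep] for a in keep]
        for row, v in zip(sub, new_row):
            row.append(v)
        sub.append(new_row + [0.0])
        D = sub
        clusters = [clusters[k] for k in keep] + [((mi, mj), si + sj)]
    return clusters[0][0]
```

With distances AB=2, AC=BC=8, the sketch first joins A and B, then joins that pair with C.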
Relationships between Mechanical Variables in the Traditional and Close-Grip Bench Press.
Lockie, Robert G; Callaghan, Samuel J; Moreno, Matthew R; Risso, Fabrice G; Liu, Tricia M; Stage, Alyssa A; Birmingham-Babauta, Samantha A; Stokes, John J; Giuliano, Dominic V; Lazar, Adrina; Davis, DeShaun L; Orjalo, Ashley J
2017-12-01
The study aim was to determine relationships between mechanical variables in the one-repetition maximum (1RM) traditional bench press (TBP) and close-grip bench press (CGBP). Twenty resistance-trained men completed a TBP and CGBP 1RM. The TBP was performed with the preferred grip; the CGBP with a grip width of 95% biacromial distance. A linear position transducer measured: lift distance and duration; work; and peak and mean power, velocity, and force. Paired samples t-tests (p < 0.05) compared the 1RM and mechanical variables for the TBP and CGBP; effect sizes (d) were also calculated. Pearson's correlations (r; p < 0.05) were used to compute relationships between the TBP and CGBP. 1RM, lift duration, and mean force were greater in the TBP (d = 0.30-3.20). Peak power and velocity were greater for the CGBP (d = 0.50-1.29). The 1RM TBP correlated with CGBP 1RM, power, and force (r = 0.685-0.982). TBP work correlated with CGBP 1RM, lift distance, power, force, and work (r = 0.542-0.931). TBP power correlated with CGBP 1RM, power, force, velocity, and work (r = 0.484-0.704). TBP peak and mean force related to CGBP 1RM, power, and force (r = 0.596-0.980). Due to relationships between the load, work, power, and force for the TBP and CGBP, the CGBP could provide similar strength adaptations to the TBP with long-term use. The velocity profile for the CGBP was different to that of the TBP. The CGBP could be used specifically to improve high-velocity, upper-body pushing movements.
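The statistics reported above (paired effect sizes d and Pearson's r) are standard formulas and can be sketched with the standard library alone. Note the paired-d convention below (mean difference over SD of the differences) is an assumption; the study may have used a pooled-SD variant.

```python
import math

def cohens_d_paired(x, y):
    """Paired-samples effect size: mean of differences / SD of differences."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / sd

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)
```

A perfectly linear pair of variables gives r = 1.0; constant pairwise differences give an infinite d, so the example data below vary.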
Operator performance and localized muscle fatigue in a simulated space vehicle control task
NASA Technical Reports Server (NTRS)
Lewis, J. L., Jr.
1979-01-01
Fourier transforms in a special purpose computer were utilized to obtain power spectral density functions from electromyograms of the biceps brachii, triceps brachii, brachioradialis, flexor carpi ulnaris, brachialis, and pronator teres in eight subjects performing isometric tracking tasks in two directions utilizing a prototype spacecraft rotational hand controller. Analysis of these spectra in general purpose computers aided in defining muscles involved in performing the task, and yielded a derived measure potentially useful in predicting task termination. The triceps was the only muscle to show significant differences in all possible tests for simple effects in both tasks and, overall, was the most consistently involved of the six muscles. The total power monitored for triceps, biceps, and brachialis dropped to minimal levels across all subjects earlier than for other muscles. However, smaller variances existed for the biceps, brachioradialis, brachialis, and flexor carpi ulnaris muscles and could provide longer predictive times due to smaller standard deviations for a greater population range.
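The core computation in this study, power spectral density from an EMG record via the Fourier transform, can be sketched with a simple periodogram. The function and band limits here are illustrative; the original analysis ran on special-purpose hardware with its own windowing.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Total periodogram power of signal x (sample rate fs Hz)
    between lo and hi Hz; a simple proxy for the EMG spectral
    measures used to track localized muscle fatigue."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].sum())
```

A pure 50 Hz tone sampled at 1 kHz concentrates nearly all its power in the 40-60 Hz band, so that band dominates any higher band.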
Irastorza, Ramiro M; d'Avila, Andre; Berjano, Enrique
2018-02-01
The use of ultra-short RF pulses could achieve greater lesion depth immediately after the application of the pulse due to thermal latency. A computer model of irrigated-catheter RF ablation was built to study the impact of thermal latency on the lesion depth. The results showed that the shorter the RF pulse duration (keeping energy constant), the greater the lesion depth during the cooling phase. For instance, after a 10-second pulse, lesion depth grew from 2.05 mm at the end of the pulse to 2.39 mm (17%), while after an ultra-short RF pulse of only 1 second the extra growth was 37% (from 2.22 to 3.05 mm). Importantly, short applications resulted in deeper lesions than long applications (3.05 mm vs. 2.39 mm, for 1- and 10-second pulse, respectively). While shortening the pulse duration produced deeper lesions, the associated increase in applied voltage caused overheating in the tissue: temperatures around 100 °C were reached at a depth of 1 mm in the case of 1- and 5-second pulses. However, since the lesion depth increased during the cooling period, lower values of applied voltage could be applied in short durations in order to obtain lesion depths similar to those in longer durations while avoiding overheating. The thermal latency phenomenon seems to be the cause of significantly greater lesion depth after short-duration high-power RF pulses. Balancing the applied total energy when the voltage and duration are changed is not the optimal strategy since short pulses can also cause overheating. © 2017 Wiley Periodicals, Inc.
Profiling an application for power consumption during execution on a compute node
Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E
2013-09-17
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
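The profiling idea in the claim can be sketched as combining an application's operation mix with a per-operation hardware power profile. All names and numbers below are hypothetical illustrations, not the patent's method.

```python
def app_power_profile(op_counts, hw_profile_watts):
    """Derive an application power profile: for each operation type the
    application performs, scale the hardware's power figure for that
    operation by how often the application performs it."""
    return {op: n * hw_profile_watts[op] for op, n in op_counts.items()}
```

For instance, an application issuing 10 floating-point and 5 memory operations against a profile of 2.0 W and 3.0 W per operation type yields {'flop': 20.0, 'mem': 15.0}.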
The future of fMRI in cognitive neuroscience.
Poldrack, Russell A
2012-08-15
Over the last 20 years, fMRI has revolutionized cognitive neuroscience. Here I outline a vision for what the next 20 years of fMRI in cognitive neuroscience might look like. Some developments that I hope for include increased methodological rigor, an increasing focus on connectivity and pattern analysis as opposed to "blobology", a greater focus on selective inference powered by open databases, and increased use of ontologies and computational models to describe underlying processes. Copyright © 2011 Elsevier Inc. All rights reserved.
Proton Straggling in Thick Silicon Detectors
NASA Technical Reports Server (NTRS)
Selesnick, R. S.; Baker, D. N.; Kanekal, S. G.
2017-01-01
Straggling functions for protons in thick silicon radiation detectors are computed by Monte Carlo simulation. Mean energy loss is constrained by the silicon stopping power, providing higher straggling at low energy and probabilities for stopping within the detector volume. By matching the first four moments of simulated energy-loss distributions, straggling functions are approximated by a log-normal distribution that is accurate for Vavilov κ ≥ 0.3. They are verified by comparison to experimental proton data from a charged particle telescope.
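The moment-matching step can be illustrated in its simplest form: choosing log-normal parameters so that the distribution's mean and variance match targets (the paper matches four moments; two-moment matching is the minimal analogue and is shown here only as a sketch).

```python
import math

def lognormal_params(mean, var):
    """Return (mu, sigma) of the log-normal whose mean and variance
    equal the given targets, via the standard closed-form inversion:
      mean = exp(mu + sigma^2/2)
      var  = (exp(sigma^2) - 1) * exp(2*mu + sigma^2)
    """
    sigma2 = math.log(1.0 + var / mean ** 2)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)
```

A round trip recovers the target moments exactly, which is a convenient sanity check on the inversion.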
Block Oriented Simulation System (BOSS)
NASA Technical Reports Server (NTRS)
Ratcliffe, Jaimie
1988-01-01
Computer simulation is assuming greater importance as a flexible and expedient approach to modeling system and subsystem behavior. Simulation has played a key role in the growth of complex, multiple access space communications such as those used by the space shuttle and the TRW-built Tracking and Data Relay Satellites (TDRS). A powerful new simulator for use in designing and modeling the communication system of NASA's planned Space Station is being developed. Progress to date on the Block (Diagram) Oriented Simulation System (BOSS) is described.
Evaluation of seepage and discharge uncertainty in the middle Snake River, southwestern Idaho
Wood, Molly S.; Williams, Marshall L.; Evetts, David M.; Vidmar, Peter J.
2014-01-01
The U.S. Geological Survey, in cooperation with the State of Idaho, Idaho Power Company, and the Idaho Department of Water Resources, evaluated seasonal seepage gains and losses in selected reaches of the middle Snake River, Idaho, during November 2012 and July 2013, and uncertainty in measured and computed discharge at four Idaho Power Company streamgages. Results from this investigation will be used by resource managers in developing a protocol to calculate and report Adjusted Average Daily Flow at the Idaho Power Company streamgage on the Snake River below Swan Falls Dam, near Murphy, Idaho, which is the measurement point for distributing water to owners of hydropower and minimum flow water rights in the middle Snake River. The evaluated reaches of the Snake River were from King Hill to Murphy, Idaho, for the seepage studies and downstream of Lower Salmon Falls Dam to Murphy, Idaho, for evaluations of discharge uncertainty. Computed seepage was greater than cumulative measurement uncertainty for subreaches along the middle Snake River during November 2012, the non-irrigation season, but not during July 2013, the irrigation season. During the November 2012 seepage study, the subreach between King Hill and C J Strike Dam had a meaningful (greater than cumulative measurement uncertainty) seepage gain of 415 cubic feet per second (ft3/s), and the subreach between Loveridge Bridge and C J Strike Dam had a meaningful seepage gain of 217 ft3/s. The meaningful seepage gain measured in the November 2012 seepage study was expected on the basis of several small seeps and springs present along the subreach, regional groundwater table contour maps, and results of regional groundwater flow model simulations. Computed seepage along the subreach from C J Strike Dam to Murphy was less than cumulative measurement uncertainty during November 2012 and July 2013; therefore, seepage cannot be quantified with certainty along this subreach. 
For the uncertainty evaluation, average uncertainty in discharge measurements at the four Idaho Power Company streamgages in the study reach ranged from 4.3 percent (Snake River below Lower Salmon Falls Dam) to 7.8 percent (Snake River below C J Strike Dam) for discharges less than 7,000 ft3/s in water years 2007–11. This range in uncertainty constituted most of the total quantifiable uncertainty in computed discharge, represented by prediction intervals calculated from the discharge rating of each streamgage. Uncertainty in computed discharge in the Snake River below Swan Falls Dam near Murphy was 10.1 and 6.0 percent at the Adjusted Average Daily Flow thresholds of 3,900 and 5,600 ft3/s, respectively. All discharge measurements and records computed at streamgages have some level of uncertainty that cannot be entirely eliminated. Knowledge of uncertainty at the Adjusted Average Daily Flow thresholds is useful for developing a measurement and reporting protocol for purposes of distributing water to hydropower and minimum flow water rights in the middle Snake River.
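The "meaningful gain" criterion used above (seepage counted only when it exceeds cumulative measurement uncertainty) can be sketched as follows. Root-sum-square combination of the two gage uncertainties is an assumption here; the report's exact uncertainty accounting may differ.

```python
import math

def seepage_gain(q_down, q_up, u_down, u_up):
    """Seepage gain (ft3/s) between an upstream and downstream
    measurement, flagged 'meaningful' only when the gain exceeds
    the combined (root-sum-square) measurement uncertainty."""
    gain = q_down - q_up
    u_total = math.hypot(u_down, u_up)
    return gain, abs(gain) > u_total
```

A 415 ft3/s gain with 150 ft3/s uncertainty at each gage is meaningful; a 100 ft3/s gain with the same uncertainties is not.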
Profiling an application for power consumption during execution on a plurality of compute nodes
Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.
2012-08-21
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
The use of wireless laptop computers for computer-assisted learning in pharmacokinetics.
Munar, Myrna Y; Singh, Harleen; Belle, Donna; Brackett, Carolyn C; Earle, Sandra B
2006-02-15
To implement computer-assisted learning workshops into pharmacokinetics courses in a doctor of pharmacy (PharmD) program. Workshops were designed for students to utilize computer software programs on laptop computers to build pharmacokinetic models to predict drug concentrations resulting from various dosage regimens. In addition, students were able to visualize through graphing programs how altering different parameters changed drug concentration-time curves. Surveys were conducted to measure students' attitudes toward computer technology before and after implementation. Finally, traditional examinations were used to evaluate student learning. Doctor of pharmacy students responded favorably to the use of wireless laptop computers in problem-based pharmacokinetic workshops. Eighty-eight percent (n = 61/69) and 82% (n = 55/67) of PharmD students completed surveys before and after computer implementation, respectively. Prior to implementation, 95% of students agreed that computers would enhance learning in pharmacokinetics. After implementation, 98% of students strongly agreed (p < 0.05) that computers enhanced learning. Examination results were significantly higher after computer implementation (89% with computers vs. 84% without computers; p = 0.01). Implementation of wireless laptop computers in a pharmacokinetic course enabled students to construct their own pharmacokinetic models that could respond to changing parameters. Students had greater comprehension and were better able to interpret results and provide appropriate recommendations. Computer-assisted pharmacokinetic techniques can be powerful tools when making decisions about drug therapy.
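The kind of model students build in such workshops can be sketched with the simplest case, a one-compartment IV bolus: C(t) = (Dose/Vd) * exp(-ke * t). The parameter values in the example are illustrative, not from the course.

```python
import math

def conc_iv_bolus(dose_mg, vd_L, ke_per_h, t_h):
    """Plasma concentration (mg/L) at time t_h hours after an IV bolus,
    one-compartment model with volume of distribution vd_L (L) and
    first-order elimination rate constant ke_per_h (1/h)."""
    return (dose_mg / vd_L) * math.exp(-ke_per_h * t_h)
```

With a 100 mg dose, Vd = 50 L, and a 4 h half-life (ke = ln 2 / 4), the initial concentration of 2 mg/L falls to 1 mg/L after one half-life, exactly the parameter-response behavior the graphing workshops let students visualize.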
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Shawn
This software enables the user to produce Google Earth visualizations of turbine wake effects for wind farms. The visualizations are based on computations of statistical quantities that vary with wind direction and help quantify the effects on power production of upwind turbines on turbines in their wakes. The results of the software are plot images and kml files that can be loaded into Google Earth. The statistics computed are described in greater detail in the paper: S. Martin, C. H. Westergaard, and J. White (2016), Visualizing Wind Farm Wakes Using SCADA Data, in Whither Turbulence and Big Data in the 21st Century? Eds. A. Pollard, L. Castillo, L. Danaila, and M. Glauser. Springer, pp. 231-254.
Personalized keystroke dynamics for self-powered human–machine interfacing.
Chen, Jun; Zhu, Guang; Yang, Jin; Jing, Qingshen; Bai, Peng; Yang, Weiqing; Qi, Xuewei; Su, Yuanjie; Wang, Zhong Lin
2015-01-27
The computer keyboard is one of the most common, reliable, accessible, and effective tools used for human–machine interfacing and information exchange. Although keyboards have been used for hundreds of years for advancing human civilization, studying human behavior by keystroke dynamics using smart keyboards remains a great challenge. Here we report a self-powered, non-mechanical-punching keyboard enabled by contact electrification between human fingers and keys, which converts mechanical stimuli applied to the keyboard into local electronic signals without an external power supply. The intelligent keyboard (IKB) can not only sensitively trigger a wireless alarm system once gentle finger tapping occurs but also trace and record typed content by detecting both the dynamic time intervals between and during the inputting of letters and the force used for each typing action. Such features hold promise for its use as a smart security system that can realize detection, alert, recording, and identification. Moreover, the IKB is able to identify personal characteristics from different individuals, assisted by the behavioral biometric of keystroke dynamics. Furthermore, the IKB can effectively harness typing motions for electricity to charge commercial electronics at arbitrary typing speeds greater than 100 characters per min. Given the above features, the IKB can be potentially applied not only to self-powered electronics but also to artificial intelligence, cyber security, and computer or network access control.
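Two of the simplest keystroke-dynamics features mentioned above, inter-key intervals and typing speed, can be computed from key-press timestamps. This is purely illustrative: the IKB derives such features from triboelectric signals, not operating-system timestamps.

```python
def interkey_intervals(ts):
    """Time gaps (seconds) between consecutive key presses; the basic
    behavioral-biometric feature in keystroke dynamics."""
    return [b - a for a, b in zip(ts, ts[1:])]

def chars_per_minute(ts):
    """Average typing speed over the whole timestamp sequence."""
    if len(ts) < 2:
        return 0.0
    return 60.0 * (len(ts) - 1) / (ts[-1] - ts[0])
```

Typing one key every 0.5 s corresponds to 120 characters per minute, above the 100 characters-per-minute threshold the abstract cites for energy harvesting.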
Reducing power consumption during execution of an application on a plurality of compute nodes
Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.
2013-09-10
Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.
Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion
NASA Technical Reports Server (NTRS)
Ashe, Thomas L.; Otting, William D.
1993-01-01
The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling, which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance in significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.
Automated strip-mine and reclamation mapping from ERTS
NASA Technical Reports Server (NTRS)
Rogers, R. H. (Principal Investigator); Reed, L. E.; Pettyjohn, W. A.
1974-01-01
The author has identified the following significant results. Computer processing techniques were applied to ERTS-1 computer-compatible tape (CCT) data acquired in August 1972 on the Ohio Power Company's coal mining operation in Muskingum County, Ohio. Processing results succeeded in automatically classifying, with an accuracy greater than 90%: (1) stripped earth and major sources of erosion; (2) partially reclaimed areas and minor sources of erosion; (3) water with sedimentation; (4) water without sedimentation; and (5) vegetation. Computer-generated tables listing the area in acres and square kilometers were produced for each target category. Processing results also included geometrically corrected map overlays, one for each target category, drawn on a transparent material by a pen under computer control. Each target category is assigned a distinctive color on the overlay to facilitate interpretation. The overlays, drawn at a scale of 1:250,000 when placed over an AMS map of the same area, immediately provided map locations for each target. These mapping products were generated at a tenth of the cost of conventional mapping techniques.
Electrocortical activity distinguishes between uphill and level walking in humans.
Bradford, J Cortney; Lukos, Jamie R; Ferris, Daniel P
2016-02-01
The objective of this study was to determine if electrocortical activity is different between walking on an incline compared with level surface. Subjects walked on a treadmill at 0% and 15% grades for 30 min while we recorded electroencephalography (EEG). We used independent component (IC) analysis to parse EEG signals into maximally independent sources and then computed dipole estimations for each IC. We clustered cortical source ICs and analyzed event-related spectral perturbations synchronized to gait events. Theta power fluctuated across the gait cycle for both conditions, but was greater during incline walking in the anterior cingulate, sensorimotor and posterior parietal clusters. We found greater gamma power during level walking in the left sensorimotor and anterior cingulate clusters. We also found distinct alpha and beta fluctuations, depending on the phase of the gait cycle for the left and right sensorimotor cortices, indicating cortical lateralization for both walking conditions. We validated the results by isolating movement artifact. We found that the frequency activation patterns of the artifact were different than the actual EEG data, providing evidence that the differences between walking conditions were cortically driven rather than a residual artifact of the experiment. These findings suggest that the locomotor pattern adjustments necessary to walk on an incline compared with level surface may require supraspinal input, especially from the left sensorimotor cortex, anterior cingulate, and posterior parietal areas. These results are a promising step toward the use of EEG as a feed-forward control signal for ambulatory brain-computer interface technologies.
Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service
NASA Astrophysics Data System (ADS)
Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.;
2017-10-01
With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.
Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer
2014-01-01
Motivation: Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored even though it may play an important role in obtaining optimal power. We compared a standard statistical test—a score test—with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene–gene interactions are sought, state-of-the-art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. Results: After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test—up to 23 more associations—whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene–gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13,500. Availability: Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. Contact: heckerma@microsoft.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25075117
AA9int: SNP Interaction Pattern Search Using Non-Hierarchical Additive Model Set.
Lin, Hui-Yi; Huang, Po-Yu; Chen, Dung-Tsa; Tung, Heng-Yuan; Sellers, Thomas A; Pow-Sang, Julio; Eeles, Rosalind; Easton, Doug; Kote-Jarai, Zsofia; Amin Al Olama, Ali; Benlloch, Sara; Muir, Kenneth; Giles, Graham G; Wiklund, Fredrik; Gronberg, Henrik; Haiman, Christopher A; Schleutker, Johanna; Nordestgaard, Børge G; Travis, Ruth C; Hamdy, Freddie; Neal, David E; Pashayan, Nora; Khaw, Kay-Tee; Stanford, Janet L; Blot, William J; Thibodeau, Stephen N; Maier, Christiane; Kibel, Adam S; Cybulski, Cezary; Cannon-Albright, Lisa; Brenner, Hermann; Kaneva, Radka; Batra, Jyotsna; Teixeira, Manuel R; Pandha, Hardev; Lu, Yong-Jie; Park, Jong Y
2018-06-07
The use of single nucleotide polymorphism (SNP) interactions to predict complex diseases has received increasing attention during the past decade, but related statistical methods are still immature. We previously proposed the SNP Interaction Pattern Identifier (SIPI) approach to evaluate 45 SNP interaction patterns. SIPI is statistically powerful but carries a large computational burden. For large-scale studies, it is necessary to use a powerful and computationally efficient method. The objective of this study is to develop an evidence-based mini-version of SIPI for use as a screening tool or on its own, and to evaluate the impact of inheritance mode and model structure on detecting SNP-SNP interactions. We tested two candidate approaches: the 'Five-Full' and 'AA9int' methods. The Five-Full approach is composed of the five full interaction models considering three inheritance modes (additive, dominant and recessive). The AA9int approach is composed of nine interaction models obtained by considering a non-hierarchical model structure and the additive mode. Our simulation results show that AA9int has statistical power similar to SIPI and superior to the Five-Full approach, and that the impact of the non-hierarchical model structure is greater than that of the inheritance mode in detecting SNP-SNP interactions. In summary, AA9int is recommended as a powerful tool to be used either alone or as the screening stage of a two-stage approach (AA9int+SIPI) for detecting SNP-SNP interactions in large-scale studies. The 'AA9int' and 'parAA9int' functions (standard and parallel computing versions) have been added to the SIPI R package, which is freely available at https://linhuiyi.github.io/LinHY_Software/. hlin1@lsuhsc.edu. Supplementary data are available at Bioinformatics online.
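The hierarchical/non-hierarchical distinction is easy to illustrate. The sketch below enumerates every non-empty subset of the two additive main effects and their product, which gives seven candidate models; the actual AA9int set comprises nine models whose exact list is defined in the SIPI R package, so this shows only the flavor of the idea, not the package's enumeration:

```python
from itertools import combinations

# Candidate terms for a pair of additively coded SNPs (0/1/2):
# main effects and their product. "Non-hierarchical" means a model
# may contain the product term without the corresponding mains.
TERMS = ("snp1", "snp2", "snp1*snp2")

def candidate_models():
    """All non-empty subsets of the three additive terms (7 models).

    Note the interaction-only model ('snp1*snp2',), which a
    hierarchical search would never consider.
    """
    models = []
    for k in range(1, len(TERMS) + 1):
        for subset in combinations(TERMS, k):
            models.append(subset)
    return models

for m in candidate_models():
    print(" + ".join(m))
```

In practice each candidate design matrix would be fit by logistic regression and the best model selected by an information criterion; that fitting step is omitted here.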
Reducing power consumption during execution of an application on a plurality of compute nodes
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-06-05
Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.
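A toy interpretation of such in-application directives might look as follows; the directive syntax, component names, and power states here are all invented for illustration and do not come from the patent:

```python
# Assumed power states; real hardware would map these to DVFS levels
# or clock/power gating.
POWER_STATES = {"full": 1.0, "reduced": 0.4, "gated": 0.0}

class ComputeNode:
    def __init__(self):
        self.component_power = {"cpu": "full", "network": "full"}

    def apply_directive(self, directive):
        """Directive format (assumed): 'component:state'."""
        component, state = directive.split(":")
        if state not in POWER_STATES:
            raise ValueError(f"unknown power state {state!r}")
        self.component_power[component] = state

    def run(self, application):
        """Execute (portion, directives) pairs, applying each portion's
        directives for the duration of that portion and logging the
        resulting component power settings."""
        log = []
        for portion, directives in application:
            for d in directives:
                self.apply_directive(d)
            log.append((portion, dict(self.component_power)))
        return log

app = [("compute_phase", ["network:reduced"]),
       ("exchange_phase", ["network:full", "cpu:reduced"])]
for portion, powers in ComputeNode().run(app):
    print(portion, powers)
```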
McFee, R H
1975-07-01
The effects of random waviness, curvature, and tracking error of plane-mirror heliostats in a rectangular array around a central-receiver solar power system are determined by subdividing each mirror into 484 elements, assuming the slope of each element to be representative of the surface slope average at its location, and summing the contributions of all elements and then of all mirrors in the array. Total received power and flux density distribution are computed for a given sun location and set of array parameter values. Effects of shading and blocking by adjacent mirrors are included in the calculation. Alt-azimuth mounting of the heliostats is assumed. Representative curves for two receiver diameters and two sun locations indicate a power loss of 20% for random waviness, curvature, and tracking error of 0.1 degrees rms, 0.002 m^-1, and 0.5 degrees (3-sigma), respectively, for an 18.2-m diameter receiver, and of 0.3 degrees rms, 0.005 m^-1, and greater than 1 degree, respectively, for a 30.4-m diameter receiver.
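The element-then-mirror summation the abstract describes can be sketched in a few lines; the irradiance, mirror size, and uniform incidence angle below are illustrative stand-ins for the study's per-element slope perturbations:

```python
import math

def mirror_power(width_m, height_m, dni_w_per_m2, incidence_deg, n=22):
    """Sum reflected power over an n x n grid of mirror elements.

    Each element contributes area * DNI * cos(incidence). Here the
    incidence angle is uniform across the mirror, whereas the study
    perturbs each element's slope to model waviness, curvature, and
    tracking error (all values here are illustrative).
    """
    elem_area = (width_m * height_m) / (n * n)
    cos_i = math.cos(math.radians(incidence_deg))
    total = 0.0
    for _ in range(n * n):  # 484 elements, as in the study
        total += elem_area * dni_w_per_m2 * cos_i
    return total

# 5 m x 5 m heliostat, 1000 W/m^2 direct normal irradiance, 30 deg incidence
print(round(mirror_power(5.0, 5.0, 1000.0, 30.0)))
```

The array total would then be the same sum taken over all mirrors, with per-mirror shading and blocking factors applied.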
NASA Astrophysics Data System (ADS)
Orhan, K.; Mayerle, R.
2016-12-01
A methodology comprising estimates of power yield, evaluation of the effects of power extraction on flow conditions, and near-field investigations of wake characteristics, recovery, and interactions is described and applied to several straits in Indonesia. Site selection is done with high-resolution, three-dimensional flow models providing sufficient spatiotemporal coverage. Much attention has been given to the meteorological forcing and conditions at the open-sea boundaries to adequately capture the density gradients and flow fields. Model verification using tidal records shows excellent agreement. Sites with adequate depth for energy conversion using horizontal-axis tidal turbines, average kinetic power density greater than 0.5 kW/m2, and surface area larger than 0.5 km2 are defined as energy hotspots. Spatial variation of the average extractable electric power is determined, and the annual tidal energy resource is estimated for the straits in question. The results showed that the potential for tidal power generation in Indonesia is likely to exceed previous predictions, reaching around 4,800 MW. To assess the impact of the devices, flexible-mesh models with higher resolutions have been developed. Effects on flow conditions and near-field turbine wakes are resolved in greater detail with triangular horizontal grids. The energy is assumed to be removed uniformly by sub-grid-scale arrays of turbines, and calculations are made based on velocities at the hub heights of the devices. An additional drag force, resulting in dissipation of 10% to 60% of the pre-existing kinetic power within a flow cross-section, is introduced to capture the impacts. It was found that the effect of power extraction on water levels and flow speeds in adjacent areas is not significant. Results show the effectiveness of the method in capturing wake characteristics and recovery reasonably well at low computational cost.
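The hotspot criterion above combines a kinetic power density threshold with an area threshold. A minimal sketch using the standard kinetic power density formula (the seawater density value is a typical assumption, not taken from the paper):

```python
def kinetic_power_density(rho, v):
    """Kinetic power density of a flow in W/m^2: 0.5 * rho * v^3."""
    return 0.5 * rho * v ** 3

def is_hotspot(mean_power_w_m2, area_km2, min_power=500.0, min_area=0.5):
    """Hotspot per the criteria above: >0.5 kW/m^2 and >0.5 km^2."""
    return mean_power_w_m2 > min_power and area_km2 > min_area

rho_sea = 1025.0  # kg/m^3, typical seawater density (assumed)
# The cubic dependence on speed is why site selection dominates:
# doubling the mean flow speed gives eight times the power density.
for v in (1.0, 2.0):
    print(v, round(kinetic_power_density(rho_sea, v), 1))
```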
Computational methods in drug discovery
Leelananda, Sumudu P; Lindert, Steffen
2016-01-01
The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed. PMID:28144341
NASA Technical Reports Server (NTRS)
Kast, J. R.
1988-01-01
The Upper Atmosphere Research Satellite (UARS) is a three-axis stabilized Earth-pointing spacecraft in a low-Earth orbit. The UARS onboard computer (OBC) uses a Fourier Power Series (FPS) ephemeris representation that includes 42 position and 42 velocity coefficients per axis, with position residuals at 10-minute intervals. New coefficients and 32 hours of residuals are uploaded daily. This study evaluated two backup methods that permit the OBC to compute an approximate spacecraft ephemeris in the event that new ephemeris data cannot be uplinked for several days: (1) extending the use of the FPS coefficients previously uplinked, and (2) switching to a simple circular orbit approximation designed and tested (but not implemented) for LANDSAT-D. The FPS method provides greater accuracy during the backup period and does not require additional ground operational procedures for generating and uplinking an additional ephemeris table. The tradeoff is that the high accuracy of the FPS will be degraded slightly by adopting the longer fit period necessary to obtain backup accuracy for an extended period of time. The results for UARS show that extended use of the FPS is superior to the circular orbit approximation for short-term ephemeris backup.
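A Fourier Power Series ephemeris of the kind described can be evaluated directly. The exact UARS coefficient layout (42 position and 42 velocity coefficients per axis) is not spelled out here, so the single-axis harmonic form below is illustrative only:

```python
import math

def fps_position(t, a0, coeffs, omega):
    """Evaluate a Fourier-series ephemeris for one axis.

    position(t) = a0 + sum_k [a_k*cos(k*omega*t) + b_k*sin(k*omega*t)]

    `coeffs` is a list of (a_k, b_k) pairs. With 42 coefficients per
    axis, as on UARS, this would correspond to roughly 21 harmonics;
    this is an illustrative form of the representation, not the
    flight code.
    """
    pos = a0
    for k, (a_k, b_k) in enumerate(coeffs, start=1):
        pos += a_k * math.cos(k * omega * t) + b_k * math.sin(k * omega * t)
    return pos

# Toy orbit: one harmonic at the orbital rate (period ~96 min, assumed).
period_s = 96.0 * 60.0
omega = 2.0 * math.pi / period_s
coeffs = [(7000.0, 0.0)]  # km amplitude, illustrative
print(round(fps_position(0.0, 0.0, coeffs, omega), 1))
```

Extending the fit period, as the backup method does, amounts to choosing coefficients that trade peak accuracy for slower error growth over the multi-day backup window.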
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-01-10
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-04-17
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
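The scheme described in both abstracts, cutting power when a node enters a blocking operation and restoring it once every node has entered, maps naturally onto a barrier with an arrival action. A threading sketch follows; the power-state dictionary and state names are illustrative bookkeeping, not the patented mechanism:

```python
import threading

class PowerAwareBarrier:
    """Each node reduces power to some components as soon as it enters
    a blocking operation; power is restored once all nodes have
    entered (the barrier's arrival action)."""

    def __init__(self, n_nodes):
        self.power = {i: "full" for i in range(n_nodes)}
        self.lock = threading.Lock()
        # The action runs exactly once, when the last thread arrives,
        # before any waiting thread is released.
        self.barrier = threading.Barrier(n_nodes, action=self.restore_all)

    def restore_all(self):
        with self.lock:
            for node in self.power:
                self.power[node] = "full"

    def enter_blocking_op(self, node_id):
        with self.lock:
            self.power[node_id] = "reduced"  # cut power on entry
        self.barrier.wait()                  # block until all arrive

pab = PowerAwareBarrier(4)
threads = [threading.Thread(target=pab.enter_blocking_op, args=(i,))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(pab.power)  # all nodes back at full power after the barrier
```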
On quantum models of the human mind.
Wang, Hongbin; Sun, Yanlong
2014-01-01
Recent years have witnessed rapidly increasing interest in developing quantum theoretical models of human cognition. Quantum mechanisms have been taken seriously to describe how the mind reasons and decides. Papers in this special issue report the newest results in the field. Here we discuss why the two levels of commitment, treating the human brain as a quantum computer and merely adopting abstract quantum probability principles to model human cognition, should be integrated. We speculate that quantum cognition models gain greater modeling power due to a richer representation scheme. Copyright © 2013 Cognitive Science Society, Inc.
Pair annihilation into neutrinos in strong magnetic fields.
NASA Technical Reports Server (NTRS)
Canuto, V.; Fassio-Canuto, L.
1973-01-01
Among the processes that are of primary importance for the thermal history of a neutron star is electron-positron annihilation into neutrinos and photoneutrinos. These processes are computed in the presence of a strong magnetic field typical of neutron stars, and the results are compared with the zero-field case. It is shown that the neutrino luminosity Q(H) is greater than Q(0) for temperatures up to T of about 3 x 10^8 K and densities up to 10^6 g/cm^3.
Farabi, Sarah S.; Prasad, Bharati; Quinn, Lauretta; Carley, David W.
2014-01-01
Study Objectives: To determine the effects of dronabinol on quantitative electroencephalogram (EEG) markers of the sleep process, including power distribution and ultradian cycling in 15 patients with obstructive sleep apnea (OSA). Methods: EEG (C4-A1) relative power (% total) in the delta, theta, alpha, and sigma bands was quantified by fast Fourier transformation (FFT) over 28-second intervals. An activation ratio (AR = [alpha + sigma] / [delta + theta]) also was computed for each interval. To assess ultradian rhythms, the best-fitting cosine wave was determined for AR and each frequency band in each polysomnogram (PSG). Results: Fifteen subjects were included in the analysis. Dronabinol was associated with significantly increased theta power (p = 0.002). During the first half of the night, dronabinol decreased sigma power (p = 0.03) and AR (p = 0.03), and increased theta power (p = 0.0006). At increasing dronabinol doses, ultradian rhythms accounted for a greater fraction of EEG power variance in the delta band (p = 0.04) and AR (p = 0.03). Females had higher amplitude ultradian rhythms than males (theta: p = 0.01; sigma: p = 0.01). Decreasing AHI was associated with increasing ultradian rhythm amplitudes (sigma: p < 0.001; AR: p = 0.02). At the end of treatment, lower relative power in the theta band (p = 0.02) and lower AHI (p = 0.05) correlated with a greater decrease in sleepiness from baseline. Conclusions: This exploratory study demonstrates that in individuals with OSA, dronabinol treatment may yield a shift in EEG power toward delta and theta frequencies and a strengthening of ultradian rhythms in the sleep EEG. Citation: Farabi SS; Prasad B; Quinn L; Carley DW. Impact of dronabinol on quantitative electroencephalogram (qEEG) measures of sleep in obstructive sleep apnea syndrome. J Clin Sleep Med 2014;10(1):49-56. PMID:24426820
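The activation ratio defined above, AR = (alpha + sigma) / (delta + theta), is straightforward to compute from relative band powers. The sketch below uses a plain discrete Fourier transform for clarity (an FFT over 28-second windows, as in the study, is the efficient equivalent) and conventional band edges, which may differ from the study's exact bins:

```python
import math

# Conventional EEG band edges in Hz (assumed, not from the paper).
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 12.0), "sigma": (12.0, 16.0)}

def relative_band_power(signal, fs):
    """Relative power (% of the four-band total) per EEG band,
    via a plain DFT over the whole window."""
    n = len(signal)
    power = {b: 0.0 for b in BANDS}
    for k in range(1, n // 2):
        freq = k * fs / n
        re = sum(x * math.cos(-2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = sum(x * math.sin(-2 * math.pi * k * i / n) for i, x in enumerate(signal))
        p = re * re + im * im
        for band, (lo, hi) in BANDS.items():
            if lo <= freq < hi:
                power[band] += p
    total = sum(power.values()) or 1.0
    return {b: 100.0 * p / total for b, p in power.items()}

def activation_ratio(rel):
    """AR = (alpha + sigma) / (delta + theta), as defined above."""
    denom = rel["delta"] + rel["theta"]
    return float("inf") if denom == 0.0 else (rel["alpha"] + rel["sigma"]) / denom

# A pure 10 Hz tone should land almost entirely in the alpha band.
fs = 64.0
sig = [math.sin(2 * math.pi * 10.0 * i / fs) for i in range(256)]
rel = relative_band_power(sig, fs)
print(round(rel["alpha"]))
```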
29 CFR 1910.243 - Guarding of portable powered tools.
Code of Federal Regulations, 2010 CFR
2010-07-01
... circular saws. (i) All portable, power-driven circular saws having a blade diameter greater than 2 in.... (2) Switches and controls. (i) All hand-held powered circular saws having a blade diameter greater... diameter, belt sanders, reciprocating saws, saber, scroll, and jig saws with blade shanks greater than a...
Approach range and velocity determination using laser sensors and retroreflector targets
NASA Technical Reports Server (NTRS)
Donovan, William J.
1991-01-01
A laser docking sensor study is currently in the third year of development. The design concept is considered to be validated. The concept is based on using standard radar techniques to provide range, velocity, and bearing information. Multiple targets are utilized to provide relative attitude data. The design requirements were to utilize existing space-qualifiable technology and require low system power, weight, and size, yet operate from 0.3 to 150 meters with a range accuracy better than 3 millimeters and a range rate accuracy better than 3 mm per second. The field of regard for the system is +/- 20 deg. The transmitter and receiver design features a diode laser, microlens beam steering, and power control as a function of range. The target design consists of five target sets, each having seven 3-inch retroreflectors, arranged around the docking port. The target map is stored in the sensor memory. Phase detection is used for ranging, with the frequency range-optimized. Coarse bearing measurement is provided by the scanning system (one set of binary optics) angle. Fine bearing measurement is provided by a quad detector. A MIL-STD-1750 A/B computer is used for processing. Initial test results indicate a probability of detection greater than 99 percent and a probability of false alarm less than 0.0001.
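Phase-detection ranging, as mentioned above, recovers range from the phase shift of an amplitude-modulated beam over the round trip; the modulation frequency sets the unambiguous interval, which is presumably what "frequency range-optimized" refers to. A generic sketch (the sensor's actual modulation scheme is not specified here):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_phase(phase_rad, f_mod_hz):
    """Range from the round-trip phase shift of a modulated beam.

    R = c * phase / (4 * pi * f_mod), unambiguous out to
    c / (2 * f_mod). Covering a 0.3-150 m span in one tone would
    need f_mod of about 1 MHz or less; practical systems often
    combine a coarse tone with a fine one for accuracy.
    """
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

f_mod = 1.0e6  # 1 MHz -> ~150 m unambiguous range (illustrative)
# A phase shift of pi corresponds to half the ambiguity interval.
print(round(range_from_phase(math.pi, f_mod), 2))
```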
Budget-based power consumption for application execution on a plurality of compute nodes
Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E
2013-02-05
Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
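The claimed flow, running applications by priority at full power until a budget threshold trips and then applying conservation actions, can be caricatured in a few lines; the scheduling model, wattages, and callback API are invented for illustration:

```python
def run_with_budget(apps, threshold_watts, conserve):
    """Dispatch apps by descending priority; trip conservation once.

    `apps` is a list of (name, priority, watts); `conserve` is a
    callback applied once cumulative draw crosses the threshold.
    A toy model of the claimed method, not an implementation of it.
    """
    schedule, draw, conserving = [], 0.0, False
    for name, _prio, watts in sorted(apps, key=lambda a: -a[1]):
        draw += watts
        if not conserving and draw >= threshold_watts:
            conserving = True
            conserve()  # e.g. lower clock rates, gate idle components
        schedule.append((name, "reduced" if conserving else "full"))
    return schedule

apps = [("solver", 2, 400.0), ("viz", 1, 300.0), ("io", 3, 200.0)]
sched = run_with_budget(apps, 500.0, lambda: None)
print(sched)
```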
Budget-based power consumption for application execution on a plurality of compute nodes
Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D
2012-10-23
Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
Chikkagoudar, Satish; Wang, Kai; Li, Mingyao
2011-05-26
Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/.
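GENIE's partitioning strategy, fragments with non-overlapping SNP sets, then within-fragment pairs plus cross-fragment pairs, is easy to reproduce in outline. This sketch only enumerates the independent work units; the actual package dispatches them across GPU or CPU cores:

```python
from itertools import combinations

def partition_snps(snp_ids, fragment_size):
    """Split SNPs into fragments with non-overlapping SNP sets."""
    return [snp_ids[i:i + fragment_size]
            for i in range(0, len(snp_ids), fragment_size)]

def interaction_tasks(fragments):
    """Enumerate the units a GENIE-style analysis would run in
    parallel: within-fragment pairs plus between-fragment pairs.
    Each pair test is independent, which is what makes GPU or
    multi-core fan-out straightforward."""
    tasks = []
    for frag in fragments:                     # within a fragment
        tasks.extend(combinations(frag, 2))
    for fa, fb in combinations(fragments, 2):  # across fragments
        tasks.extend((a, b) for a in fa for b in fb)
    return tasks

snps = [f"rs{i}" for i in range(6)]
tasks = interaction_tasks(partition_snps(snps, 3))
print(len(tasks))
```

Together the two loops cover every unordered SNP pair exactly once, so the partition changes scheduling but not the set of tests performed.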
NASA Astrophysics Data System (ADS)
Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem
2017-11-01
Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
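The optimization loop described, a trained surrogate standing in for CE-QUAL-W2 inside a genetic algorithm with a dissolved-oxygen constraint, can be sketched with a synthetic surrogate. The functional forms, penalty weight, and GA settings below are all invented; only the loop structure mirrors the approach:

```python
import random

def surrogate(release):
    """Stand-in for the trained ANN emulator of CE-QUAL-W2: maps a
    release fraction to (power, dissolved oxygen). Purely synthetic
    shapes, chosen only to exercise the optimization loop."""
    power = 100.0 * release
    do = 8.0 - 3.5 * release  # more turbine flow, less downstream DO
    return power, do

def ga_optimize(do_limit, pop=30, gens=40, seed=1):
    """Toy genetic algorithm: penalize DO violations, keep the best."""
    rng = random.Random(seed)
    population = [rng.random() for _ in range(pop)]

    def fitness(x):
        power, do = surrogate(x)
        return power if do >= do_limit else power - 1000.0 * (do_limit - do)

    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]  # elitism: parents survive
        population = parents + [
            min(1.0, max(0.0, rng.choice(parents) + rng.gauss(0.0, 0.05)))
            for _ in range(pop - len(parents))
        ]
    return max(population, key=fitness)

# With a 5 mg/L DO floor, the synthetic optimum release is near
# (8 - 5) / 3.5, i.e. about 0.86.
best = ga_optimize(5.0)
print(round(best, 2))
```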
Gender- and age-related differences in heart rate dynamics: are women more complex than men?
NASA Technical Reports Server (NTRS)
Ryan, S. M.; Goldberger, A. L.; Pincus, S. M.; Mietus, J.; Lipsitz, L. A.
1994-01-01
OBJECTIVES. This study aimed to quantify the complex dynamics of beat-to-beat sinus rhythm heart rate fluctuations and to determine their differences as a function of gender and age. BACKGROUND. Recently, measures of heart rate variability and the nonlinear "complexity" of heart rate dynamics have been used as indicators of cardiovascular health. Because women have lower cardiovascular risk and greater longevity than men, we postulated that there are important gender-related differences in beat-to-beat heart rate dynamics. METHODS. We analyzed heart rate dynamics during 8-min segments of continuous electrocardiographic recording in healthy young (20 to 39 years old), middle-aged (40 to 64 years old) and elderly (65 to 90 years old) men (n = 40) and women (n = 27) while they performed spontaneous and metronomic (15 breaths/min) breathing. Relatively high (0.15 to 0.40 Hz) and low (0.01 to 0.15 Hz) frequency components of heart rate variability were computed using spectral analysis. The overall "complexity" of each heart rate time series was quantified by its approximate entropy, a measure of regularity derived from nonlinear dynamics ("chaos" theory). RESULTS. Mean heart rate did not differ between the age groups or genders. High frequency heart rate power and the high/low frequency power ratio decreased with age in both men and women (p < 0.05). The high/low frequency power ratio during spontaneous and metronomic breathing was greater in women than men (p < 0.05). Heart rate approximate entropy decreased with age and was higher in women than men (p < 0.05). CONCLUSIONS. High frequency heart rate spectral power (associated with parasympathetic activity) and the overall complexity of heart rate dynamics are higher in women than men. These complementary findings indicate the need to account for gender- as well as age-related differences in heart rate dynamics. Whether these gender differences are related to lower cardiovascular disease risk and greater longevity in women requires further study.
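Approximate entropy, the regularity statistic used in this study, can be computed directly from its standard definition. A minimal pure-Python sketch, with embedding dimension `m` and a tolerance `r` given here in absolute units (studies typically set `r` as a fraction of the series' standard deviation):

```python
import math

def approximate_entropy(series, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series: the difference
    Phi(m) - Phi(m+1), where Phi measures the log-frequency with which
    length-m templates repeat within tolerance r (Chebyshev distance)."""
    n = len(series)

    def phi(m):
        templates = [series[i:i + m] for i in range(n - m + 1)]
        counts = []
        for t in templates:
            # Count templates within distance r (self-match included).
            c = sum(
                1 for u in templates
                if max(abs(a - b) for a, b in zip(t, u)) <= r
            )
            counts.append(c / (n - m + 1))
        return sum(math.log(c) for c in counts) / (n - m + 1)

    return phi(m) - phi(m + 1)
```

A perfectly regular series yields an approximate entropy near zero, while an irregular one (e.g. a chaotic map) yields a clearly larger value, which is the sense in which the study calls women's heart rate dynamics "more complex."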
Computation Directorate 2008 Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, D L
2009-03-25
Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.
What Is A Picture Archiving And Communication System (PACS)?
NASA Astrophysics Data System (ADS)
Marceau, Carla
1982-01-01
A PACS is a digital system for acquiring, storing, moving and displaying picture or image information. It is an alternative to film jackets that has been made possible by recent breakthroughs in computer technology: telecommunications, local area nets and optical disks. The fundamental concept of the digital representation of image information is introduced. It is shown that freeing images from a material representation on film or paper leads to a dramatic increase in flexibility in our use of the images. The ultimate goal of a medical PACS system is a radiology department without film jackets. The inherent nature of digital images and the power of the computer allow instant free "copies" of images to be made and thrown away. These copies can be transmitted to distant sites in seconds, without the "original" ever leaving the archives of the radiology department. The result is a radiology department with much freer access to patient images and greater protection against lost or misplaced image information. Finally, images in digital form can be treated as data for the computer in image processing, which includes enhancement, reconstruction and even computer-aided analysis.
Description of a MIL-STD-1553B Data Bus Ada Driver for the LeRC EPS Testbed
NASA Technical Reports Server (NTRS)
Mackin, Michael A.
1995-01-01
This document describes the software designed to provide communication between control computers in the NASA Lewis Research Center Electrical Power System Testbed using MIL-STD-1553B. The software drivers are coded in the Ada programming language and were developed on an MS-DOS-based computer workstation. The Electrical Power System (EPS) Testbed is a reduced-scale prototype space station electrical power system. The power system manages and distributes electrical power from the sources (batteries or photovoltaic arrays) to the end-user loads. The primary electrical system operates at 120 volts DC, and the secondary system operates at 28 volts DC. The devices which direct the flow of electrical power are controlled by a network of six control computers. Data and control messages are passed between the computers using the MIL-STD-1553B network. One of the computers, the Power Management Controller (PMC), controls the primary power distribution and another, the Load Management Controller (LMC), controls the secondary power distribution. Each of these computers communicates with two other computers which act as subsidiary controllers. These subsidiary controllers are, in turn, connected to the devices which directly control the flow of electrical power.
I am my (high-power) role: power and role identification.
Joshi, Priyanka D; Fast, Nathanael J
2013-07-01
Research indicates that power liberates the self, but findings also show that the powerful are susceptible to situational influences. The present article examines whether enacting roles that afford power leads people to identify with the roles or, instead, liberates them from role expectations altogether. The results of three experiments support the hypothesis that power enhances role identification. Experiment 1 showed that enacting a particular role resulted in greater implicit and explicit role identification when the role contained power. In Experiment 2, infusing a role with power resulted in greater role identification and role-congruent behavior. Experiment 3 demonstrated that power resulted in greater role-congruent self-construal, such that having power in a close relationship caused participants to define themselves relationally, whereas having power in a group situation caused participants to embrace a collective self-construal. Implications for research on power, roles, and the self are discussed.
NASA Technical Reports Server (NTRS)
Koch, L. Danielle
2012-01-01
Fan inflow distortion tone noise has been studied computationally and experimentally. Data from two experiments in the NASA Glenn Advanced Noise Control Fan rig have been used to validate acoustic predictions. The inflow to the fan was distorted by cylindrical rods inserted radially into the inlet duct one rotor chord length upstream of the fan. The rods were arranged in both symmetric and asymmetric circumferential patterns. In-duct and farfield sound pressure level measurements were recorded. It was discovered that for positive circumferential modes, measured circumferential mode sound power levels in the exhaust duct were greater than those in the inlet duct, and for negative circumferential modes, measured total circumferential mode sound power levels in the exhaust were less than those in the inlet. Predicted trends in overall sound power level proved useful in identifying circumferentially asymmetric distortion patterns that reduce overall inlet distortion tone noise, as compared to symmetric arrangements of rods. Detailed comparisons between the measured and predicted radial mode sound power in the inlet and exhaust ducts indicate limitations of the theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-08-21
Recent advancements in technology scaling have shown a trend towards greater integration, with large-scale chips containing thousands of processors connected to memories and other I/O devices using non-trivial network topologies. Software simulation proves insufficient to study the tradeoffs in such complex systems due to slow execution time, whereas hardware RTL development is too time-consuming. We present OpenSoC Fabric, an on-chip network generation infrastructure which aims to provide a parameterizable and powerful on-chip network generator for evaluating future high performance computing architectures based on SoC technology. OpenSoC Fabric leverages a new hardware DSL, Chisel, which contains powerful abstractions provided by its base language, Scala, and generates both software (C++) and hardware (Verilog) models from a single code base. The OpenSoC Fabric infrastructure is modeled after existing state-of-the-art simulators, offers a large and powerful collection of configuration options, and follows object-oriented design and functional programming to make functionality extension as easy as possible.
Current techniques in acid-chloride corrosion control and monitoring at The Geysers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirtz, Paul; Buck, Cliff; Kunzman, Russell
1991-01-01
Acid chloride corrosion of geothermal well casings, production piping and power plant equipment has resulted in costly corrosion damage, frequent curtailments of power plants and the permanent shut-in of wells in certain areas of The Geysers. Techniques have been developed to mitigate these corrosion problems, allowing continued production of steam from high chloride wells with minimal impact on production and power generation facilities. The optimization of water and caustic steam scrubbing, steam/liquid separation and process fluid chemistry has led to effective and reliable corrosion mitigation systems currently in routine use at The Geysers. When properly operated, these systems can yield steam purities equal to or greater than those encountered in areas of The Geysers where chloride corrosion is not a problem. Developments in corrosion monitoring techniques, steam sampling and analytical methodologies for trace impurities, and computer modeling of the fluid chemistry have been instrumental in the success of this technology.
X-wing fly-by-wire vehicle management system
NASA Technical Reports Server (NTRS)
Fischer, Jr., William C. (Inventor)
1990-01-01
A complete, computer based, vehicle management system (VMS) for X-Wing aircraft using digital fly-by-wire technology, controlling many subsystems and providing functions beyond the classical aircraft flight control system. The vehicle management system receives input signals from a multiplicity of sensors and provides commands to a large number of actuators controlling many subsystems. The VMS includes: segregating flight critical and mission critical factors and providing a greater level of back-up or redundancy for the former; centralizing the computation of functions utilized by several subsystems (e.g. air data, rotor speed, etc.); and integrating the control of the flight control functions, the compressor control, the rotor conversion control, vibration alleviation by higher harmonic control, engine power anticipation and self-test, all in the same flight control computer (FCC) hardware units. The VMS uses equivalent redundancy techniques to attain quadruple equivalency levels; includes alternate modes of operation and recovery means to back up any functions which fail; and uses back-up control software for software redundancy.
Final Report. Institute for Ultrascale Visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Kwan-Liu; Galli, Giulia; Gygi, Francois
The SciDAC Institute for Ultrascale Visualization brought together leading experts from visualization, high-performance computing, and science application areas to make advanced visualization solutions for SciDAC scientists and the broader community. Over the five-year project, the Institute introduced many new enabling visualization techniques, which have significantly enhanced scientists’ ability to validate their simulations, interpret their data, and communicate with others about their work and findings. This Institute project involved a large number of junior and student researchers, who received the opportunities to work on some of the most challenging science applications and gain access to the most powerful high-performance computing facilities in the world. They were readily trained and prepared for facing the greater challenges presented by extreme-scale computing. The Institute’s outreach efforts, through publications, workshops and tutorials, successfully disseminated the new knowledge and technologies to the SciDAC and the broader scientific communities. The scientific findings and experience of the Institute team helped plan the SciDAC3 program.
"Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2009-01-01
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences, experimental designs have naturally nested structures and multilevel models are needed to compute the…
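The kind of computation such power tables encode can be sketched with a normal approximation. The design-effect correction below (`icc`, `cluster_size`) is an illustrative simplification of a full multilevel power analysis, not the paper's method: nesting inflates the variance of group means by 1 + (m - 1)ρ, shrinking the effective sample size.

```python
import math
from statistics import NormalDist

def two_group_power(d, n_per_group, alpha=0.05, icc=0.0, cluster_size=1):
    """Approximate power for a two-group mean comparison (z approximation).
    d is the standardized effect size (Cohen's d). For nested samples, a
    design effect 1 + (cluster_size - 1) * icc deflates the effective n."""
    deff = 1.0 + (cluster_size - 1) * icc
    n_eff = n_per_group / deff
    ncp = d * math.sqrt(n_eff / 2.0)           # noncentrality of the z test
    z_crit = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return 1.0 - NormalDist().cdf(z_crit - ncp)
```

For example, a medium effect (d = 0.5) with 64 subjects per group gives power near the familiar 0.80 from Cohen's tables, and the same n spread across clusters with nonzero intraclass correlation gives noticeably less.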
Sharma, Shrushrita; Zhang, Yunyan
2017-01-01
Loss of tissue coherency in brain white matter is found in many neurological diseases such as multiple sclerosis (MS). While several approaches have been proposed to evaluate white matter coherency, including fractional anisotropy and fiber tracking in diffusion-weighted imaging, few are available for standard magnetic resonance imaging (MRI). Here we present an image post-processing method for this purpose based on the Fourier transform (FT) power spectrum. T2-weighted images were collected from 19 patients (10 relapsing-remitting and 9 secondary progressive MS) and 19 age- and gender-matched controls. Image processing steps included: computation, normalization, and thresholding of the FT power spectrum; determination of the tissue alignment profile and dominant alignment direction; and calculation of alignment complexity using a new measure named angular entropy. To test the validity of this method, we used a highly organized brain white matter structure, the corpus callosum. Six regions of interest were examined from the left, central and right aspects of both the genu and splenium. We found that the dominant orientation of each ROI derived from our method was significantly correlated with the predicted directions based on anatomy. Angular entropy was greater in patients than controls, with a trend toward greater values in secondary progressive MS patients. These findings suggest that it is possible to detect tissue alignment and anisotropy using traditional MRI, which is routinely acquired in clinical practice. Analysis of the FT power spectrum may become a new approach for advancing the evaluation and management of patients with MS and similar disorders. Further confirmation is warranted.
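The pipeline's core steps (FT power spectrum, thresholding, orientation histogram, Shannon entropy) can be sketched as follows. The bin count and threshold here are illustrative choices, not the study's parameters: a coherent (striped) region concentrates spectral power at one orientation and yields low angular entropy, while incoherent tissue spreads power across orientations.

```python
import numpy as np

def angular_entropy(image, n_bins=36, power_thresh=0.01):
    """Shannon entropy of the orientation profile of the 2-D FFT power
    spectrum. Low values indicate a dominant tissue alignment direction."""
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(f) ** 2
    h, w = power.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Orientation of each frequency component, folded onto [0, pi).
    ang = np.arctan2(yy - h // 2, xx - w // 2) % np.pi
    # Keep only components above a fraction of the peak power (thresholding).
    mask = power > power_thresh * power.max()
    hist, _ = np.histogram(ang[mask], bins=n_bins, range=(0.0, np.pi),
                           weights=power[mask])
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```

A quick sanity check: a synthetic striped image (one dominant orientation) should score far lower angular entropy than random noise.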
Proposal for grid computing for nuclear applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.
2014-02-12
The use of computer clusters for the computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.
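The divide-and-gather pattern behind a grid-distributed Monte Carlo job can be sketched as below, using the textbook pi estimate as the workload. Threads stand in for grid nodes purely for illustration; a real grid would dispatch each independently seeded task to a separate machine and sum the results.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def count_hits(args):
    """One node's share of the work: count random points inside the
    unit quarter-circle, using a private, seeded RNG for reproducibility."""
    seed, n = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def parallel_pi(n_workers=4, n_per_worker=100_000):
    """Split the sample budget across workers, then combine the counts."""
    tasks = [(seed, n_per_worker) for seed in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        hits = sum(pool.map(count_hits, tasks))
    return 4.0 * hits / (n_workers * n_per_worker)
```

Because each task only returns a count, the combine step is a cheap sum, which is exactly why embarrassingly parallel Monte Carlo maps so well onto grids of heterogeneous nodes.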
Shultzaberger, Ryan K.; Paddock, Mark L.; Katsuki, Takeo; Greenspan, Ralph J.; Golden, Susan S.
2016-01-01
The temporal measurement of a bioluminescent reporter has proven to be one of the most powerful tools for characterizing circadian rhythms in the cyanobacterium Synechococcus elongatus. Primarily, two approaches have been used to automate this process: (1) detection of cell culture bioluminescence in 96-well plates by a photomultiplier tube-based plate-cycling luminometer (TopCount Microplate Scintillation and Luminescence Counter, Perkin Elmer) and (2) detection of individual colony bioluminescence by iteratively rotating a Petri dish under a cooled CCD camera using a computer-controlled turntable. Each approach has distinct advantages. The TopCount provides a more quantitative measurement of bioluminescence, enabling the direct comparison of clock output levels among strains. The computer-controlled turntable approach has a shorter set-up time and greater throughput, making it a more powerful phenotypic screening tool. While the latter approach is extremely useful, only a few labs have been able to build such an apparatus because of technical hurdles involved in coordinating and controlling both the camera and the turntable, and in processing the resulting images. This protocol provides instructions on how to construct, use, and process data from a computer-controlled turntable to measure the temporal changes in bioluminescence of individual cyanobacterial colonies. Furthermore, we describe how to prepare samples for use with the TopCount to minimize experimental noise, and generate meaningful quantitative measurements of clock output levels for advanced analysis. PMID:25662451
Flood Forecasting in Wales: Challenges and Solutions
NASA Astrophysics Data System (ADS)
How, Andrew; Williams, Christopher
2015-04-01
With steep, fast-responding river catchments, exposed coastal reaches with large tidal ranges, and large population densities in some of the most at-risk areas, flood forecasting in Wales presents many varied challenges. Utilising advances in computing power and learning from best practice within the United Kingdom and abroad has brought significant improvements in recent years; however, many challenges still remain. Developments in computing and increased processing power come with a significant price tag; greater numbers of data sources and ensemble feeds bring a better understanding of uncertainty, but the wealth of data needs careful management to ensure a clear message of risk is disseminated; new modelling techniques utilise better and faster computation, but lack the history of record and experience gained from the continued use of more established forecasting models. As a flood forecasting team we work to develop coastal and fluvial forecasting models, set them up for operational use and manage the duty role that runs the models in real time. An overview of our current operational flood forecasting system will be presented, along with a discussion of some of the solutions we have in place to address the challenges we face. These include: • real-time updating of fluvial models • rainfall forecasting verification • ensemble forecast data • longer range forecast data • contingency models • offshore to nearshore wave transformation • calculation of wave overtopping
A comparative study of optimum and suboptimum direct-detection laser ranging receivers
NASA Technical Reports Server (NTRS)
Abshire, J. B.
1978-01-01
A summary of previously proposed receiver strategies for direct-detection laser ranging receivers is presented. Computer simulations are used to compare performance of candidate implementation strategies in the 1- to 100-photoelectron region. Under the condition of no background radiation, the maximum-likelihood and minimum mean-square error estimators were found to give the same performance for both bell-shaped and rectangular optical-pulse shapes. For signal energies greater than 100 photoelectrons, the root-mean-square range error is shown to decrease as Q to the -1/2 power for bell-shaped pulses and Q to the -1 power for rectangular pulses, where Q represents the average pulse energy. Of several receiver implementations presented, the matched-filter peak detector was found to be preferable. A similar configuration, using a constant-fraction discriminator, exhibited a signal-level dependent time bias.
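The reported scaling laws translate directly into a small helper. The constant `k` is an arbitrary scale factor (the abstract gives only the exponents), and the formula is the asymptotic behavior for average pulse energies above roughly 100 photoelectrons:

```python
def rms_range_error(q, pulse="bell", k=1.0):
    """Asymptotic RMS range error versus average pulse energy q
    (photoelectrons): q**-0.5 for bell-shaped optical pulses,
    q**-1 for rectangular pulses, per the reported scaling laws."""
    exponent = -0.5 if pulse == "bell" else -1.0
    return k * q ** exponent
```

The practical reading: increasing pulse energy 100-fold cuts the range error by a factor of 10 for bell-shaped pulses but by a factor of 100 for rectangular pulses, whose sharp edges carry more timing information.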
Geometric morphometrics and virtual anthropology: advances in human evolutionary studies.
Rein, Thomas R; Harvati, Katerina
2014-01-01
Geometric morphometric methods have been increasingly used in paleoanthropology in the last two decades, lending greater power to the analysis and interpretation of the human fossil record. More recently the advent of the wide use of computed tomography and surface scanning, implemented in combination with geometric morphometrics (GM), characterizes a new approach, termed Virtual Anthropology (VA). These methodological advances have led to a number of developments in human evolutionary studies. We present some recent examples of GM and VA related research in human evolution with an emphasis on work conducted at the University of Tübingen and other German research institutions.
Strategic Adaptation of SCA for STRS
NASA Technical Reports Server (NTRS)
Quinn, Todd; Kacpura, Thomas
2007-01-01
The Space Telecommunication Radio System (STRS) architecture is being developed to provide a standard framework for future NASA space radios with greater degrees of interoperability and flexibility to meet new mission requirements. The space environment imposes unique operational requirements with restrictive size, weight, and power constraints that are significantly smaller than terrestrial-based military communication systems. With the harsh radiation environment of space, the computing and processing resources are typically one or two generations behind current terrestrial technologies. Despite these differences, there are elements of the SCA that can be adapted to facilitate the design and implementation of the STRS architecture.
MEMS-based power generation techniques for implantable biosensing applications.
Lueke, Jonathan; Moussa, Walied A
2011-01-01
Implantable biosensing is attractive for both medical monitoring and diagnostic applications. It is possible to monitor phenomena such as physical loads on joints or implants, vital signs, or osseointegration in vivo and in real time. Microelectromechanical systems (MEMS)-based generation techniques can allow for the autonomous operation of implantable biosensors by generating electrical power to replace or supplement existing battery-based power systems. By supplementing existing battery-based power systems for implantable biosensors, the operational lifetime of the sensor is increased. In addition, the potential for a greater amount of available power allows additional components to be added to the biosensing module, such as computational and wireless components, improving the functionality and performance of the biosensor. Photovoltaic, thermovoltaic, micro fuel cell, electrostatic, electromagnetic, and piezoelectric based generation schemes are evaluated in this paper for applicability to implantable biosensing. MEMS-based generation techniques that harvest ambient energy, such as vibration, are much better suited for implantable biosensing applications than fuel-based approaches, producing up to milliwatts of electrical power. High power density MEMS-based approaches, such as piezoelectric and electromagnetic schemes, allow for supplemental and replacement power schemes for biosensing applications to improve device capabilities and performance. In addition, this may allow for the biosensor to be further miniaturized, reducing the need for relatively large batteries with respect to device size. This would make the implanted biosensor less invasive, increasing the quality of care received by the patient.
Structural Weight Estimation for Launch Vehicles
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Su, Philip; Eldred, Lloyd
2002-01-01
This paper describes some of the work in progress to develop automated structural weight estimation procedures within the Vehicle Analysis Branch (VAB) of the NASA Langley Research Center. One task of the VAB is to perform system studies at the conceptual and early preliminary design stages on launch vehicles and in-space transportation systems. Some examples of these studies for Earth to Orbit (ETO) systems are the Future Space Transportation System [1], Orbit On Demand Vehicle [2], Venture Star [3], and the Personnel Rescue Vehicle[4]. Structural weight calculation for launch vehicle studies can exist on several levels of fidelity. Typically historically based weight equations are used in a vehicle sizing program. Many of the studies in the vehicle analysis branch have been enhanced in terms of structural weight fraction prediction by utilizing some level of off-line structural analysis to incorporate material property, load intensity, and configuration effects which may not be captured by the historical weight equations. Modification of Mass Estimating Relationships (MER's) to assess design and technology impacts on vehicle performance are necessary to prioritize design and technology development decisions. Modern CAD/CAE software, ever increasing computational power and platform independent computer programming languages such as JAVA provide new means to create greater depth of analysis tools which can be included into the conceptual design phase of launch vehicle development. Commercial framework computing environments provide easy to program techniques which coordinate and implement the flow of data in a distributed heterogeneous computing environment. It is the intent of this paper to present a process in development at NASA LaRC for enhanced structural weight estimation using this state of the art computational power.
Welch, Martha G; Stark, Raymond I; Grieve, Philip G; Ludwig, Robert J; Isler, Joseph R; Barone, Joseph L; Myers, Michael M
2017-12-01
Premature delivery and maternal separation during hospitalisation increase infant neurodevelopmental risk. Previously, a randomised controlled trial of Family Nurture Intervention (FNI) in the neonatal intensive care unit demonstrated improvement across multiple mother and infant domains including increased electroencephalographic (EEG) power in the frontal polar region at term age. New aims were to quantify developmental changes in EEG power in all brain regions and frequencies and correlate developmental changes in EEG power among regions. EEG (128 electrodes) was obtained at 34-44 weeks postmenstrual age from preterm infants born 26-34 weeks. Forty-four infants were treated with Standard Care and 53 with FNI. EEG power was computed in 10 frequency bands (1-48 Hz) in 10 brain regions and in active and quiet sleep. Percent change/week in EEG power was increased in FNI in 132/200 tests (p < 0.05), 117 tests passed a 5% False Discovery Rate threshold. In addition, FNI demonstrated greater regional independence in those developmental rates of change. This study strengthens the conclusion that FNI promotes cerebral cortical development of preterm infants. The findings indicate that developmental changes in EEG may provide biomarkers for risk in preterm infants as well as proximal markers of effects of FNI. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
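Band power of the kind reported here is conventionally computed by integrating a spectral estimate over each frequency band. The sketch below is a minimal periodogram version, not the study's exact 128-electrode pipeline, which used 10 bands from 1 to 48 Hz per brain region:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Total power of `signal` within the band [f_lo, f_hi) Hz,
    from a raw periodogram of a real-valued recording sampled at fs Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    sel = (freqs >= f_lo) & (freqs < f_hi)
    return float(psd[sel].sum())
```

Repeating this per band, region, and recording age, and fitting the percent change per week, gives the kind of developmental-rate measure the study compares between the intervention and standard-care groups.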
Computer Power: Part 1: Distribution of Power (and Communications).
ERIC Educational Resources Information Center
Price, Bennett J.
1988-01-01
Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
Is College Pricing Power Pro-Cyclical?
ERIC Educational Resources Information Center
Altringer, Levi; Summers, Jeffrey
2015-01-01
We define pricing power as a college's ability to increase its net tuition revenue by raising its sticker-price for tuition. The greater is the positive effect of sticker-price increases on net tuition revenue, the greater is the pricing power. We gauge variation in the pricing power of private, non-profit baccalaureate colleges by estimating this…
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems that use enough cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling those data to a central computer and processing them in real time is difficult using low-cost, commercially available components. A newly developed system places cameras along a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security cameras. Processing capabilities are built directly onto the camera backplane, which helps keep costs low. The low power requirements of each camera allow a single imaging system to comprise over 100 cameras. Each camera has built-in processing to detect events and cooperatively share this information with neighboring cameras. The location of an event is reported to the host computer in Cartesian coordinates computed from data correlated across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly generated by the cameras. Using many small, low-cost cameras with overlapping fields of view offers greater flexibility than conventional systems without compromising performance, and significantly increases viewing coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
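The abstract reports event locations in Cartesian coordinates computed from data correlated across cameras with overlapping fields of view. As a hedged sketch of one way such a computation could work (the system's actual algorithm is not described here), two cameras at known positions can triangulate an event by intersecting their bearing rays:

```python
import math

def locate_event(cam1, bearing1, cam2, bearing2):
    """Estimate an event's (x, y) location by intersecting two bearing
    rays (angles in radians, world frame) from known camera positions.
    Solves cam1 + t*d1 = cam2 + u*d2 for t via Cramer's rule."""
    x1, y1 = cam1
    x2, y2 = cam2
    d1 = (math.cos(bearing1), math.sin(bearing1))  # ray direction, camera 1
    d2 = (math.cos(bearing2), math.sin(bearing2))  # ray direction, camera 2
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(denom) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    t = ((x2 - x1) * (-d2[1]) - (y2 - y1) * (-d2[0])) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])
```

In a real deployment the bearings would come from per-camera event detection, and more than two cameras would be combined (e.g. by least squares) to reject noise; this two-ray version only shows the geometric core.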
Performance and Reliability of Bonded Interfaces for High-Temperature Packaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paret, Paul P
2017-08-02
Sintered silver has proven to be a promising candidate for use as a die-attach and substrate-attach material in automotive power electronics components. It holds promise of greater reliability than lead-based and lead-free solders, especially at higher temperatures (>200 degrees C). Accurate predictive lifetime models of sintered silver need to be developed and its failure mechanisms thoroughly characterized before it can be deployed as a die-attach or substrate-attach material in wide-bandgap device-based packages. Mechanical characterization tests that yield stress-strain curves and accelerated tests that produce cycles-to-failure results will be conducted. We also present a finite element method (FEM) modeling methodology that can offer greater accuracy in predicting the failure of sintered silver under accelerated thermal cycling. A fracture mechanics-based approach is adopted in the FEM model, and J-integral/thermal cycle values are computed.
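The report couples J-integral-per-thermal-cycle values with cycles-to-failure data. As a loose illustration only, assuming a Paris-type crack-growth law with a constant cyclic driving force (my assumption; the report does not state its lifetime model or constants), cycles to failure could be estimated as:

```python
def cycles_to_failure(delta_j, c, m, a0, af):
    """Cycles to grow a crack from a0 to af under a Paris-type law
    da/dN = C * (delta_J)**m, with delta_J held constant per cycle.
    All of delta_j, c, m, a0, af are placeholder values here."""
    rate = c * delta_j ** m  # crack extension per cycle
    return (af - a0) / rate
```

A calibrated lifetime model would instead integrate a cycle-dependent delta_J taken from the FEM fracture analysis.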
Overall, Nickola C.; Hammond, Matthew D.; McNulty, James K.; Finkel, Eli J.
2016-01-01
When does power in intimate relationships shape important interpersonal behaviors, such as psychological aggression? Five studies tested whether possessing low relationship power was associated with aggressive responses, but (1) only within power-relevant relationship interactions when situational power was low, and (2) only by men because masculinity (but not femininity) involves the possession and demonstration of power. In Studies 1 and 2, men lower in relationship power exhibited greater aggressive communication during couples’ observed conflict discussions, but only when they experienced low situational power because they were unable to influence their partner. In Study 3, men lower in relationship power reported greater daily aggressive responses toward their partner, but only on days when they experienced low situational power because they were either (a) unable to influence their partner or (b) dependent on their partner for support. In Study 4, men who possessed lower relationship power exhibited greater aggressive responses during couples’ support-relevant discussions, but only when they had low situational power because they needed high levels of support. Study 5 provided evidence for the theoretical mechanism underlying men’s aggressive responses to low relationship power. Men who possessed lower relationship power felt less manly on days they faced low situational power because their partner was unwilling to change to resolve relationship problems, which in turn predicted greater aggressive responses to their partner. These results demonstrate that fully understanding when and why power is associated with interpersonal behavior requires differentiating between relationship and situational power. PMID:27442766
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
..., ``Configuration Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., Reviews, and Audits for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This...
Quantum computing: a prime modality in neurosurgery's future.
Lee, Brian; Liu, Charles Y; Apuzzo, Michael L J
2012-11-01
With each significant development in the field of neurosurgery, our dependence on computers, small and large, has continuously increased. From something as mundane as bipolar cautery to sophisticated intraoperative navigation with real-time magnetic resonance imaging-assisted surgical guidance, both technologies, however simple or complex, require computational processing power to function. The next frontier for neurosurgery involves developing a greater understanding of the brain and furthering our capabilities as surgeons to directly affect brain circuitry and function. This has come in the form of implantable devices that can electronically and nondestructively influence the cortex and nuclei with the purpose of restoring neuronal function and improving quality of life. We are now transitioning from devices that are turned on and left alone, such as vagus nerve stimulators and deep brain stimulators, to "smart" devices that can listen and react to the body as the situation may dictate. The development of quantum computers and their potential to be thousands, if not millions, of times faster than current "classical" computers, will significantly affect the neurosciences, especially the field of neurorehabilitation and neuromodulation. Quantum computers may advance our understanding of the neural code and, in turn, better develop and program implantable neural devices. When quantum computers reach the point where we can actually implant such devices in patients, the possibilities of what can be done to interface and restore neural function will be limitless. Copyright © 2012 Elsevier Inc. All rights reserved.
MultiPhyl: a high-throughput phylogenomics webserver using distributed computing
Keane, Thomas M.; Naughton, Thomas J.; McInerney, James O.
2007-01-01
With the number of fully sequenced genomes increasing steadily, there is greater interest in performing large-scale phylogenomic analyses from large numbers of individual gene families. Maximum likelihood (ML) has been shown repeatedly to be one of the most accurate methods for phylogenetic construction. Recently, there have been a number of algorithmic improvements in maximum-likelihood-based tree search methods. However, it can still take a long time to analyse the evolutionary history of many gene families using a single computer. Distributed computing refers to a method of combining the computing power of multiple computers in order to perform some larger overall calculation. In this article, we present the first high-throughput implementation of a distributed phylogenetics platform, MultiPhyl, capable of using the idle computational resources of many heterogeneous non-dedicated machines to form a phylogenetics supercomputer. MultiPhyl allows a user to upload hundreds or thousands of amino acid or nucleotide alignments simultaneously and perform computationally intensive tasks such as model selection, tree searching and bootstrapping of each of the alignments using many desktop machines. The program implements a set of 88 amino acid models and 56 nucleotide maximum likelihood models and a variety of statistical methods for choosing between alternative models. A MultiPhyl webserver is available for public use at: http://www.cs.nuim.ie/distributed/multiphyl.php. PMID:17553837
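MultiPhyl performs model selection across its 88 amino acid and 56 nucleotide models. A generic sketch of likelihood-based model choice using the Akaike information criterion (one common criterion; the abstract does not state which statistical methods MultiPhyl applies) might look like:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion, 2k - 2*lnL; lower is better."""
    return 2 * n_params - 2 * log_likelihood

def best_model(candidates):
    """Pick the model with the lowest AIC.

    `candidates` maps a model name to a (log-likelihood, free-parameter
    count) pair; the names and values below are illustrative only."""
    return min(candidates, key=lambda name: aic(*candidates[name]))
```

For example, best_model({"JC69": (-1250.0, 0), "GTR": (-1210.5, 8)}) selects "GTR" here, because the likelihood gain outweighs the eight-parameter penalty.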
Code of Federal Regulations, 2011 CFR
2011-01-01
... RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C WASTE General Provisions § 72.1 Purpose. The... receive, transfer, and possess power reactor spent fuel, power reactor-related Greater than Class C (GTCC... reactor spent fuel, high-level radioactive waste, power reactor-related GTCC waste, and other radioactive...
Code of Federal Regulations, 2010 CFR
2010-01-01
... RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C WASTE General Provisions § 72.1 Purpose. The... receive, transfer, and possess power reactor spent fuel, power reactor-related Greater than Class C (GTCC... reactor spent fuel, high-level radioactive waste, power reactor-related GTCC waste, and other radioactive...
A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty
Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; ...
2016-11-21
Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. The proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.
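A minimal sketch of the quantile-based scenario idea, with a hypothetical one-variable supply decision and cost function (the paper's actual formulation is far richer): candidate plans are scored by a high quantile of their cost over sampled demand scenarios, so they are compared on near-worst-case rather than average performance.

```python
def quantile(xs, q):
    """Linearly interpolated q-quantile of a sample (0 <= q <= 1)."""
    s = sorted(xs)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def rank_by_quantile(decisions, cost_fn, scenarios, q=0.9):
    """Rank decisions by the q-quantile of cost_fn(decision, scenario)
    over the scenario set; the best (lowest-quantile-cost) plan is first."""
    return sorted(decisions,
                  key=lambda d: quantile([cost_fn(d, s) for s in scenarios], q))
```

Here `cost_fn` and the scenario set are stand-ins; a biomass model would embed procurement, transport, and shortage costs under joint demand and supply uncertainty.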
Evaluation of dynamical models: dissipative synchronization and other techniques.
Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A B
2006-12-01
Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams--which in turn is much greater than, say, that of correlation dimension--but at a much lower computational cost.
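Since the Hénon map is one of the benchmarks, a toy version of the procedure can be sketched: drive a candidate model toward the observed series with a simple diffusive coupling (a stand-in of my own for the paper's dissipative synchronization scheme) and score it by the mean synchronization error, which is small only when the model reproduces the data's dynamics.

```python
def henon(x, y, a=1.4, b=0.3):
    """One step of the Hénon map."""
    return 1 - a * x * x + y, b * x

def sync_error(data, model_a, coupling=0.7, b=0.3):
    """Drive a Hénon-type model with parameter `model_a` toward the
    observed x-series via diffusive coupling; return mean |error|."""
    x, y = data[0], 0.0
    err = []
    for target in data[1:]:
        x, y = henon(x, y, a=model_a, b=b)
        err.append(abs(target - x))
        x += coupling * (target - x)  # pull the model state toward the data
    return sum(err) / len(err)
```

A model with the data-generating parameter synchronizes tightly, while a mismatched one sustains a larger error, mimicking how the validation procedure discriminates between candidate models.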
Rose, D. V.; Madrid, E. A.; Welch, D. R.; ...
2015-03-04
Numerical simulations of a vacuum post-hole convolute driven by magnetically insulated vacuum transmission lines (MITLs) are used to study current losses due to charged particle emission from the MITL-convolute-system electrodes. This work builds on the results of a previous study [E.A. Madrid et al. Phys. Rev. ST Accel. Beams 16, 120401 (2013)] and adds realistic power pulses, Ohmic heating of anode surfaces, and a model for the formation and evolution of cathode plasmas. The simulations suggest that modestly larger anode-cathode gaps in the MITLs upstream of the convolute result in significantly less current loss. In addition, longer pulse durations lead to somewhat greater current loss due to cathode-plasma expansion. These results can be applied to the design of future MITL-convolute systems for high-current pulsed-power systems.
Stream-temperature characteristics in Georgia
Dyar, T.R.; Alhadeff, S. Jack
1997-01-01
Stream-temperature measurements for 198 periodic and 22 daily record stations were analyzed using a harmonic curve-fitting procedure. Statistics of data from 78 selected stations were used to compute a statewide stream-temperature harmonic equation, derived using latitude, drainage area, and altitude for natural streams having drainage areas greater than about 40 square miles. Based on the 1955-84 reference period, the equation may be used to compute long-term natural harmonic stream-temperature coefficients to within about 0.4 °C on average. Basin-by-basin summaries of observed long-term stream-temperature characteristics are included for selected stations and river reaches, particularly along Georgia's mainstem streams. Changes in the stream-temperature regimen caused by the effects of development, principally impoundments and thermal power plants, are shown by comparing harmonic curves and coefficients from the estimated natural values to the observed modified-condition values.
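A minimal sketch of harmonic curve fitting of the kind described: recover the mean, amplitude, and phase of an annual temperature cycle by least-squares projection onto sine and cosine terms (the report's statewide equation additionally uses latitude, drainage area, and altitude).

```python
import math

def harmonic_fit(days, temps, period=365.0):
    """Least-squares fit of T(t) = M + A*sin(w*t + phi), w = 2*pi/period.
    Returns (mean M, amplitude A, phase phi in radians)."""
    n = len(temps)
    w = 2 * math.pi / period
    M = sum(temps) / n
    # Fourier-style projections of the demeaned series onto sin and cos.
    s = 2 / n * sum((T - M) * math.sin(w * t) for t, T in zip(days, temps))
    c = 2 / n * sum((T - M) * math.cos(w * t) for t, T in zip(days, temps))
    A = math.hypot(s, c)        # since s = A*cos(phi), c = A*sin(phi)
    phi = math.atan2(c, s)
    return M, A, phi
```

With a full year of evenly spaced daily samples the projections are exact, so the fit recovers the generating coefficients directly.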
Distributive, Non-destructive Real-time System and Method for Snowpack Monitoring
NASA Technical Reports Server (NTRS)
Frolik, Jeff (Inventor); Skalka, Christian (Inventor)
2013-01-01
A ground-based system that provides quasi real-time measurement and collection of snow-water equivalent (SWE) data in remote settings is provided. The disclosed invention is significantly less expensive and easier to deploy than current methods and less susceptible to terrain and snow bridging effects. Embodiments of the invention include remote data recovery solutions. Compared to current infrastructure using existing SWE technology, the disclosed invention allows more SWE sites to be installed for similar cost and effort, in a greater variety of terrain; thus, enabling data collection at improved spatial resolutions. The invention integrates a novel computational architecture with new sensor technologies. The invention's computational architecture is based on wireless sensor networks, comprised of programmable, low-cost, low-powered nodes capable of sophisticated sensor control and remote data communication. The invention also includes measuring attenuation of electromagnetic radiation, an approach that is immune to snow bridging and significantly reduces sensor footprints.
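The SWE measurement is based on attenuation of electromagnetic radiation through the snowpack. Assuming simple exponential (Beer-Lambert) attenuation with a known calibration coefficient (an illustrative assumption; the patent's actual calibration is not given here), SWE can be recovered from the ratio of a no-snow baseline reading to the received signal:

```python
import math

def swe_from_attenuation(i0, i, mu_per_mm):
    """Estimate snow-water equivalent (mm) from received signal power `i`
    relative to a no-snow baseline `i0`, assuming exponential attenuation
    i = i0 * exp(-mu * SWE) with coefficient `mu_per_mm` per mm of SWE."""
    return math.log(i0 / i) / mu_per_mm
```

Because the ray samples the whole column between emitter and receiver, an estimate like this is insensitive to snow bridging, consistent with the advantage claimed in the abstract.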
A goodness-of-fit test for capture-recapture model M(t) under closure
Stanley, T.R.; Burnham, K.P.
1999-01-01
A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and uses chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.
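A sketch of the basic ingredients of such a test: expected capture-history counts under M(t) for given occasion-specific capture probabilities, compared with observed counts via a Pearson chi-square statistic. This is only the generic flavor (the paper's test is residual-based and partitioned into components, and in practice the never-captured history is unobservable and must be conditioned away):

```python
from itertools import product

def expected_history_counts(N, p):
    """Expected count of each capture history under model M(t):
    N animals, p[j] = capture probability on occasion j, occasions
    independent. Returns {history tuple: expected count}."""
    out = {}
    for hist in product((0, 1), repeat=len(p)):
        prob = 1.0
        for captured, pj in zip(hist, p):
            prob *= pj if captured else 1.0 - pj
        out[hist] = N * prob
    return out

def chi_square(observed, expected):
    """Pearson chi-square statistic over matched count lists."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

With N = 100 animals and p = (0.5, 0.5), every one of the four histories has expected count 25, so deviations from that split drive the statistic.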
Bounds on the power of proofs and advice in general physical theories.
Lee, Ciarán M; Hoban, Matty J
2016-06-01
Quantum theory presents us with the tools for computational and communication advantages over classical theory. One approach to uncovering the source of these advantages is to determine how computation and communication power vary as quantum theory is replaced by other operationally defined theories from a broad framework of such theories. Such investigations may reveal some of the key physical features required for powerful computation and communication. In this paper, we investigate how simple physical principles bound the power of two different computational paradigms which combine computation and communication in a non-trivial fashion: computation with advice and interactive proof systems. We show that the existence of non-trivial dynamics in a theory implies a bound on the power of computation with advice. Moreover, we provide an explicit example of a theory with no non-trivial dynamics in which the power of computation with advice is unbounded. Finally, we show that the power of simple interactive proof systems in theories where local measurements suffice for tomography is non-trivially bounded. This result provides a proof that [Formula: see text] is contained in [Formula: see text], which does not make use of any uniquely quantum structure, such as the fact that observables correspond to self-adjoint operators, and thus may be of independent interest.
2013-01-01
Background: Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods: Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results: The statistical significance of the RV increased as the magnitude of the denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions: The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9). PMID:23721463
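A hedged sketch of the bootstrap procedure described, with a plain one-way ANOVA F statistic and a percentile interval; the clinical grouping and the paired resampling details are simplified stand-ins for the study's design:

```python
import random

def f_stat(groups):
    """One-way ANOVA F statistic for a list of groups of scores."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def bootstrap_rv_ci(groups_cmp, groups_ref, reps=1000, alpha=0.05):
    """Percentile bootstrap CI for RV = F(comparator) / F(reference).
    Groups are resampled in parallel so each replicate keeps the same
    patients for both measures."""
    rvs = []
    for _ in range(reps):
        bc, br = [], []
        for gc, gr in zip(groups_cmp, groups_ref):
            idx = [random.randrange(len(gc)) for _ in gc]
            bc.append([gc[i] for i in idx])
            br.append([gr[i] for i in idx])
        rvs.append(f_stat(bc) / f_stat(br))
    rvs.sort()
    return rvs[int(alpha / 2 * reps)], rvs[int((1 - alpha / 2) * reps) - 1]
```

An RV is then declared significantly below 1 when the whole interval falls under 1, mirroring the paper's use of the 95% bootstrap confidence interval.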
NASA Astrophysics Data System (ADS)
Ou, Shiqi; Zhao, Yi; Aaron, Douglas S.; Regan, John M.; Mench, Matthew M.
2016-10-01
This work describes experiments and computational simulations to analyze single-chamber, air-cathode microbial fuel cell (MFC) performance and cathodic limitations in terms of current generation, power output, mass transport, biomass competition, and biofilm growth. Steady-state and transient cathode models were developed and experimentally validated. Two cathode gas mixtures were used to explore oxygen transport in the cathode: the MFCs exposed to a helium-oxygen mixture (heliox) produced higher current and power output than the group of MFCs exposed to air or a nitrogen-oxygen mixture (nitrox), indicating a dependence on gas-phase transport in the cathode. Multi-substance transport, biological reactions, and electrochemical reactions in a multi-layer and multi-biomass cathode biofilm were also simulated in a transient model. The transient model described biofilm growth over 15 days while providing insight into mass transport and cathodic dissolved species concentration profiles during biofilm growth. Simulation results predict that the dissolved oxygen content and diffusion in the cathode are key parameters affecting the power output of the air-cathode MFC system, with greater oxygen content in the cathode resulting in increased power output and fully-matured biomass.
Effect of plasma power on reduction of printable graphene oxide thin films on flexible substrates
NASA Astrophysics Data System (ADS)
Banerjee, Indrani; Mahapatra, Santosh K.; Pal, Chandana; Sharma, Ashwani K.; Ray, Asim K.
2018-05-01
Room temperature hydrogen plasma treatment on solution processed 300 nm graphene oxide (GO) films on flexible indium tin oxide (ITO) coated polyethylene terephthalate (PET) substrates has been performed by varying the plasma power between 20 W and 60 W at a constant exposure time of 30 min with a view to examining the effect of plasma power on reduction of GO. X-ray powder diffraction (XRD) and Raman spectroscopic studies show that high energy hydrogen species generated in the plasma assist fast exfoliation of the oxygenated functional groups present in the GO samples. A significant decrease in the optical band gap is observed, from 4.1 eV for untreated samples to 0.5 eV for 60 W plasma treated samples. The conductivity of the films treated with 60 W plasma power is estimated to be six orders of magnitude greater than that of untreated GO films, and this enhancement of conductivity on plasma reduction has been interpreted in terms of UV-visible absorption spectra and density-functional-based first-principles calculations. Plasma reduction of GO/ITO/PET structures can be used for efficiently tuning the electrical and optical properties of reduced graphene oxide (rGO) for flexible electronics applications.
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
MEMS-Based Power Generation Techniques for Implantable Biosensing Applications
Lueke, Jonathan; Moussa, Walied A.
2011-01-01
Implantable biosensing is attractive for both medical monitoring and diagnostic applications. It is possible to monitor phenomena such as physical loads on joints or implants, vital signs, or osseointegration in vivo and in real time. Microelectromechanical systems (MEMS)-based generation techniques can allow for the autonomous operation of implantable biosensors by generating electrical power to replace or supplement existing battery-based power systems. By supplementing existing battery-based power systems for implantable biosensors, the operational lifetime of the sensor is increased. In addition, the potential for a greater amount of available power allows additional components to be added to the biosensing module, such as computational and wireless components, improving functionality and performance of the biosensor. Photovoltaic, thermovoltaic, micro fuel cell, electrostatic, electromagnetic, and piezoelectric based generation schemes are evaluated in this paper for applicability for implantable biosensing. MEMS-based generation techniques that harvest ambient energy, such as vibration, are much better suited for implantable biosensing applications than fuel-based approaches, producing up to milliwatts of electrical power. High power density MEMS-based approaches, such as piezoelectric and electromagnetic schemes, allow for supplemental and replacement power schemes for biosensing applications to improve device capabilities and performance. In addition, this may allow for the biosensor to be further miniaturized, reducing the need for relatively large batteries with respect to device size. This would cause the implanted biosensor to be less invasive, increasing the quality of care received by the patient. PMID:22319362
Energy Efficiency Challenges of 5G Small Cell Networks.
Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang
2017-05-01
The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to which computation or transmission power is more important in the energy efficiency of 5G small cell networks. Thus, the main objective in this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 watts when massive MIMO (e.g., 128 antennas) is deployed to transmit high volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks.
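The Landauer principle referenced above sets a lower bound of k_B·T·ln 2 joules per irreversible bit operation. A small sketch of that bound (the operation rate below is an arbitrary illustration, not a figure from the paper) is useful for contrasting with the roughly 800 W computation power reported for a massive-MIMO base station:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_watts(bit_ops_per_s, temp_kelvin=300.0):
    """Minimum power (W) to perform `bit_ops_per_s` irreversible bit
    operations per second at the Landauer limit of k_B*T*ln(2) J/bit."""
    return bit_ops_per_s * K_B * temp_kelvin * math.log(2)
```

Even at 10^18 bit operations per second the bound is only a few milliwatts, many orders of magnitude below hundreds of watts, so practical BS computation power is dominated by implementation overheads rather than the thermodynamic limit.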
Dispersive FDTD analysis of induced electric field in human models due to electrostatic discharge.
Hirata, Akimasa; Nagai, Toshihiro; Koyama, Teruyoshi; Hattori, Junya; Chan, Kwok Hung; Kavet, Robert
2012-07-07
Contact currents flow from/into a charged human body when touching a grounded conductive object. An electrostatic discharge (ESD) or spark may occur just before contact or upon release. The current may stimulate muscles and peripheral nerves. In order to clarify the difference in the induced electric field between different sized human models, the in-situ electric fields were computed in anatomically based models of adults and a child for a contact current in a human body following ESD. A dispersive finite-difference time-domain method was used, in which biological tissue is assumed to obey a four-pole Debye model. From our computational results, the first peak of the discharge current was almost identical across adult and child models. The decay of the induced current in the child was also faster due mainly to its smaller body capacitance compared to the adult models. The induced electric fields in the forefingers were comparable across different models. However, the electric field induced in the arm of the child model was found to be greater than that in the adult models primarily because of its smaller cross-sectional area. The tendency for greater doses in the child has also been reported for power frequency sinusoidal contact current exposures as reported by other investigators.
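The four-pole Debye model assumed for biological tissue expresses the complex relative permittivity as a sum of relaxation terms, eps(w) = eps_inf + sum_p delta_eps_p/(1 + j*w*tau_p) + sigma/(j*w*eps0). A sketch evaluating that dispersion; the pole parameters below are illustrative placeholders, not fitted tissue values:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def debye_permittivity(omega, eps_inf, delta_eps, tau, sigma=0.0):
    """Complex relative permittivity of an N-pole Debye medium:
    eps(w) = eps_inf + sum_p delta_eps_p / (1 + j*w*tau_p) + sigma / (j*w*EPS0)."""
    omega = np.asarray(omega, dtype=float)
    eps = np.full(omega.shape, eps_inf, dtype=complex)
    for de, t in zip(delta_eps, tau):
        eps += de / (1.0 + 1j * omega * t)
    if sigma:
        eps += sigma / (1j * omega * EPS0)
    return eps

# Illustrative four-pole parameters (placeholders, not measured tissue data).
omega = 2 * np.pi * np.logspace(3, 9, 7)  # 1 kHz .. 1 GHz
eps = debye_permittivity(omega, eps_inf=4.0,
                         delta_eps=[40.0, 200.0, 1e3, 1e4],
                         tau=[8e-12, 1e-9, 1e-7, 1e-5])
# The real part relaxes from its low-frequency value toward eps_inf.
```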
Energy Use and Power Levels in New Monitors and Personal Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay
2002-07-23
Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria, as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in, and opportunities to reduce, power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single, very low sleep power level, well below the current ENERGY STAR criteria for sleep power consumption.
These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC). Current ENERGY STAR monitor and computer criteria do not specify off or on power, but our results suggest opportunities for saving energy in these modes. Also, significant differences between CRT and LCD technology, and between field-measured and manufacturer-reported power levels, reveal the need for standard methods and metrics for measuring and comparing monitor power consumption.
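Unit energy consumption of the kind discussed above combines the measured power in each mode with the annual hours spent in that mode. A minimal sketch; the power levels and duty cycle below are hypothetical, not figures from the study:

```python
def annual_uec_kwh(power_watts: dict, hours_per_year: dict) -> float:
    """Unit energy consumption: sum over modes of (power x annual hours), in kWh."""
    assert set(power_watts) == set(hours_per_year)
    return sum(power_watts[m] * hours_per_year[m] for m in power_watts) / 1000.0

# Hypothetical monitor: measured mode powers and an assumed usage profile
# totalling 8760 hours per year.
power = {"on": 35.0, "sleep": 2.0, "off": 1.0}
hours = {"on": 2000.0, "sleep": 2000.0, "off": 4760.0}
print(annual_uec_kwh(power, hours))  # dominated by the "on" mode
```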
Grindon, Christina; Harris, Sarah; Evans, Tom; Novik, Keir; Coveney, Peter; Laughton, Charles
2004-07-15
Molecular modelling played a central role in the discovery of the structure of DNA by Watson and Crick. Today, such modelling is done on computers: the more powerful these computers are, the more detailed and extensive can be the study of the dynamics of such biological macromolecules. To fully harness the power of modern massively parallel computers, however, we need to develop and deploy algorithms which can exploit the structure of such hardware. The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a scalable molecular dynamics code including long-range Coulomb interactions, which has been specifically designed to function efficiently on parallel platforms. Here we describe the implementation of the AMBER98 force field in LAMMPS and its validation for molecular dynamics investigations of DNA structure and flexibility against the benchmark of results obtained with the long-established code AMBER6 (Assisted Model Building with Energy Refinement, version 6). Extended molecular dynamics simulations on the hydrated DNA dodecamer d(CTTTTGCAAAAG)(2), which has previously been the subject of extensive dynamical analysis using AMBER6, show that it is possible to obtain excellent agreement in terms of static, dynamic and thermodynamic parameters between AMBER6 and LAMMPS. In comparison with AMBER6, LAMMPS shows greatly improved scalability in massively parallel environments, opening up the possibility of efficient simulations of order-of-magnitude larger systems and/or for order-of-magnitude greater simulation times.
Beam and Plasma Physics Research
1990-06-01
…in high-power microwave computations and theory and high-energy plasma computations and theory. The HPM computations concentrated on... 2.1 REPORT INDEX 7 2.2 TASK AREA 2: HIGH-POWER RF EMISSION AND CHARGED-PARTICLE BEAM PHYSICS COMPUTATION, MODELING AND THEORY 10 2.2.1 Subtask 02-01... Vulnerability of Space Assets 22 2.2.6 Subtask 02-06, Microwave Computer Program Enhancements 22 2.2.7 Subtask 02-07, High-Power Microwave Transvertron Design 23
3-D Electromagnetic field analysis of wireless power transfer system using K computer
NASA Astrophysics Data System (ADS)
Kawase, Yoshihiro; Yamaguchi, Tadashi; Murashita, Masaya; Tsukada, Shota; Ota, Tomohiro; Yamamoto, Takeshi
2018-05-01
We analyze the electromagnetic field of a wireless power transfer system using the 3-D parallel finite element method on the K computer, a supercomputer in Japan. We show that the electromagnetic field of the wireless power transfer system can be analyzed in a practical time using parallel computation on the K computer; moreover, the accuracy of the loss calculation improves as the mesh division of the shield becomes finer.
Computer program analyzes and monitors electrical power systems (POSIMO)
NASA Technical Reports Server (NTRS)
Jaeger, K.
1972-01-01
Requirements to monitor and/or simulate electric power distribution, power balance, and charge budget are discussed. A computer program to analyze the power system and generate a set of characteristic power system data is described. The application of status indicators to denote different exclusive conditions is presented.
NASA Technical Reports Server (NTRS)
Schilling, D. L.; Oh, S. J.; Thau, F.
1975-01-01
Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro
2017-08-01
In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of Things (IoT) applications. In energy-harvesting applications, power supplies generated from renewable power sources cause frequent power failures, so data being processed must be backed up when a power failure occurs. Unless data are safely backed up before the power supply diminishes, reinitialization is required when power is recovered, which results in low energy efficiency and slow operation. Using nonvolatile devices in processors and memories can realize a faster backup than a conventional volatile computer system, leading to higher energy efficiency. To evaluate the energy efficiency under frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, showing energy reductions of a few orders of magnitude in comparison with a volatile processor with SRAM.
NASA Technical Reports Server (NTRS)
Purser, Paul E.; Spear, Margaret F.
1947-01-01
A wind-tunnel investigation has been made to determine the effects of unsymmetrical horizontal-tail arrangements on the power-on static longitudinal stability of a single-engine single-rotation airplane model. Although the tests and analyses showed that extreme asymmetry in the horizontal tail indicated a reduction in power effects on longitudinal stability for single-engine single-rotation airplanes, the particular "practical" arrangement tested did not show marked improvement. Differences in average downwash between the normal tail arrangement and various other tail arrangements estimated from computed values of propeller-slipstream rotation agreed with values estimated from pitching-moment test data for the flaps-up condition (low thrust and torque) and disagreed for the flaps-down condition (high thrust and torque). This disagreement indicated the necessity for continued research to determine the characteristics of the slip-stream behind various propeller-fuselage-wing combinations. Out-of-trim lateral forces and moments of the unsymmetrical tail arrangements that were best from consideration of longitudinal stability were no greater than those of the normal tail arrangement.
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. 
B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. 
P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. 
B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. 
B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. 
D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. 
A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2016-11-01
We report results of a deep all-sky search for periodic gravitational waves from isolated neutron stars in data from the S6 LIGO science run. The search was possible thanks to the computing power provided by the volunteers of the Einstein@Home distributed computing project. We find no significant signal candidate and set the most stringent upper limits to date on the amplitude of gravitational wave signals from the target population. At the frequency of best strain sensitivity, between 170.5 and 171 Hz, we set a 90% confidence upper limit of 5.5 × 10^-25, while at the high end of our frequency range, around 505 Hz, we achieve upper limits ≃ 10^-24. At 230 Hz we can exclude sources with ellipticities greater than 10^-6 within 100 pc of Earth with the fiducial value of the principal moment of inertia of 10^38 kg m^2. If we assume a higher (lower) gravitational wave spin-down we constrain farther (closer) objects to higher (lower) ellipticities.
Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimation.
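The per-sample update behind such a transform can be sketched directly: each new sample (t_n, x_n) contributes x_n·exp(-2j·pi·f_k·t_n) to bin k, costing O(N) per update with no interpolation to a uniform grid. This is a simplified sketch, not the authors' exact formulation, which additionally handles windowing and recursive normalisation:

```python
import math, cmath, random

class RecursiveDFT:
    """O(N)-per-sample Fourier accumulation for nonuniformly sampled signals.

    Simplified sketch: each incoming sample (t_n, x_n) adds
    x_n * exp(-2j*pi*f_k*t_n) to frequency bin k, so one update over N bins
    costs O(N) and needs no resampling to uniform time intervals."""

    def __init__(self, freqs_hz):
        self.freqs = list(freqs_hz)
        self.bins = [0j] * len(self.freqs)
        self.count = 0

    def update(self, t, x):
        for k, f in enumerate(self.freqs):
            self.bins[k] += x * cmath.exp(-2j * math.pi * f * t)
        self.count += 1

    def power(self):
        """Crude periodogram-style estimate |X_k|^2 / n."""
        n = max(self.count, 1)
        return [abs(b) ** 2 / n for b in self.bins]

# Irregularly spaced samples of a 2 Hz sinusoid (think jittered beat times).
random.seed(0)
rdft = RecursiveDFT([1.0, 2.0, 3.0])
t = 0.0
for _ in range(400):
    t += 0.05 + 0.02 * random.random()   # nonuniform sampling interval
    rdft.update(t, math.cos(2 * math.pi * 2.0 * t))
p = rdft.power()
assert p[1] > p[0] and p[1] > p[2]      # energy concentrates in the 2 Hz bin
```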
NASA Technical Reports Server (NTRS)
Lichtenstein, J. H.
1978-01-01
An analytical method of computing the averaging effect of wing-span size on the loading of a wing induced by random turbulence was adapted for use on a digital electronic computer. The turbulence input was assumed to have a Dryden power spectral density. The computations were made for lift, rolling moment, and bending moment for two span load distributions, rectangular and elliptic. Data are presented to show the wing-span averaging effect for wing-span ratios encompassing current airplane sizes. The rectangular wing-span loading showed a slightly greater averaging effect than did the elliptic loading. In the frequency range most bothersome to airplane passengers, the wing-span averaging effect can reduce the normal lift load, and thus the acceleration, by about 7 percent for a typical medium-sized transport. Some calculations were made to evaluate the effect of using a Von Karman turbulence representation. These results showed that using the Von Karman representation generally resulted in a span averaging effect about 3 percent larger.
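The Dryden vertical-gust power spectral density used as the turbulence input can be written Phi(Omega) = (sigma^2 L/pi)(1 + 3 L^2 Omega^2)/(1 + L^2 Omega^2)^2, with sigma the gust intensity, L the scale length, and Omega the spatial frequency. A sketch evaluating the one-sided form (the normalisation convention is an assumption; the study itself does not state one):

```python
import numpy as np

def dryden_vertical_psd(omega, sigma_w, L_w):
    """One-sided Dryden vertical-gust PSD:
    Phi(Omega) = sigma^2 * L/pi * (1 + 3*(L*Omega)^2) / (1 + (L*Omega)^2)^2."""
    LO2 = (L_w * np.asarray(omega, dtype=float)) ** 2
    return sigma_w**2 * L_w / np.pi * (1.0 + 3.0 * LO2) / (1.0 + LO2) ** 2

# Sanity check: integrating the one-sided PSD over frequency recovers the
# gust variance sigma_w^2 (here 1.0), up to truncation of the 1/Omega^2 tail.
om = np.arange(0.0, 500.0, 0.005)
phi = dryden_vertical_psd(om, sigma_w=1.0, L_w=1.0)
variance = float(np.sum(phi) * 0.005)
```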
System-wide power management control via clock distribution network
Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.
2015-05-19
An apparatus, method and computer program product for automatically controlling power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The plurality of processors in the parallel computing system receive the system clock signal including the encoded command, and adjust power dissipation according to the encoded command.
Reducing power consumption while performing collective operations on a plurality of compute nodes
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2011-10-18
Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
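The selection step described in this claim — each node choosing, among functionally equivalent implementations of a collective operation, the one with the lowest expected power draw — reduces to a table lookup. A minimal sketch; the implementation names and wattages are hypothetical, and a real system would measure its power consumption characteristics per machine:

```python
# Hypothetical power profiles (watts) for equivalent implementations of each
# collective operation type; placeholders, not measured values.
POWER_PROFILE = {
    "allreduce": {"recursive_doubling": 42.0, "ring": 35.0, "tree": 38.5},
    "broadcast": {"binomial_tree": 30.0, "scatter_allgather": 33.0},
}

def select_collective(op_type: str) -> str:
    """Pick the implementation of `op_type` with the lowest power draw."""
    candidates = POWER_PROFILE[op_type]
    return min(candidates, key=candidates.get)

print(select_collective("allreduce"))  # -> ring
```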
High Efficiency Ka-Band Solid State Power Amplifier Waveguide Power Combiner
NASA Technical Reports Server (NTRS)
Wintucky, Edwin G.; Simons, Rainee N.; Chevalier, Christine T.; Freeman, Jon C.
2010-01-01
A novel Ka-band high efficiency asymmetric waveguide four-port combiner for coherent combining of two Monolithic Microwave Integrated Circuit (MMIC) Solid State Power Amplifiers (SSPAs) having unequal outputs has been successfully designed, fabricated and characterized over the NASA deep space frequency band from 31.8 to 32.3 GHz. The measured combiner efficiency is greater than 90 percent, the return loss greater than 18 dB, and the input port isolation greater than 22 dB. The manufactured combiner was designed for an input power ratio of 2:1 but can be custom designed for any arbitrary power ratio. Applications considered are NASA's space communications systems needing 6 to 10 W of radio frequency (RF) power. This Technical Memorandum (TM) is an expanded version of the article recently published in Institute of Engineering and Technology (IET) Electronics Letters.
Square Kilometre Array Science Data Processing
NASA Astrophysics Data System (ADS)
Nikolic, Bojan; SDP Consortium, SKA
2014-04-01
The Square Kilometre Array (SKA) is planned to be, by a large factor, the largest and most sensitive radio telescope ever constructed. The first phase of the telescope (SKA1), now in the design phase, will in itself represent a major leap in capabilities compared to current facilities. These advances are to a large extent being made possible by advances in available computer processing power, so that larger numbers of smaller, simpler and cheaper receptors can be used. As a result of greater reliance and demands on computing, ICT is becoming an ever more integral part of the telescope. The Science Data Processor is the part of the SKA system responsible for imaging, calibration, pulsar timing, confirmation of pulsar candidates, derivation of some further data products, archiving and providing the data to the users. It will accept visibilities at data rates of several TB/s and require processing power for imaging in the range of 100 petaFLOPS to ~1 exaFLOPS, putting SKA1 into the regime of exascale radio astronomy. In my talk I will present the overall SKA system requirements and how they drive these high data throughput and processing requirements. Some of the key challenges for the design of SDP are: - Identifying sufficient parallelism to utilise the very large numbers of separate compute cores required to provide exascale computing throughput - Managing efficiently the high internal data flow rates - A conceptual architecture and software engineering approach that will allow adaptation of the algorithms as we learn about the telescope and the atmosphere during the commissioning and operational phases - System management that will deal gracefully with (inevitably frequent) failures of individual units of the processing system. I will also present possible initial architectures for the SDP system that attempt to address these and other challenges.
ERIC Educational Resources Information Center
Freeman, David M.; And Others
1982-01-01
Data collected from a sample of farmers representing 15 Pakistani villages show that greater equality in village power distribution is positively related to greater adoption of agricultural technology as analyzed at the village level. When effects of water control are parceled out, the power-adoption relationship is strengthened. (LC)
77 FR 41180 - Combined Notice of Filings #2
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-12
...-2097-004. Applicants: Kansas City Power & Light Company, KCP&L Greater Missouri Operations Company. Description: Kansas City Power and Light Company and KCP&L Greater Missouri Operations Company submits their... 1522R1 Kansas City Power & Light Co. LGIA to be effective 6/5/2012. Filed Date: 7/3/12. Accession Number...
High-speed extended-term time-domain simulation for online cascading analysis of power system
NASA Astrophysics Data System (ADS)
Fu, Chuan
A high-speed extended-term (HSET) time-domain simulator (TDS), intended to become part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events; (ii) the ability to simulate both fast and slow dynamics 1-3 hours in advance; (iii) inclusion of rigorous protection-system modeling; (iv) intelligence for corrective-action identification, storage, and fast retrieval; and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing power system dynamics, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable-step-size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time-domain simulation (HSET-TDS) for online purposes, this thesis presents principles for designing numerical solvers of the differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU).
We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the 13029-bus PJM system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time-domain simulation software for supercomputers. The stiffness-decoupling method combines the advantages of implicit methods (A-stability) and explicit methods (less computation); with the new stiffness-detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task by scale, using the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task along the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events divides the whole simulation along the time axis through the simulated sequence of cascading events. Of all the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent and therefore minimal communication time is needed.
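The solver-design ideas above (an implicit A-stable integrator combined with a "dishonest" Newton iteration that reuses a factorized Jacobian) can be illustrated with a minimal sketch. This is not the thesis's HSET-TDS code: the trapezoidal rule stands in for HH4, the system is a toy ODE rather than a power-system DAE, and all names are invented for illustration.

```python
import numpy as np

def trapezoidal_step(f, J, x0, h, tol=1e-10, max_iter=20):
    """One implicit trapezoidal-rule step for x' = f(x).

    Solves g(x1) = x1 - x0 - (h/2)*(f(x0) + f(x1)) = 0 by Newton
    iteration.  In the "Very Dishonest Newton" spirit, the Jacobian
    of g is built and factorized once and reused for every iteration
    of the step instead of being refreshed each time.
    """
    f0 = f(x0)
    x1 = x0 + h * f0                       # explicit-Euler predictor
    A = np.eye(len(x0)) - (h / 2) * J(x0)  # frozen Newton matrix
    for _ in range(max_iter):
        g = x1 - x0 - (h / 2) * (f0 + f(x1))
        dx = np.linalg.solve(A, g)
        x1 = x1 - dx
        if np.linalg.norm(dx) < tol:
            break
    return x1

# Linear test problem x' = -x, exact solution exp(-t).
f = lambda x: -x
J = lambda x: -np.eye(1)
x = np.array([1.0])
for _ in range(10):
    x = trapezoidal_step(f, J, x, 0.1)
print(x[0])  # ~0.36757, close to exp(-1) ~ 0.36788
```

Because the trapezoidal rule is only second order (h^2), a fourth-order method such as HH4 would allow larger steps at the same accuracy, which is the efficiency argument made in the abstract.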
Relationship Power, Sexual Decision Making, and HIV Risk Among Midlife and Older Women.
Altschuler, Joanne; Rhee, Siyon
2015-01-01
The number of midlife and older women with HIV/AIDS is high and increasing, especially among women of color. This article addresses these demographic realities by reporting findings about self-esteem, relationship power, and HIV risk from a pilot study of midlife and older women. A purposive sample (N = 110) of ethnically, economically, and educationally diverse women 40 years and older from the Greater Los Angeles Area was surveyed to determine their levels of self-esteem, general relationship power, sexual decision-making power, safer sex behaviors, and HIV knowledge. Women with higher levels of self-esteem exercised greater power in their relationships with their partners. Women with higher levels of general relationship power and self-esteem tended to exercise greater power in sexual decision making, such as having sex and choosing sexual acts. Income and sexual decision-making power were statistically significant in predicting the use of condoms. Implications and recommendations for future HIV/AIDS research and intervention targeting midlife and older women are presented.
Central Fetal Monitoring With and Without Computer Analysis: A Randomized Controlled Trial.
Nunes, Inês; Ayres-de-Campos, Diogo; Ugwumadu, Austin; Amin, Pina; Banfield, Philip; Nicoll, Antony; Cunningham, Simon; Sousa, Paulo; Costa-Santos, Cristina; Bernardes, João
2017-01-01
To evaluate whether intrapartum fetal monitoring with computer analysis and real-time alerts decreases the rate of newborn metabolic acidosis or obstetric intervention when compared with visual analysis. A randomized clinical trial carried out in five hospitals in the United Kingdom evaluated women with singleton, vertex fetuses of 36 weeks of gestation or greater during labor. Continuous central fetal monitoring by computer analysis and online alerts (experimental arm) was compared with visual analysis (control arm). Fetal blood sampling and electrocardiographic ST waveform analysis were available in both arms. The primary outcome was incidence of newborn metabolic acidosis (pH less than 7.05 and base deficit greater than 12 mmol/L). Prespecified secondary outcomes included operative delivery, use of fetal blood sampling, low 5-minute Apgar score, neonatal intensive care unit admission, hypoxic-ischemic encephalopathy, and perinatal death. A sample size of 3,660 per group (N=7,320) was planned to be able to detect a reduction in the rate of metabolic acidosis from 2.8% to 1.8% (two-tailed α of 0.05 with 80% power). From August 2011 through July 2014, 32,306 women were assessed for eligibility and 7,730 were randomized: 3,961 to computer analysis and online alerts, and 3,769 to visual analysis. Baseline characteristics were similar in both groups. Metabolic acidosis occurred in 16 participants (0.40%) in the experimental arm and 22 participants (0.58%) in the control arm (relative risk 0.69 [0.36-1.31]). No statistically significant differences were found in the incidence of secondary outcomes. Compared with visual analysis, computer analysis of fetal monitoring signals with real-time alerts did not significantly reduce the rate of metabolic acidosis or obstetric intervention. A lower-than-expected rate of newborn metabolic acidosis was observed in both arms of the trial. ISRCTN Registry, http://www.isrctn.com, ISRCTN42314164.
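As a sanity check, the reported relative risk for the primary outcome follows directly from the event counts given in the abstract:

```python
# Metabolic acidosis counts reported in the trial.
events_exp, n_exp = 16, 3961   # computer analysis + real-time alerts
events_ctl, n_ctl = 22, 3769   # visual analysis

risk_exp = events_exp / n_exp  # absolute risk, experimental arm
risk_ctl = events_ctl / n_ctl  # absolute risk, control arm
rr = risk_exp / risk_ctl       # relative risk

print(f"{100 * risk_exp:.2f}% vs {100 * risk_ctl:.2f}%, RR = {rr:.2f}")
# -> 0.40% vs 0.58%, RR = 0.69, matching the abstract
```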
Coherent Activity in Bilateral Parieto-Occipital Cortices during P300-BCI Operation.
Takano, Kouji; Ora, Hiroki; Sekihara, Kensuke; Iwaki, Sunao; Kansaku, Kenji
2014-01-01
The visual P300 brain-computer interface (BCI), a popular system for electroencephalography (EEG)-based BCI, uses the P300 event-related potential to select an icon arranged in a flicker matrix. In earlier studies, we used green/blue (GB) luminance and chromatic changes in the P300-BCI system and reported that this luminance and chromatic flicker matrix was associated with better performance and greater subject comfort compared with the conventional white/gray (WG) luminance flicker matrix. To highlight areas involved in improved P300-BCI performance, we used simultaneous EEG-fMRI recordings and showed enhanced activities in bilateral and right lateralized parieto-occipital areas. Here, to capture coherent activities of the areas during P300-BCI, we collected whole-head 306-channel magnetoencephalography data. When comparing functional connectivity between the right and left parieto-occipital channels, significantly greater functional connectivity in the alpha band was observed under the GB flicker matrix condition than under the WG flicker matrix condition. Current sources were estimated with a narrow-band adaptive spatial filter, and mean imaginary coherence was computed in the alpha band. Significantly greater coherence was observed in the right posterior parietal cortex under the GB than under the WG condition. Re-analysis of previous EEG-based P300-BCI data showed significant correlations between the power of the coherence of the bilateral parieto-occipital cortices and their performance accuracy. These results suggest that coherent activity in the bilateral parieto-occipital cortices plays a significant role in effectively driving the P300-BCI.
Power-Time Curve Comparison between Weightlifting Derivatives
Suchomel, Timothy J.; Sole, Christopher J.
2017-01-01
This study examined the power production differences between weightlifting derivatives through a comparison of power-time (P-t) curves. Thirteen resistance-trained males performed hang power clean (HPC), jump shrug (JS), and hang high pull (HHP) repetitions at relative loads of 30%, 45%, 65%, and 80% of their one repetition maximum (1RM) HPC. Relative peak power (PPRel), work (WRel), and P-t curves were compared. The JS produced greater PPRel than the HPC (p < 0.001, d = 2.53) and the HHP (p < 0.001, d = 2.14). In addition, the HHP PPRel was statistically greater than the HPC (p = 0.008, d = 0.80). Similarly, the JS produced greater WRel compared to the HPC (p < 0.001, d = 1.89) and HHP (p < 0.001, d = 1.42). Furthermore, HHP WRel was statistically greater than the HPC (p = 0.003, d = 0.73). The P-t profiles of each exercise were similar during the first 80-85% of the movement; however, during the final 15-20% of the movement the P-t profile of the JS was found to be greater than the HPC and HHP. The JS produced greater PPRel and WRel compared to the HPC and HHP with large effect size differences. The HHP produced greater PPRel and WRel than the HPC with moderate effect size differences. The JS and HHP produced markedly different P-t profiles in the final 15-20% of the movement compared to the HPC. Thus, these exercises may be superior methods of training to enhance PPRel. The greatest differences in PPRel between the JS and HHP and the HPC occurred at lighter loads, suggesting that loads of 30-45% 1RM HPC may provide the best training stimulus when using the JS and HHP. In contrast, loads ranging 65-80% 1RM HPC may provide an optimal stimulus for power production during the HPC. Key points: - The JS and HHP exercises produced greater relative peak power and relative work compared to the HPC. - Although the power-time curves were similar during the first 80-85% of the movement, the JS and HHP possessed unique power-time characteristics during the final 15-20% of the movement compared to the HPC. - The JS and HHP may be effectively implemented to train peak power characteristics, especially using loads ranging from 30-45% of an individual's 1RM HPC. - The HPC may be best implemented using loads ranging from 65-80% of an individual's 1RM HPC. PMID:28912659
Home Media and Children’s Achievement and Behavior
Hofferth, Sandra L.
2010-01-01
This study provides a national picture of the time American 6–12 year olds spent playing video games, using the computer, and watching television at home in 1997 and 2003 and the association of early use with their achievement and behavior as adolescents. Girls benefited from computers more than boys and Black children’s achievement benefited more from greater computer use than did that of White children. Greater computer use in middle childhood was associated with increased achievement for White and Black girls and Black boys, but not White boys. Greater computer play was also associated with a lower risk of becoming socially isolated among girls. Computer use does not crowd out positive learning-related activities, whereas video game playing does. Consequently, increased video game play had both positive and negative associations with the achievement of girls but not boys. For boys, increased video game play was linked to increased aggressive behavior problems. PMID:20840243
Evaluating biomechanics of user-selected sitting and standing computer workstation.
Lin, Michael Y; Barbir, Ana; Dennerlein, Jack T
2017-11-01
A standing computer workstation has become a popular workplace intervention to reduce sedentary behavior at work. However, users' interactions with a standing computer workstation, and how these differ from a sitting workstation, need to be understood to assist in developing recommendations for use and setup. This study compared differences in upper extremity posture and muscle activity between user-selected sitting and standing workstation setups. Twenty participants (10 females, 10 males) volunteered for the study. 3-D posture, surface electromyography, and user-reported discomfort were measured while participants completed simulated tasks with their self-selected workstation setups. The sitting workstation was associated with more non-neutral shoulder postures and greater shoulder muscle activity, while the standing workstation induced a greater wrist adduction angle and greater extensor carpi radialis muscle activity. The sitting workstation was also associated with greater shoulder abduction postural variation (90th-10th percentile), while the standing workstation was associated with greater variation for shoulder rotation and wrist extension. Users reported similar overall discomfort levels within the first 10 min of work but had more than twice as much discomfort while standing than sitting after 45 min, with most discomfort reported in the low back for standing and the shoulder for sitting. These measures provide insight into users' different interactions with sitting and standing workstations; alternating between the two configurations in short bouts may be a way of changing the loading pattern on the upper extremity. Copyright © 2017 Elsevier Ltd. All rights reserved.
GPS synchronized power system phase angle measurements
NASA Astrophysics Data System (ADS)
Wilson, Robert E.; Sterlina, Patrick S.
1994-09-01
This paper discusses the use of Global Positioning System (GPS) synchronized equipment for the measurement and analysis of key power system quantities. Two GPS-synchronized phasor measurement units (PMUs) were installed before testing. The tests indicated that the PMUs recorded the dynamic response of the power system phase angles when the northern California power grid was excited by artificial short circuits. Power system planning engineers perform detailed computer simulations of the dynamic response of the power system to naturally occurring short circuits; these simulations use models of transmission lines, transformers, circuit breakers, and other high-voltage components. This work compares computer simulations of the same event with the field measurements.
Canovas, Carmen; van der Mooren, Marrie; Rosén, Robert; Piers, Patricia A; Wang, Li; Koch, Douglas D; Artal, Pablo
2015-05-01
To determine the impact of the equivalent refractive index (ERI) on intraocular lens (IOL) power prediction for eyes with previous myopic laser in situ keratomileusis (LASIK) using custom ray tracing. AMO B.V., Groningen, the Netherlands, and the Department of Ophthalmology, Baylor College of Medicine, Houston, Texas, USA. Retrospective data analysis. The ERI was calculated individually from the post-LASIK total corneal power. Two methods to account for the posterior corneal surface were tested; that is, calculation from pre-LASIK data or from post-LASIK data only. Four IOL power predictions were generated using a computer-based ray-tracing technique, including individual ERI results from both calculation methods, a mean ERI over the whole population, and the ERI for normal patients. For each patient, IOL power results calculated from the four predictions as well as those obtained with the Haigis-L were compared with the optimum IOL power calculated after cataract surgery. The study evaluated 25 patients. The mean and range of ERI values determined using post-LASIK data were similar to those determined from pre-LASIK data. Introducing individual or an average ERI in the ray-tracing IOL power calculation procedure resulted in mean IOL power errors that were not significantly different from zero. The ray-tracing procedure that includes an average ERI gave a greater percentage of eyes with an IOL power prediction error within ±0.5 diopter than the Haigis-L (84% versus 52%). For IOL power determination in post-LASIK patients, custom ray tracing including a modified ERI was an accurate procedure that exceeded the current standards for normal eyes. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing
2006-11-01
in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more...complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased...the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and
Operating length and velocity of human M. vastus lateralis fascicles during vertical jumping
Nikolaidou, Maria Elissavet; Marzilger, Robert; Bohm, Sebastian; Mersmann, Falk
2017-01-01
Humans achieve greater jump height during a counter-movement jump (CMJ) than in a squat jump (SJ). However, the crucial difference is the mean mechanical power output during the propulsion phase, which could be determined by intrinsic neuro-muscular mechanisms for power production. We measured M. vastus lateralis (VL) fascicle length changes and activation patterns and assessed the force–length, force–velocity and power–velocity potentials during the jumps. Compared with the SJ, the VL fascicles operated on a more favourable portion of the force–length curve (7% greater force potential, i.e. fraction of VL maximum force according to the force–length relationship) and more disadvantageous portion of the force–velocity curve (11% lower force potential, i.e. fraction of VL maximum force according to the force–velocity relationship) in the CMJ, indicating a reciprocal effect of force–length and force–velocity potentials for force generation. The higher muscle activation (15%) could therefore explain the moderately greater jump height (5%) in the CMJ. The mean fascicle-shortening velocity in the CMJ was closer to the plateau of the power–velocity curve, which resulted in a greater (15%) power–velocity potential (i.e. fraction of VL maximum power according to the power–velocity relationship). Our findings provide evidence for a cumulative effect of three different mechanisms—i.e. greater force–length potential, greater power–velocity potential and greater muscle activity—for an advantaged power production in the CMJ contributing to the marked difference in mean mechanical power (56%) compared with SJ. PMID:28573027
Method of Real-Time Principal-Component Analysis
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu
2005-01-01
Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.
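The abstract does not give DOGEDYN's update rule, so as a hedged illustration only, here is the classic Oja's rule, a gradient-descent-based sequential PCA of the kind DOGEDYN improves upon. A fixed learning rate is used here, whereas DOGEDYN adapts its initial learning rate dynamically; all names and data are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data with dominant variance along (1, 1)/sqrt(2).
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])
R = np.array([[1, -1], [1, 1]]) / np.sqrt(2)  # rotation matrix
X = X @ R.T

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.001                      # fixed learning rate for this sketch
for x in X:                      # one sample at a time: sequential PCA
    y = w @ x
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term with decay

w /= np.linalg.norm(w)
print(w)  # approximately +/-(0.707, 0.707), the leading principal axis
```

The per-sample update uses only a handful of multiply-accumulates on the current weight vector, which is why this family of algorithms maps naturally onto compact, low-power VLSI as the abstract describes.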
Real time ray tracing based on shader
NASA Astrophysics Data System (ADS)
Gui, JiangHeng; Li, Min
2017-07-01
Ray tracing is a rendering algorithm that generates an image by tracing light rays through an image plane; it can simulate complicated optical phenomena such as refraction, depth of field, and motion blur. Compared with rasterization, ray tracing can achieve more realistic rendering results, but at much greater computational cost: rendering even a simple scene can consume a great deal of time. With improvements in GPU performance and the advent of the programmable rendering pipeline, complicated algorithms can now be implemented directly in shaders. This paper therefore proposes a new method that implements ray tracing directly in a fragment shader, covering surface intersection, importance sampling, and progressive rendering. With the help of the GPU's high throughput, it achieves real-time rendering of simple scenes.
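The per-pixel core of such a shader is a ray-surface intersection test. As a minimal sketch (in Python rather than shader code, with invented names), the standard ray-sphere quadratic that a fragment shader would solve for every pixel looks like this:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the smallest positive t with origin + t*direction on the
    sphere, or None if the ray misses.  direction is assumed to be
    normalized, so the quadratic's leading coefficient is 1."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c           # discriminant of t^2 + b t + c
    if disc < 0:
        return None                # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2 # nearer of the two roots
    return t if t > 0 else None

# Ray from the origin down the -z axis toward a unit sphere at z = -3.
t = hit_sphere((0, 0, 0), (0, 0, -1), (0, 0, -3), 1.0)
print(t)  # 2.0: the ray enters the sphere one unit in front of its centre
```

In the paper's setting this test runs in parallel for every fragment, and progressive rendering accumulates one sample per frame into an average.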
Computer Power. Part 2: Electrical Power Problems and Their Amelioration.
ERIC Educational Resources Information Center
Price, Bennett J.
1989-01-01
Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…
Computational Power of Symmetry-Protected Topological Phases.
Stephen, David T; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert
2017-07-07
We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.
Emulating a million machines to investigate botnets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudish, Donald W.
2010-06-01
Researchers at Sandia National Laboratories in Livermore, California are creating what is in effect a vast digital petri dish able to hold one million operating systems at once in an effort to study the behavior of rogue programs known as botnets. Botnets are used extensively by malicious computer hackers to steal computing power from Internet-connected computers. The hackers harness the stolen resources into a scattered but powerful computer that can be used to send spam, execute phishing scams, or steal digital information. These remote-controlled 'distributed computers' are difficult to observe and track. Botnets may take over parts of tens of thousands or in some cases even millions of computers, making them among the world's most powerful computers for some applications.
In-silico wear prediction for knee replacements--methodology and corroboration.
Strickland, M A; Taylor, M
2009-07-22
The capability to predict in-vivo wear of knee replacements is a valuable pre-clinical analysis tool for implant designers. Traditionally, time-consuming experimental tests provided the principal means of investigating wear. Today, computational models offer an alternative. However, the validity of these models has not been demonstrated across a range of designs and test conditions, and several different formulas are in contention for estimating wear rates, limiting confidence in the predictive power of these in-silico models. This study collates and retrospectively simulates a wide range of experimental wear tests using fast rigid-body computational models with extant wear prediction algorithms, to assess the performance of current in-silico wear prediction tools. The number of tests corroborated gives a broader, more general assessment of the performance of these wear-prediction tools, and provides better estimates of the wear 'constants' used in computational models. High-speed rigid-body modelling allows a range of alternative algorithms to be evaluated. Whilst most cross-shear (CS)-based models perform comparably, the 'A/A+B' wear model appears to offer the best predictive power amongst existing wear algorithms. However, the range and variability of experimental data leaves considerable uncertainty in the results. More experimental data with reduced variability and more detailed reporting of studies will be necessary to corroborate these models with greater confidence. With simulation times reduced to only a few minutes, these models are ideally suited to large-volume 'design of experiment' or probabilistic studies (which are essential if pre-clinical assessment tools are to begin addressing the degree of variation observed clinically and in explanted components).
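The wear formulas compared in the study are variants of sliding-distance wear laws. As a hedged illustration only, a classic Archard-type estimate is sketched below; the wear factor and loads are invented, and the paper's cross-shear-dependent formulations (such as "A/A+B") additionally modulate the wear factor by the cross-shear ratio rather than treating it as a constant.

```python
def archard_wear(load_N, sliding_m, k):
    """Classic Archard estimate: worn volume = k * load * sliding
    distance, with the wear factor k in mm^3 / (N m)."""
    return k * load_N * sliding_m

# Illustrative (hypothetical) numbers for one bin of a gait cycle:
k = 1.0e-7  # assumed wear factor, mm^3/(N m) -- not from the paper
vol = archard_wear(load_N=2000.0, sliding_m=0.02, k=k)
print(vol)  # 4e-06 mm^3 of polyethylene removed in this bin
```

A simulator sums such contributions over discretized contact patches and gait-cycle increments, which is why the fitted wear "constants" estimated from many corroborated tests matter so much to predictive power.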
Billeke, Pablo; Armijo, Alejandra; Castillo, Daniel; López, Tamara; Zamorano, Francisco; Cosmelli, Diego; Aboitiz, Francisco
2015-09-15
People with schizophrenia show social impairments that are related to functional outcomes. We tested the hypothesis that social interaction impairments in people with schizophrenia are related to alterations in the predictions of others' behavior and explored their underlying neurobiological mechanisms. Electroencephalography was performed in 20 patients with schizophrenia and 25 well-matched control subjects. Participants played as proposers in the repeated version of the Ultimatum Game believing that they were playing with another human or with a computer. The power of oscillatory brain activity was obtained by means of the wavelet transform. We performed a trial-by-trial correlation between the oscillatory activity and the risk of the offer. Control subjects adapted their offers when playing with computers and tended to maintain their offers when playing with humans, as such revealing learning and bargaining strategies, respectively. People with schizophrenia presented the opposite pattern of behavior in both games. During the anticipation of others' responses, the power of alpha oscillations correlated with the risk of the offers made, in a different way in both games. Patients with schizophrenia presented a greater correlation in computer games than in human games; control subjects showed the opposite pattern. The alpha activity correlated with positive symptoms. Our results reveal an alteration in social interaction in patients with schizophrenia that is related to oscillatory brain activity, suggesting maladjustment of expectation when patients face social and nonsocial agents. This alteration is related to psychotic symptoms and could guide further therapies for improving social functioning in patients with schizophrenia. Copyright © 2015 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Howatson, Glyn; Brandon, Raphael; Hunter, Angus M
2016-04-01
There is a great deal of research on the responses to resistance training; however, information on the responses to strength and power training conducted by elite strength and power athletes is sparse. To establish the acute and 24-h neuromuscular and kinematic responses to Olympic-style barbell strength and power exercise in elite athletes. Ten elite track and field athletes completed a series of 3 back-squat exercises each consisting of 4 × 5 repetitions. These were done as either strength or power sessions on separate days. Surface electromyography (sEMG), bar velocity, and knee angle were monitored throughout these exercises and maximal voluntary contraction (MVC), jump height, central activation ratio (CAR), and lactate were measured pre, post, and 24 h thereafter. Repetition duration, impulse, and total work were greater (P < .01) during strength sessions, with mean power being greater (P < .01) after the power sessions. Lactate increased (P < .01) after strength but not power sessions. sEMG increased (P < .01) across sets for both sessions, with the strength session increasing at a faster rate (P < .01) and with greater activation (P < .01) by the end of the final set. MVC declined (P < .01) after the strength and not the power session, which remained suppressed (P < .05) 24 h later, whereas CAR and jump height remained unchanged. A greater neuromuscular and metabolic demand after the strength and not power session is evident in elite athletes, which impaired maximal-force production for up to 24 h. This is an important consideration for planning concurrent athlete training.
Power Efficient Hardware Architecture of SHA-1 Algorithm for Trusted Mobile Computing
NASA Astrophysics Data System (ADS)
Kim, Mooseop; Ryou, Jaecheol
The Trusted Mobile Platform (TMP) is developed and promoted by the Trusted Computing Group (TCG), an industry standards body working to enhance the security of the mobile computing environment. The built-in SHA-1 engine in TMP is one of the most important circuit blocks and contributes to the performance of the whole platform because it is used as a key primitive supporting platform integrity and command authentication. Mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore, special architectures and design methods for a low-power SHA-1 circuit are required. In this paper, we present a novel and efficient hardware architecture of a low-power SHA-1 design for TMP. Our low-power SHA-1 hardware can compute a 512-bit data block using fewer than 7,000 gates and draws about 1.1 mA on a 0.25 μm CMOS process.
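For reference, the function such an engine implements is standard SHA-1 over 512-bit (64-byte) message blocks. A software equivalent of one block's digest, using Python's hashlib rather than the paper's hardware, is:

```python
import hashlib

# One 512-bit (64-byte) input block, as the TMP engine would consume.
block = bytes(range(64))

# hashlib handles the SHA-1 padding and compression internally; the
# hardware engine iterates the same 80-round compression function.
digest = hashlib.sha1(block).hexdigest()
print(digest)  # the 160-bit digest, printed as 40 hex characters
```

The fixed 160-bit output and simple 32-bit word operations of the compression rounds are what make SHA-1 amenable to the small, low-power datapath described in the abstract.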
NASA Technical Reports Server (NTRS)
Mckee, James W.
1990-01-01
This volume (2 of 4) contains the specification, structured flow charts, and code listing for the protocol. The purpose of an autonomous power system on a spacecraft is to relieve humans from having to continuously monitor and control the generation, storage, and distribution of power in the craft. This implies that algorithms will have been developed to monitor and control the power system, and that the power system will contain computers on which the algorithms run. There should be one control computer system that makes the high-level decisions and sends commands to and receives data from the other distributed computers. This requires a communications network and an efficient protocol by which the computers communicate. One of the major requirements on the protocol is that it be real-time, because of the need to control the power elements.
NASA Astrophysics Data System (ADS)
Langbein, J. O.
2016-12-01
Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimation (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
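The filter-based view can be sketched with the standard fractional-differencing recursion: white noise convolved with the filter below acquires a 1/f^n power spectrum with d = n/2, and a combined white-plus-power-law model simply adds a white sequence to the filtered one rather than combining covariances in quadrature. This is a generic illustration, not the est_noise code:

```python
def powerlaw_filter(d, m):
    """First m impulse-response coefficients of the fractional-integration
    filter that turns white noise into 1/f^(2d) power-law noise:
    h[0] = 1, h[k] = h[k-1] * (k - 1 + d) / k."""
    h = [1.0]
    for k in range(1, m):
        h.append(h[-1] * (k - 1 + d) / k)
    return h

def apply_filter(h, white):
    """Convolve a white-noise sequence with the filter h."""
    return [sum(h[j] * white[i - j] for j in range(min(i + 1, len(h))))
            for i in range(len(white))]
```

For d = 1 (spectral index n = 2, i.e. random-walk noise) the filter is a pure integrator, so filtering reduces to a cumulative sum; a combined noise model would be `a*white1[i] + b*apply_filter(h, white2)[i]`.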
Direct Energy Conversion for Nuclear Propulsion at Low Specific Mass
NASA Technical Reports Server (NTRS)
Scott, John H.
2014-01-01
The project will continue the FY13 JSC IR&D (October-2012 to September-2013) effort in Travelling Wave Direct Energy Conversion (TWDEC) in order to demonstrate its potential as the core of a high potential, game-changing, in-space propulsion technology. The TWDEC concept converts particle beam energy into radio frequency (RF) alternating current electrical power, such as can be used to heat the propellant in a plasma thruster. In a more advanced concept (explored in the Phase 1 NIAC project), the TWDEC could also be utilized to condition the particle beam such that it may transfer directed kinetic energy to a target propellant plasma for the purpose of increasing thrust and optimizing the specific impulse. The overall scope of the FY13 first-year effort was to build on both the 2012 Phase 1 NIAC research and the analysis and test results produced by Japanese researchers over the past twenty years to assess the potential for spacecraft propulsion applications. The primary objective of the FY13 effort was to create particle-in-cell computer simulations of a TWDEC. Other objectives included construction of a breadboard TWDEC test article, preliminary test calibration of the simulations, and construction of first order power system models to feed into mission architecture analyses with COPERNICUS tools. Due to funding cuts resulting from the FY13 sequestration, only the computer simulations and assembly of the breadboard test article were completed. The simulations, however, are of unprecedented flexibility and precision and were presented at the 2013 AIAA Joint Propulsion Conference. Also, the assembled test article will provide an ion current density two orders of magnitude above that available in previous Japanese experiments, thus enabling the first direct measurements of power generation from a TWDEC for FY14. 
The proposed FY14 effort will use the test article for experimental validation of the computer simulations and thus complete, at greater fidelity, the mission analysis products originally conceived for FY13.
Changing computing paradigms towards power efficiency
Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro
2014-01-01
Power awareness is fast becoming immensely important in computing, ranging from traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations, which finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. PMID:24842033
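A concrete instance of mixing low- and high-precision arithmetic for linear solves is classical iterative refinement: solve cheaply in low precision, then correct using residuals computed in high precision. The sketch below (a generic illustration on a 2×2 system, not the authors' implementation) emulates single precision by round-tripping doubles through IEEE 754 storage:

```python
import struct

def f32(x):
    """Round a Python float (double) to the nearest IEEE 754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

def solve_lo(A, b):
    """2x2 solve by Cramer's rule with every operation rounded to single."""
    det = f32(f32(A[0][0] * A[1][1]) - f32(A[0][1] * A[1][0]))
    x0 = f32(f32(f32(b[0] * A[1][1]) - f32(A[0][1] * b[1])) / det)
    x1 = f32(f32(f32(A[0][0] * b[1]) - f32(b[0] * A[1][0])) / det)
    return [x0, x1]

def refine(A, b, iters=5):
    """Iterative refinement: low-precision solves, high-precision residuals."""
    x = solve_lo(A, b)
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        d = solve_lo(A, r)               # correction solved in low precision
        x = [x[i] + d[i] for i in range(2)]  # accumulated in double
    return x
```

The expensive step runs entirely in the cheap precision, while a handful of high-precision residual evaluations recover near-double accuracy, which is the energy argument made above.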
Lepore, Natasha; Brun, Caroline A; Chiang, Ming-Chang; Chou, Yi-Yu; Dutton, Rebecca A; Hayashi, Kiralee M; Lopez, Oscar L; Aizenstein, Howard J; Toga, Arthur W; Becker, James T; Thompson, Paul M
2006-01-01
Tensor-based morphometry (TBM) is widely used in computational anatomy as a means to understand shape variation between structural brain images. A 3D nonlinear registration technique is typically used to align all brain images to a common neuroanatomical template, and the deformation fields are analyzed statistically to identify group differences in anatomy. However, the differences are usually computed solely from the determinants of the Jacobian matrices that are associated with the deformation fields computed by the registration procedure. Thus, much of the information contained within those matrices gets thrown out in the process. Only the magnitude of the expansions or contractions is examined, while the anisotropy and directional components of the changes are ignored. Here we remedy this problem by computing multivariate shape change statistics using the strain matrices. As the latter do not form a vector space, means and covariances are computed on the manifold of positive-definite matrices to which they belong. We study the brain morphology of 26 HIV/AIDS patients and 14 matched healthy control subjects using our method. The images are registered using a high-dimensional 3D fluid registration algorithm, which optimizes the Jensen-Rényi divergence, an information-theoretic measure of image correspondence. The anisotropy of the deformation is then computed. We apply a manifold version of Hotelling's T2 test to the strain matrices. Our results complement those found from the determinants of the Jacobians alone and provide greater power in detecting group differences in brain structure.
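Computing a mean on the manifold of positive-definite matrices can be done in the log-Euclidean framework: map each matrix to its logarithm, average in that vector space, and map back. The sketch below is an illustrative log-Euclidean mean for symmetric 2×2 matrices via the closed-form eigendecomposition; the paper's statistics on full 3D strain tensors are more involved:

```python
import math

def sym2_fun(M, f):
    """Apply a scalar function f to a symmetric 2x2 matrix via its
    eigendecomposition M = R diag(l1, l2) R^T."""
    (a, b), (_, c) = M
    theta = 0.5 * math.atan2(2 * b, a - c)
    cs, sn = math.cos(theta), math.sin(theta)
    mid, rad = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    l1, l2 = f(mid + rad), f(mid - rad)
    return [[l1 * cs * cs + l2 * sn * sn, (l1 - l2) * cs * sn],
            [(l1 - l2) * cs * sn, l1 * sn * sn + l2 * cs * cs]]

def log_euclidean_mean(mats):
    """Mean on the SPD manifold: exp of the Euclidean mean of matrix logs."""
    logs = [sym2_fun(M, math.log) for M in mats]
    n = len(mats)
    avg = [[sum(L[i][j] for L in logs) / n for j in range(2)] for i in range(2)]
    return sym2_fun(avg, math.exp)
```

Averaging the logarithms keeps the result on the positive-definite cone, unlike a plain arithmetic mean of the matrices.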
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Hui; Chou, Dean-Yi, E-mail: chou@phys.nthu.edu.tw
The solar acoustic waves are modified by the interaction with sunspots. The interaction can be treated as a scattering problem: an incident wave propagating toward a sunspot is scattered by the sunspot into different modes. The absorption cross section and scattering cross section are two important parameters in the scattering problem. In this study, we use the wavefunction of the scattered wave, measured with a deconvolution method, to compute the absorption cross section σ_ab and the scattering cross section σ_sc for the radial orders n = 0–5 for two sunspots, NOAA 11084 and NOAA 11092. In the computation of the cross sections, the random noise and dissipation in the measured acoustic power are corrected. For both σ_ab and σ_sc, the value for NOAA 11092 is greater than that for NOAA 11084, but their overall n dependence is similar: decreasing with n. The ratio of σ_ab of NOAA 11092 to that of NOAA 11084 approximately equals the ratio of the sunspot radii for all n, while the ratio of σ_sc of the two sunspots is greater than the ratio of the sunspot radii and increases with n. This suggests that σ_ab is approximately proportional to the sunspot radius, while the dependence of σ_sc on radius is faster than a linear increase.
NASA Astrophysics Data System (ADS)
Rodriguez, Sarah L.; Lehman, Kathleen
2017-10-01
This theoretical paper explores the need for enhanced, intersectional computing identity theory for the purpose of developing a diverse group of computer scientists for the future. Greater theoretical understanding of the identity formation process specifically for computing is needed in order to understand how students come to understand themselves as computer scientists. To ensure that the next generation of computer scientists is diverse, this paper presents a case for examining identity development intersectionally, understanding the ways in which women and underrepresented students may have difficulty identifying as computer scientists and be systematically oppressed in their pursuit of computer science careers. Through a review of the available scholarship, this paper suggests that creating greater theoretical understanding of the computing identity development process will inform the way in which educational stakeholders consider computer science practices and policies.
Power-constrained supercomputing
NASA Astrophysics Data System (ADS)
Bailey, Peter E.
As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. 
Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. 
We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
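The schedule optimization described in the dissertation (a DVFS state and thread count per computation phase, chosen subject to a power bound) can be illustrated at toy scale by exhaustive search. The configurations and numbers below are hypothetical, and a real formulation would use an LP/ILP solver rather than enumeration:

```python
from itertools import product

# Hypothetical per-phase operating points: (label, execution_time_s, power_W)
CONFIGS = [("low", 4.0, 60.0), ("mid", 3.0, 90.0), ("high", 2.5, 130.0)]

def best_schedule(num_phases, power_cap):
    """Pick one configuration per phase to minimize total time, subject to a
    cap on time-averaged power (a brute-force stand-in for the LP)."""
    best = None
    for combo in product(CONFIGS, repeat=num_phases):
        time = sum(c[1] for c in combo)
        energy = sum(c[1] * c[2] for c in combo)   # J = s * W per phase
        if energy / time <= power_cap:             # average power within budget
            if best is None or time < best[0]:
                best = (time, [c[0] for c in combo])
    return best
```

With a generous cap the fastest configuration wins everywhere; tightening the cap forces the schedule onto slower, more power-efficient operating points, which is exactly the trade-off the LP bounds quantify.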
Teach Graphic Design Basics with PowerPoint
ERIC Educational Resources Information Center
Lazaros, Edward J.; Spotts, Thomas H.
2007-01-01
While PowerPoint is generally regarded as simply software for creating slide presentations, it includes often overlooked--but powerful--drawing tools. Because it is part of the Microsoft Office package, PowerPoint comes preloaded on many computers and thus is already available in many classrooms. Since most computers are not preloaded with good…
NASA Technical Reports Server (NTRS)
Fegley, K. A.; Hayden, J. H.; Rehmann, D. W.
1974-01-01
The feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems is investigated. It is shown that a digital computer may be used in an interactive mode for the design, modeling, analysis, and comparison of power processing systems.
Fatigue life of high-speed ball bearings with silicon nitride balls
NASA Technical Reports Server (NTRS)
Parker, R. J.; Zaretsky, E. V.
1974-01-01
Hot-pressed silicon nitride was evaluated as a rolling-element bearing material. The five-ball fatigue tester was used to test 12.7-mm-diameter silicon nitride balls at maximum Hertz stresses ranging from 4.27 × 10^9 N/m² to 6.21 × 10^9 N/m² at a race temperature of 328 K. The fatigue life of NC-132 hot-pressed silicon nitride was found to be equal to that of typical bearing steels and much greater than that of other ceramic or cermet materials at the same stress levels. A digital computer program was used to predict the fatigue life of 120-mm-bore angular-contact ball bearings containing either steel or silicon nitride balls. The analysis indicates that there is no improvement in the lives of bearings of the same geometry operating at DN values from 2 to 4 million when silicon nitride balls are used in place of steel balls.
A Foldable Lithium-Sulfur Battery.
Li, Lu; Wu, Zi Ping; Sun, Hao; Chen, Deming; Gao, Jian; Suresh, Shravan; Chow, Philippe; Singh, Chandra Veer; Koratkar, Nikhil
2015-11-24
The next generation of deformable and shape-conformable electronics devices will need to be powered by batteries that are not only flexible but also foldable. Here we report a foldable lithium-sulfur (Li-S) rechargeable battery, with the highest areal capacity (∼3 mAh cm⁻²) reported to date among all types of foldable energy-storage devices. The key to this result lies in the use of fully foldable and superelastic carbon nanotube current-collector films and impregnation of the active materials (S and Li) into the current-collectors in a checkerboard pattern, enabling the battery to be folded along two mutually orthogonal directions. The carbon nanotube films also serve as the sulfur entrapment layer in the Li-S battery. The foldable battery showed <12% loss in specific capacity over 100 continuous folding and unfolding cycles. Such shape-conformable Li-S batteries with significantly greater energy density than traditional lithium-ion batteries could power the flexible and foldable devices of the future including laptops, cell phones, tablet computers, surgical tools, and implantable biomedical devices.
NASA Technical Reports Server (NTRS)
1980-01-01
Computer simulations and laboratory tests were used to evaluate the hazard posed by lightning flashes to ground on the Solar Power Satellite rectenna and to make recommendations on a lightning protection system for the rectenna. The distribution of lightning over the lower 48 states of the continental United States was determined, as were the interactions of lightning with the rectenna and the modes in which those interactions could damage the rectenna. Lightning protection was found to be both required and feasible. Several systems of lightning protection were considered and evaluated. These included two systems that employed lightning rods of different lengths placed on top of the rectenna's billboards, and a third, distributed system of the kind used by electric power distribution companies; it consists of short lightning rods all along the length of each billboard that are connected by a horizontal wire above the billboard. The distributed lightning protection system afforded greater protection than the other systems considered and was easier to integrate into the rectenna's structural design.
Efficient Calculation of Exact Exchange Within the Quantum Espresso Software Package
NASA Astrophysics Data System (ADS)
Barnes, Taylor; Kurth, Thorsten; Carrier, Pierre; Wichmann, Nathan; Prendergast, David; Kent, Paul; Deslippe, Jack
Accurate simulation of condensed matter at the nanoscale requires careful treatment of the exchange interaction between electrons. In the context of plane-wave DFT, these interactions are typically represented through the use of approximate functionals. Greater accuracy can often be obtained through the use of functionals that incorporate some fraction of exact exchange; however, evaluation of the exact exchange potential is often prohibitively expensive. We present an improved algorithm for the parallel computation of exact exchange in Quantum Espresso, an open-source software package for plane-wave DFT simulation. Through the use of aggressive load balancing and on-the-fly transformation of internal data structures, our code exhibits speedups of approximately an order of magnitude for practical calculations. Additional optimizations are presented targeting the many-core Intel Xeon Phi "Knights Landing" architecture, which largely powers NERSC's new Cori system. We demonstrate the successful application of the code to difficult problems, including simulation of water at a platinum interface and computation of the X-ray absorption spectra of transition metal oxides.
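The load-balancing ingredient can be illustrated with the standard longest-processing-time greedy heuristic: sort work items by estimated cost and repeatedly assign each to the least-loaded rank. The costs below are hypothetical, and this is a generic sketch rather than the Quantum Espresso scheduler:

```python
import heapq

def lpt_balance(task_costs, n_workers):
    """Longest-processing-time greedy: assign each task (largest first)
    to the currently least-loaded worker. Returns {worker: [task indices]}."""
    heap = [(0.0, w) for w in range(n_workers)]  # (current load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for cost, task in sorted(((c, i) for i, c in enumerate(task_costs)),
                             reverse=True):
        load, w = heapq.heappop(heap)
        assignment[w].append(task)
        heapq.heappush(heap, (load + cost, w))
    return assignment
```

For exact-exchange workloads the "tasks" would be band or band-pair blocks with nonuniform costs; balancing them keeps ranks from idling at synchronization points.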
Stability basin estimates fall risk from observed kinematics, demonstrated on the Sit-to-Stand task.
Shia, Victor; Moore, Talia Yuki; Holmes, Patrick; Bajcsy, Ruzena; Vasudevan, Ram
2018-04-27
The ability to quantitatively measure stability is essential to ensuring the safety of locomoting systems. While the response to perturbation directly reflects the stability of a motion, this experimental method puts human subjects at risk. Unfortunately, existing indirect methods for estimating stability from unperturbed motion have been shown to have limited predictive power. This paper leverages recent advances in dynamical systems theory to accurately estimate the stability of human motion without requiring perturbation. This approach relies on kinematic observations of a nominal Sit-to-Stand motion to construct an individual-specific dynamic model, input bounds, and feedback control that are then used to compute the set of perturbations from which the model can recover. This set, referred to as the stability basin, was computed for 14 individuals, and was able to successfully differentiate between less and more stable Sit-to-Stand strategies for each individual with greater accuracy than existing methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
Zajd, Henry J.
2007-01-01
The need for accurate real-time discharge in the International Niagara River hydro power system requires reliable, accurate and reproducible data. The U.S. Geological Survey has been widely using Acoustic Doppler Current Profilers (ADCP) to accurately measure discharge in riverine channels since the mid-1990s. The use of the ADCP to measure discharge has remained largely untested at hydroelectric-generation facilities such as the New York Power Authority's (NYPA) Niagara Power Project in Niagara Falls, N.Y. This facility has a large, engineered diversion channel with the capacity of high volume discharges in excess of 100,000 cubic feet per second (ft3/s). Facilities such as this could benefit from the use of an ADCP, if the ADCP discharge measurements prove to be more time effective and accurate than those obtained from the flow-calculation techniques that are currently used. Measurements of diversion flow by an ADCP in the 'Pant Leg' diversion channel at the Niagara Power Project were made on November 6, 7, and 8, 2006, and compared favorably (within 1 percent) with those obtained concurrently by a conventional Price-AA current-meter measurement during one of the ADCP measurement sessions. The mean discharge recorded during each 2-hour individual ADCP measurement session compared favorably with (3.5 to 6.8 percent greater than) the discharge values computed by the flow-calculation method presently in use by NYPA. The use of ADCP technology to measure discharge could ultimately permit increased power-generation efficiency at the NYPA Niagara Falls Power Project by providing improved predictions of the amount of water (and thus the power output) available.
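Conventional current-meter measurements of the kind used for the Price-AA comparison are commonly reduced with the mid-section velocity-area method: each measurement vertical contributes its velocity times its depth times half the distance between its neighboring verticals. A generic sketch, not NYPA's flow-calculation method:

```python
def midsection_discharge(stations):
    """Mid-section velocity-area discharge. `stations` is an ordered list of
    (distance_across_channel_m, depth_m, mean_velocity_mps) verticals;
    each vertical represents a panel extending halfway to its neighbors."""
    q = 0.0
    for i, (x, depth, vel) in enumerate(stations):
        left = stations[i - 1][0] if i > 0 else x
        right = stations[i + 1][0] if i < len(stations) - 1 else x
        q += vel * depth * (right - left) / 2.0
    return q
```

An ADCP effectively performs the same velocity-area integration continuously along the transect, which is why the two methods can be compared panel for panel.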
Heterotic computing: exploiting hybrid computational devices.
Kendon, Viv; Sebald, Angelika; Stepney, Susan
2015-07-28
Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
77 FR 11107 - Combined Notice of Filings #2
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-24
...-003. Applicants: Kansas City Power & Light Company, KCP&L Greater Missouri Operations Company. Description: Supplemental Information of Kansas City Power & Light Company and KCP&L Greater Missouri...(a)(2)(iii: Second Amended and Restated Participation Agreement and 230kV Attachment Agreement to be...
Peyman, A; Khalid, M; Calderon, C; Addison, D; Mee, T; Maslanyj, M; Mann, S
2011-06-01
Laboratory measurements have been carried out with examples of Wi-Fi devices used in UK schools to evaluate the radiofrequency power densities around them and the total emitted powers. Unlike previous studies, a 20 MHz bandwidth signal analyzer was used, enabling the whole Wi-Fi signal to be captured and monitored. The radiation patterns of the laptops had certain similarities, including a minimum toward the torso of the user and two maxima symmetrically opposed across a vertical plane bisecting the screen and keyboard. The maxima would have resulted from separate antennas mounted behind the top left and right corners of the laptop screens. The patterns for access points were more symmetrical with generally higher power densities at a given distance. The spherically-integrated radiated power (IRP) ranged from 5 to 17 mW for 15 laptops in the 2.45 GHz band and from 1 to 16 mW for eight laptops in the 5 GHz band. For practical reasons and because access points are generally wall-mounted with beams directed into the rooms, their powers were integrated over a hemisphere. These ranged from 3 to 28 mW for 12 access points at 2.4 GHz and from 3 to 29 mW for six access points at 5 GHz. In addition to the spherical measurements of IRP, power densities were measured at distances of 0.5 m and greater from the devices, and consistent with the low radiated powers, these were all much lower than the ICNIRP reference level.
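Spherically integrating a measured power-density pattern to obtain the total radiated power amounts to evaluating IRP = ∮ S(θ, φ) r² sin θ dθ dφ. The sketch below is a generic midpoint-rule integration with a hypothetical pattern in the test, not the authors' measurement procedure:

```python
import math

def integrated_radiated_power(power_density, r=1.0, n_theta=90, n_phi=180):
    """Integrate a power-density pattern S(theta, phi) [W/m^2], sampled on a
    sphere of radius r [m], to estimate total radiated power [W]."""
    total = 0.0
    dt, dp = math.pi / n_theta, 2.0 * math.pi / n_phi
    for i in range(n_theta):
        theta = (i + 0.5) * dt          # midpoint in polar angle
        for j in range(n_phi):
            phi = (j + 0.5) * dp        # midpoint in azimuth
            total += power_density(theta, phi) * r * r * math.sin(theta) * dt * dp
    return total
```

For wall-mounted access points the same sum taken over a hemisphere (θ from 0 to π/2) gives the hemispherically integrated power reported above.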
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.
Expression Templates for Truncated Power Series
NASA Astrophysics Data System (ADS)
Cary, John R.; Shasharina, Svetlana G.
1997-05-01
Truncated power series are used extensively in accelerator transport modeling for rapid tracking and analysis of nonlinearity. Such mathematical objects are naturally represented computationally as objects in C++. This is more intuitive and produces more transparent code through operator overloading. However, C++ object use often comes with a computational speed loss due, e.g., to the creation of temporaries. We have developed a subset of truncated power series expression templates (http://monet.uwaterloo.ca/blitz/). Such expression templates use the powerful template-processing facility of C++ to combine complicated expressions into series operations that execute more rapidly. We compare computational speeds with existing truncated power series libraries.
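The series arithmetic being accelerated is easy to state with plain operator overloading. The sketch below (Python rather than C++, so it deliberately shows the naive temporary-creating style that expression templates eliminate) multiplies coefficient arrays with truncation at a fixed order:

```python
class TPS:
    """Truncated power series: c[0] + c[1]*x + ... + c[order]*x^order."""

    def __init__(self, coeffs, order=3):
        self.order = order
        self.c = (list(coeffs) + [0.0] * (order + 1))[:order + 1]

    def __add__(self, other):
        # Each overloaded operator allocates a new TPS temporary; expression
        # templates exist precisely to fuse chains of these into one loop.
        return TPS([a + b for a, b in zip(self.c, other.c)], self.order)

    def __mul__(self, other):
        out = [0.0] * (self.order + 1)
        for i, a in enumerate(self.c):
            for j, b in enumerate(other.c):
                if i + j <= self.order:
                    out[i + j] += a * b  # terms beyond the order are discarded
        return TPS(out, self.order)
```

In the expression-template version, `(one + x) * (one + x)` would build a compile-time expression tree and evaluate all coefficients in a single pass, avoiding the intermediate objects this sketch creates.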
Systems and methods for rapid processing and storage of data
Stalzer, Mark A.
2017-01-24
Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.
NASA Astrophysics Data System (ADS)
Jean-Jumeau, Rene
1993-03-01
Voltage collapse (VC) is generally caused by either of two types of system disturbances: load variations and contingencies. In this thesis, we study VC resulting from load variations; this is termed static voltage collapse. The thesis deals with this type of voltage collapse in electrical power systems from a stationary-bifurcation viewpoint, associating it with the occurrence of saddle-node bifurcations (SNB) in the system. Approximate models are generically used in most VC analyses. We consider the validity of these models for the study of SNB and, thus, of voltage collapse. We justify the use of the saddle-node bifurcation as a model for VC in power systems. In particular, we prove that this leads to the definition of a model and, since load demand is used as a parameter for that model, of a mode of parameterization of that model that represents actual power demand variations within the power system network. Ill-conditioning of the set of nonlinear equations defining a dynamical system is a generic occurrence near the SNB point. We suggest a reparameterization of the set of nonlinear equations that avoids this problem. A new indicator of the proximity of voltage collapse, the voltage collapse index (VCI), is developed. A new (n + 1)-dimensional set of characteristic equations for the computation of the exact SNB point, replacing the standard (2n + 1)-dimensional one, is presented for general parameter-dependent nonlinear dynamical systems. These results are then applied to electric power systems for the analysis and prediction of voltage collapse. The new methods offer the potential of faster computation and greater flexibility. For reasons of theoretical development and clarity, the preceding methodologies are developed under the assumption of the absence of constraints on the system parameters and states, and the full differentiability of the functions defining the power system model.
In the latter part of this thesis, we relax these assumptions in order to develop a framework and new formulation for application of the tools previously developed for the analysis and prediction of voltage collapse in practical power system models which include numerous constraints and discontinuities. Illustrations and numerical simulations throughout the thesis support our results.
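The saddle-node mechanism can be seen in the one-dimensional normal-form model ẋ = p − sin x: equilibria exist only while the load parameter p ≤ 1, and at p = 1 the Jacobian cos x vanishes (the nose of the PV curve). The sketch below is a toy proximity indicator in that spirit, not the thesis's VCI for full network models:

```python
import math

def equilibrium(p):
    """Newton solve of sin(x) = p on the stable branch x in [0, pi/2].
    Returns None past the saddle-node point p = 1 (no equilibrium exists)."""
    if p > 1.0:
        return None
    x = 0.0
    for _ in range(100):
        dfdx = math.cos(x)          # Jacobian of f(x, p) = p - sin(x)
        if abs(dfdx) < 1e-12:
            break                   # singular exactly at the saddle-node point
        x -= (math.sin(x) - p) / dfdx
    return x

def collapse_index(p):
    """|df/dx| at the operating point; it shrinks to zero as the load
    parameter p approaches the saddle-node value, signaling proximity to VC."""
    x = equilibrium(p)
    return None if x is None else abs(math.cos(x))
```

As p grows toward 1 the indicator decays to zero, and the Newton solve itself becomes ill-conditioned, which mirrors the ill-conditioning near the SNB point that motivates the reparameterization above.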
A comparison of peak power in the shoulder press and shoulder throw.
Dalziel, W M; Neal, R J; Watts, M C
2002-09-01
The ability to generate peak power is central to performance in many sports. Currently two distinct resistance training methods are used to develop peak power: the heavy weight/slow velocity and light weight/fast velocity regimes. When using the light weight/fast velocity power training method, it was proposed that peak power would be greater in a shoulder throw exercise compared with a normal shoulder press. Nine males performed three lifts in the shoulder press and shoulder throw at 30% and 40% of their one repetition maximum (1RM). These lifts were performed identically, except for the release of the bar in the throw condition. A potentiometer attached to the bar measured displacement and duration of the lifts. The time of bar release in the shoulder throw was determined with a pressure switch. ANOVA was used to examine statistically significant differences, with the level of acceptance set at p < 0.05. Peak power was found to be significantly greater in the shoulder throw at 30% of 1RM [F(1, 23) = 2.325, p < 0.05] and at 40% of 1RM [F(1, 23) = 2.905, p < 0.05] compared with values recorded for the respective shoulder presses. Peak power was also greater in the 30% of 1RM shoulder throw (510 +/- 103 W) than in the 40% of 1RM shoulder press (471 +/- 96 W). Peak power was produced significantly later in the shoulder throw than in the shoulder press. This differing power reflected the greater bar velocity of the shoulder throw at both assigned weights compared with the shoulder press.
A dc model for power switching transistors suitable for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Wilson, P. M.; George, R. T., Jr.; Owen, H. A., Jr.; Wilson, T. G.
1979-01-01
The proposed dc model for bipolar junction power switching transistors is based on measurements that may be made with standard laboratory equipment. Those nonlinearities which are of importance to power electronics design are emphasized. Measurement procedures are discussed in detail. A model formulation adapted for use with a computer program is presented, and a comparison between actual and computer-generated results is made.
Changing computing paradigms towards power efficiency.
Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro
2014-06-28
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
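The low/high-precision combination can be illustrated with classic iterative refinement: factor and solve in float32 (the cheap, power-efficient step) and compute residuals in float64. A sketch under assumed dense-solver kernels (the paper's actual kernels and precision choices may differ):

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b by iterative refinement: low-precision solves for
    the heavy lifting, double-precision residuals for accuracy."""
    A32 = A.astype(np.float32)
    # Initial solve entirely in single precision (the power-hungry step).
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # float64 residual
        d = np.linalg.solve(A32, r.astype(np.float32))  # cheap correction
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 50.0 * np.eye(50)  # well conditioned
b = rng.standard_normal(50)
x = mixed_precision_solve(A, b)
```

For well-conditioned systems the refined solution reaches double-precision accuracy while most arithmetic runs in single precision.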
Group Velocity Dispersion Curves from Wigner-Ville Distributions
NASA Astrophysics Data System (ADS)
Lloyd, Simon; Bokelmann, Goetz; Sucic, Victor
2013-04-01
With the widespread adoption of ambient noise tomography, and the increasing number of local earthquakes recorded worldwide due to dense seismic networks and many very dense temporary experiments, we consider it worthwhile to evaluate alternative methods to measure surface wave group velocity dispersion curves. Moreover, the increased computing power of even a simple desktop computer makes it feasible to routinely use methods other than the typically employed multiple filtering technique (MFT). To that end we perform tests with synthetic and observed seismograms using the Wigner-Ville distribution (WVD) frequency time analysis, and compare dispersion curves measured with WVD and MFT with each other. Initial results suggest WVD to be at least as good as MFT at measuring dispersion, albeit at a greater computational expense. We therefore need to investigate if, and under which circumstances, WVD yields better dispersion curves than MFT, before considering routinely applying the method. As both MFT and WVD generally work well for teleseismic events and at longer periods, we explore how well the WVD method performs at shorter periods and for local events with smaller epicentral distances. Such dispersion information could potentially be beneficial for improving velocity structure resolution within the crust.
NASA Astrophysics Data System (ADS)
Hut, Rolf; Amisigo, Barnabas A.; Steele-Dunne, Susan; van de Giesen, Nick
2015-12-01
Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF) is introduced as a variant on the Ensemble Kalman Filter (EnKF). RumEnKF differs from EnKF in that it does not store the entire ensemble, but rather only saves the first two moments of the ensemble distribution. In this way, the number of ensemble members that can be calculated is less dependent on available memory, and mainly on available computing power (CPU). RumEnKF is developed to make optimal use of current-generation supercomputer architecture, where the number of available floating point operations (flops) increases more rapidly than the available memory and where inter-node communication can quickly become a bottleneck. RumEnKF reduces the used memory compared to the EnKF when the number of ensemble members is greater than half the number of state variables. In this paper, three simple models are used (auto-regressive, low dimensional Lorenz and high dimensional Lorenz) to show that RumEnKF performs similarly to the EnKF. Furthermore, it is also shown that increasing the ensemble size has a similar impact on the estimation error in both algorithms.
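Saving only the first two moments can be done with a streaming (Welford-style) update, so no ensemble member has to be retained after it is processed. A sketch of the idea, not the authors' implementation (for high-dimensional states one would store a reduced representation rather than the full covariance matrix):

```python
import numpy as np

class StreamingMoments:
    """Accumulate ensemble mean and covariance one member at a time,
    so the full ensemble never has to be held in memory."""
    def __init__(self, n_state):
        self.n = 0
        self.mean = np.zeros(n_state)
        self.m2 = np.zeros((n_state, n_state))  # sum of outer products

    def add(self, member):
        self.n += 1
        delta = member - self.mean
        self.mean += delta / self.n
        # Welford update: uses the member's deviation before and after
        # the mean update, keeping the accumulation numerically stable.
        self.m2 += np.outer(delta, member - self.mean)

    def covariance(self):
        return self.m2 / (self.n - 1)

rng = np.random.default_rng(1)
ensemble = rng.standard_normal((500, 3))
acc = StreamingMoments(3)
for member in ensemble:
    acc.add(member)          # each member can be discarded after this
```

The accumulated moments match what one would compute from the stored ensemble, which is the memory-saving trade the abstract describes.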
Lee, Cheens; Robinson, Kerin M; Wendt, Kate; Williamson, Dianne
The unimpeded functioning of hospital Health Information Services (HIS) is essential for patient care, clinical governance, organisational performance measurement, funding and research. In an investigation of hospital Health Information Services' preparedness for internal disasters, all hospitals in the state of Victoria with the following characteristics were surveyed: they have a Health Information Service/Department; there is a Manager of the Health Information Service/Department; and their inpatient capacity is greater than 80 beds. Fifty percent of the respondents have experienced an internal disaster within the past decade, the majority affecting the Health Information Service. The most commonly occurring internal disasters were computer system failure and floods. Two-thirds of the hospitals have internal disaster plans; the most frequently occurring scenarios provided for are computer system failure, power failure and fire. More large hospitals have established back-up systems than medium- and small-size hospitals. Fifty-three percent of hospitals have a recovery plan for internal disasters. Hospitals typically self-rate as having a 'medium' level of internal disaster preparedness. Overall, large hospitals are better prepared for internal disasters than medium and small hospitals, and preparation for disruption of computer systems and medical record services is relatively high on their agendas.
Einstein@Home search for periodic gravitational waves in early S5 LIGO data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, B. P.; Abbott, R.; Adhikari, R.
This paper reports on an all-sky search for periodic gravitational waves from sources such as deformed isolated rapidly spinning neutron stars. The analysis uses 840 hours of data from 66 days of the fifth LIGO science run (S5). The data were searched for quasimonochromatic waves with frequencies f in the range from 50 to 1500 Hz, with a linear frequency drift ḟ (measured at the solar system barycenter) in the range −f/τ
Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment
ERIC Educational Resources Information Center
Lin, Jing-Wen
2016-01-01
This study adopted a quasi-experimental design with follow-up interview to develop a computer-based two-tier assessment (CBA) regarding the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using…
The influence of partner drug use and relationship power on treatment engagement.
Riehman, Kara S; Iguchi, Martin Y; Zeller, Michelle; Morral, Andrew R
2003-05-01
Substance-using intimate partners negatively influence individuals' substance abuse treatment engagement and drug use, but little else is known about effects of intimate relationships on treatment. We examine how relationship dynamics (power, control, dependence, insecurity and decision-making power) influence treatment engagement, and whether this differs by gender and partner drug use. Sixty-four heroin users (42 men, 22 women) receiving methadone detoxification treatment in Los Angeles were interviewed at treatment entry and submitted daily diaries of drug use throughout the 21-day treatment. Total number of reported heroin-free days in the first eight treatment days was the dependent variable. Bivariate analyses revealed that, compared to men, women were more likely to have substance-using partners, reported greater power over a partner and greater household decision-making power in their relationships. Multivariate analysis indicated that individuals whose partners had more control over them reported fewer days abstinent. Among individuals with heroin-using partners, greater household decision-making power was associated with more days abstinent, but there was no association for individuals with non-using partners. Relationship power dynamics may be important influences on the treatment process, and some dimensions of power may interact with partner drug use status.
An optimal adder-based hardware architecture for the DCT/SA-DCT
NASA Astrophysics Data System (ADS)
Kinane, Andrew; Muresan, Valentin; O'Connor, Noel
2005-07-01
The explosive growth of the mobile multimedia industry has accentuated the need for efficient VLSI implementations of the associated computationally demanding signal processing algorithms. This need becomes greater as end-users demand increasingly enhanced features and more advanced underpinning video analysis. One such feature is object-based video processing as supported by the MPEG-4 core profile, which allows content-based interactivity. MPEG-4 has many computationally demanding underlying algorithms, an example of which is the Shape Adaptive Discrete Cosine Transform (SA-DCT). The dynamic nature of the SA-DCT processing steps poses significant VLSI implementation challenges, and many of the previously proposed approaches use area- and power-consumptive multipliers. Most also ignore the subtleties of the packing steps and manipulation of the shape information. We propose a new multiplier-less serial datapath based solely on adders and multiplexers to improve area and power. The adder cost is minimised by employing resource re-use methods. The number of (physical) adders used has been derived using a common sub-expression elimination algorithm. Additional energy efficiency is factored into the design by employing guarded evaluation and local clock gating. Our design implements the SA-DCT packing with minimal switching using efficient addressing logic with a transpose memory RAM. The entire design has been synthesized using TSMC 0.09µm TCBN90LP technology, yielding a gate count of 12028 for the datapath and its control logic.
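The multiplier-less principle is that a constant multiplication decomposes into shifts and adds, one adder per non-zero bit of the coefficient (fewer after common sub-expression elimination). An illustrative software sketch of the hardware idea:

```python
def shift_add_multiply(x, coeff_bits):
    """Multiply x by a fixed coefficient using only shifts and adds.

    coeff_bits: the set of bit positions set in the constant,
    e.g. {0, 2, 4} encodes 2^4 + 2^2 + 2^0 = 21.
    """
    acc = 0
    for k in coeff_bits:
        acc += x << k   # one adder per non-zero coefficient bit
    return acc

# 21 = 2^4 + 2^2 + 2^0: three adders replace one general multiplier.
result = shift_add_multiply(7, {0, 2, 4})   # 7 * 21
```

In hardware, common sub-expression elimination shares intermediate sums between the fixed DCT coefficients, which is how the paper's adder count is minimised; this sketch shows only the single-coefficient case.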
HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation
NASA Technical Reports Server (NTRS)
Sterling, Thomas; Bergman, Larry
2000-01-01
Computational Aero Sciences and other numeric intensive computation disciplines demand computing throughputs substantially greater than the Teraflops scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that in combination with sufficient resolution and advanced adaptive techniques may force performance requirements towards Petaflops. This will be especially true for compute intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption.
The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops scale systems. To achieve high-sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.
Wave Engine Topping Cycle Assessment
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
1996-01-01
The performance benefits derived by topping a gas turbine engine with a wave engine are assessed. The wave engine is a wave rotor that produces shaft power by exploiting gas dynamic energy exchange and flow turning. The wave engine is added to the baseline turboshaft engine while keeping high-pressure-turbine inlet conditions, compressor pressure ratio, engine mass flow rate, and cooling flow fractions fixed. Related work has focused on topping with pressure-exchangers (i.e., wave rotors that provide pressure gain with zero net shaft power output); however, more energy can be added to a wave-engine-topped cycle, leading to greater engine specific-power-enhancement. The energy addition occurs at a lower pressure in the wave-engine-topped cycle; thus the specific-fuel-consumption-enhancement effected by ideal wave engine topping is slightly lower than that effected by ideal pressure-exchanger topping. At a component level, however, flow turning affords the wave engine a degree-of-freedom relative to the pressure-exchanger that enables a more efficient match with the baseline engine. In some cases, therefore, the SFC-enhancement by wave engine topping is greater than that by pressure-exchanger topping. An ideal wave-rotor characteristic is used to identify key wave engine design parameters and to contrast the wave engine and pressure-exchanger topping approaches. An aerodynamic design procedure is described in which wave engine design-point performance levels are computed using a one-dimensional wave rotor model. Wave engines using various wave cycles are considered including two-port cycles with on-rotor combustion (valved-combustors) and reverse-flow and through-flow four-port cycles with heat addition in conventional burners. A through-flow wave cycle design with symmetric blading is used to assess engine performance benefits.
The wave-engine-topped turboshaft engine produces 16% more power than does a pressure-exchanger-topped engine under the specified topping constraints. Positive and negative aspects of wave engine topping in gas turbine engines are identified.
DET/MPS - The GSFC Energy Balance Programs
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1994-01-01
Direct Energy Transfer (DET) and MultiMission Spacecraft Modular Power System (MPS) computer programs perform mathematical modeling and simulation to aid in design and analysis of DET and MPS spacecraft power system performance in order to determine energy balance of subsystem. DET spacecraft power system feeds output of solar photovoltaic array and nickel cadmium batteries directly to spacecraft bus. In MPS system, Standard Power Regulator Unit (SPRU) utilized to operate array at array's peak power point. DET and MPS perform minute-by-minute simulation of performance of power system. Results of simulation focus mainly on output of solar array and characteristics of batteries. Although both packages limited in terms of orbital mechanics, they have sufficient capability to calculate data on eclipses and performance of arrays for circular or near-circular orbits. DET and MPS written in FORTRAN-77 with some VAX FORTRAN-type extensions. Both available in three versions: GSC-13374, for DEC VAX-series computers running VMS; GSC-13443, for UNIX-based computers; and GSC-13444, for Apple Macintosh computers.
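The minute-by-minute energy-balance loop at the core of such programs can be sketched as follows (hypothetical orbit and power figures; the real DET/MPS programs model battery characteristics and SPRU peak-power tracking in far more detail):

```python
def energy_balance(orbit_minutes, eclipse_start, eclipse_end,
                   array_power_w, load_w, battery_wh, capacity_wh):
    """Minute-by-minute energy balance over one orbit.

    Returns end-of-orbit battery state of charge (Wh); raises if the
    battery is depleted during eclipse.
    """
    soc = battery_wh
    for minute in range(orbit_minutes):
        in_eclipse = eclipse_start <= minute < eclipse_end
        solar = 0.0 if in_eclipse else array_power_w
        net_w = solar - load_w                      # surplus charges battery
        soc = min(capacity_wh, soc + net_w / 60.0)  # Wh per minute
        if soc < 0:
            raise RuntimeError("battery depleted at minute %d" % minute)
    return soc

# Hypothetical 95-minute orbit with a 35-minute eclipse at the end
soc = energy_balance(95, 60, 95, array_power_w=600.0, load_w=350.0,
                     battery_wh=200.0, capacity_wh=400.0)
```

Comparing the end-of-orbit state of charge against the starting value is the "energy balance" determination the abstract refers to: a subsystem is in balance when the battery recovers at least as much energy in sunlight as it discharges in eclipse.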
NASA Astrophysics Data System (ADS)
Stockton, Gregory R.
2011-05-01
Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of data computation in a computer center is becoming limited by the amount of available power, space and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are being annually spent to keep data centers cool. The cooling and air flows dynamically change away from any predicted 3-D computational fluid dynamic modeling during construction and as time goes by, and the efficiency and effectiveness of the actual cooling rapidly departs even farther from predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and make appropriate corrections and repairs, the required power for data centers can be dramatically reduced which reduces costs and also improves reliability.
Computer memory power control for the Galileo spacecraft
NASA Technical Reports Server (NTRS)
Detwiler, R. C.
1983-01-01
The developmental history, major design drivers, and final topology of the computer memory power system on the Galileo spacecraft are described. A unique method of generating memory backup power directly from the fault current drawn during a spacecraft power overload or fault condition allows this system to provide continuous memory power. This concept provides a unique solution to the problem of volatile memory loss without the use of a battery or other large energy storage elements usually associated with uninterrupted power supply designs.
77 FR 14510 - Combined Notice of Filings #2
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-12
...-2097-003. Applicants: Kansas City Power & Light Company, KCP&L Greater Missouri Operations Company. Description: Supplement to Change-in-Status Filing of Kansas City Power & Light Company and KCP&L Greater...Corp. Description: Cancellation of Alpental Blue Mountain E&P Agreement to be effective 5/6/2012. Filed...
47 CFR 15.709 - General technical requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... between the transmitter and the antenna. If transmitting antennas of directional gain greater than 6 dBi... gain of the antenna exceeds 6 dBi. (2) For personal/portable TVBDs, the maximum conducted output power... conducted output power shall not exceed 40 milliwatts. If transmitting antennas of directional gain greater...
75 FR 52521 - Combined Notice of Filings #1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-26
...; ER09-304-004. Applicants: Kansas City Power & Light Company, KCP&L Greater Missouri Operations Company. Description: Kansas City Power & Light Company and KCP&L Greater Missouri Operations Company Notice of Non... Agreement 1647, LGIA for SCE and Desert Sunlight to be effective 8/10/ 2010. Filed Date: 08/19/2010...
Mechanical energy and power flow analysis of wheelchair use with different camber settings.
Huang, Yueh-Chu; Guo, Lan-Yuen; Tsai, Chung-Ying; Su, Fong-Chin
2013-04-01
It has been suggested that minimisation of energy cost is one of the primary determinants of wheelchair designs. Wheel camber is one important parameter related to wheelchair design and its angle may affect usability during manual propulsion. However, there is little available literature addressing the effect of wheel camber on the mechanical energy or power flow involved in manual wheelchair propulsion. Twelve normal subjects (mean age, 22.3 years; SD, 1.6 years) participated in this study. A video-tracking system and an instrumented wheel were used to collect 3D kinematic and kinetic data. Wheel camber of 0° and 15° was chosen to examine the difference between mechanical power and power flow of the upper extremity during manual wheelchair propulsion. The work calculated from power flow, and the discrepancy between the mechanical work and the power-flow work of the upper extremity, had significantly greater values with increased camber. The upper arm had a larger active muscle power compared with that in the forearm and hand segments. While propelling with the increased camber, the magnitude of both the proximal and distal joint power and proximal muscle power was increased in all three segments. Thus, propelling a wheel with camber not only requires a greater energy cost but also involves greater energy loss.
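The power-flow quantities used in such analyses combine joint-force power (F·v) and muscle power (M·ω) at each end of a segment. A minimal sketch of the standard formulation (illustrative values, not the study's data):

```python
import numpy as np

def segment_power_flow(F_prox, v_prox, M_prox, w_prox,
                       F_dist, v_dist, M_dist, w_dist):
    """Rate of mechanical energy flow into a body segment through its
    proximal and distal joints: joint-force power (F . v) plus muscle
    power (M . w) at each end. Inputs are the 3-D force (N) and moment
    (Nm) acting ON the segment at each joint, with the corresponding
    joint velocity (m/s) and segment angular velocity (rad/s)."""
    prox = np.dot(F_prox, v_prox) + np.dot(M_prox, w_prox)
    dist = np.dot(F_dist, v_dist) + np.dot(M_dist, w_dist)
    return prox + dist  # watts

# Example: 10 N force moving at 0.5 m/s plus a 2 Nm moment at 1 rad/s
# at the proximal joint, nothing at the distal joint.
flow = segment_power_flow([10, 0, 0], [0.5, 0, 0], [2, 0, 0], [1, 0, 0],
                          [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0])
```

Summing these terms over a propulsion cycle gives the power-flow work; its discrepancy from the segment's mechanical energy change is the energy-loss measure the abstract reports increasing with camber.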
NASA Astrophysics Data System (ADS)
Orhan, Kadir; Mayerle, Roberto
2017-04-01
Climate change is an urgent and potentially irreversible threat to human societies and the planet and thus requires an effective and appropriate response, with a view to accelerating the reduction of global greenhouse gas emissions. At this point, a worldwide shift to renewable energy is crucial. In this study, a methodology comprising estimates of power yield, evaluation of the effects of power extraction on flow conditions, and near-field investigations to deliver wake characteristics, recovery and interactions is described and applied to several straits in Indonesia. Site selection is done with high-resolution, three-dimensional flow models providing sufficient spatiotemporal coverage. Much attention has been given to the meteorological forcing, and conditions at the open sea boundaries to adequately capture the density gradients and flow fields. Model verifications using tidal records show excellent agreement. Sites with adequate depth for the energy conversion using horizontal axis tidal turbines, average kinetic power density greater than 0.5 kW/m2, and surface area larger than 0.5 km2 are defined as energy hotspots. Spatial variation of the average extractable electric power is determined, and annual tidal energy resource is estimated for the straits in question. The results showed that the potential for tidal power generation in Indonesia is likely to exceed previous predictions, reaching around 4,800 MW. Models with higher resolutions have been developed to assess the impacts of devices on flow conditions and to resolve near-field turbine wakes in greater detail. The energy is assumed to be removed uniformly by sub-grid scale arrays of turbines. An additional drag force resulting in dissipation of the pre-existing kinetic power from 10% to 60% within a flow cross-section is introduced to capture the impacts.
The k-ε model, a second-order turbulence closure model, is selected to include the effects of turbulent kinetic energy and turbulent kinetic energy dissipation. Preliminary results show the effectiveness of the method to capture the effects of power extraction, and wake characteristics and recovery, reasonably well at low computational cost. It was found that although there is no significant change regarding water levels, an impact has been observed on current velocities as a result of the velocity profile adjusting to the increased momentum transfer. It was also seen that, depending on the level of energy dissipation, currently recommended tidal farm configurations can be conservative regarding the spacing of the tidal turbines.
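The hotspot screening criterion rests on the kinetic power density ½ρv³. A short sketch of that computation (illustrative current speeds; the study's screening also applies depth and surface-area thresholds):

```python
import numpy as np

RHO_SEAWATER = 1025.0  # kg/m^3, a typical value

def kinetic_power_density(speeds):
    """Time-averaged kinetic power density 0.5 * rho * v^3 (W/m^2)
    from a series of current speeds (m/s). Because the average is over
    v^3, brief fast flows dominate the resource estimate."""
    speeds = np.asarray(speeds, dtype=float)
    return 0.5 * RHO_SEAWATER * np.mean(speeds ** 3)

# Hypothetical tidal-current samples over a cycle (m/s)
density = kinetic_power_density([0.4, 0.9, 1.3, 1.6, 1.1])
hotspot = density > 500.0   # the 0.5 kW/m^2 criterion above
```

The cubic dependence is why hotspot identification needs high-resolution flow models: averaging speeds before cubing would badly underestimate the resource.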
Grid Computing in K-12 Schools. Soapbox Digest. Volume 3, Number 2, Fall 2004
ERIC Educational Resources Information Center
AEL, 2004
2004-01-01
Grid computing allows large groups of computers (either in a lab, or remote and connected only by the Internet) to extend extra processing power to each individual computer to work on components of a complex request. Grid middleware, recognizing priorities set by systems administrators, allows the grid to identify and use this power without…
Computing the Feasible Spaces of Optimal Power Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molzahn, Daniel K.
The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
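The discretize-and-check idea can be illustrated on a toy 2-bus system by brute force: sample voltage magnitude and angle, and keep points where the lossless power flow equations balance a fixed load within tolerance and the voltage limit holds. (Illustrative only: the paper solves the power flow equations exactly with homotopy continuation and prunes with convex relaxations rather than grid-searching; all parameters below are hypothetical.)

```python
import numpy as np

def feasible_points(x_line=0.1, p_load=1.0, q_load=0.2,
                    v_min=0.95, v_max=1.05, n=200, tol=0.05):
    """Grid-sample the (V2, theta2) plane of a 2-bus system with a
    slack bus at V1 = 1.0 p.u., angle 0, and a lossless line of
    reactance x_line; keep points satisfying load balance and limits."""
    pts = []
    for v2 in np.linspace(0.8, 1.2, n):
        if not (v_min <= v2 <= v_max):
            continue                     # voltage-limit "pruning"
        for th in np.linspace(-0.5, 0.5, n):
            p = (v2 / x_line) * np.sin(-th)         # P delivered to bus 2
            q = (v2 / x_line) * (np.cos(th) - v2)   # Q delivered to bus 2
            if abs(p - p_load) < tol and abs(q - q_load) < tol:
                pts.append((v2, th))
    return pts

pts = feasible_points()
```

Even in this toy case the surviving points cluster around the single high-voltage power flow solution, hinting at why exhaustive characterization of the feasible space requires finding all power flow solutions at each discretization point.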
Evaluation of the Lattice-Boltzmann Equation Solver PowerFLOW for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Lockard, David P.; Luo, Li-Shi; Singer, Bart A.; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
A careful comparison of the performance of a commercially available Lattice-Boltzmann Equation solver (PowerFLOW) was made with a conventional, block-structured computational fluid-dynamics code (CFL3D) for the flow over a two-dimensional NACA-0012 airfoil. The results suggest that the version of PowerFLOW used in the investigation produced solutions with large errors in the computed flow field; these errors are attributed to inadequate resolution of the boundary layer for reasons related to grid resolution and primitive turbulence modeling. The requirement of square grid cells in the PowerFLOW calculations limited the number of points that could be used to span the boundary layer on the wing and still keep the computation size small enough to fit on the available computers. Although not discussed in detail, disappointing results were also obtained with PowerFLOW for a cavity flow and for the flow around a generic helicopter configuration.
Collaborative Autonomous Unmanned Aerial - Ground Vehicle Systems for Field Operations
2007-08-31
very limited payload capabilities of small UVs, sacrificing minimal computational power and run time, adhering at the same time to the low cost...configuration has been chosen because of its high computational capabilities, low power consumption, multiple I/O ports, size, low heat emission and cost. This...due to their high power to weight ratio, small packaging, and wide operating temperatures. Power distribution is controlled by the 120 Watt ATX power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brun, B.
1997-07-01
Computer technology has improved tremendously in recent years, with larger media capacity, more memory and more computational power. Visual computing with high-performance graphic interfaces and desktop computational power has changed the way engineers accomplish everyday tasks, development and safety studies analysis. The emergence of parallel computing will permit simulation over a larger domain. In addition, new development methods, languages and tools have appeared in the last several years.
NASA Astrophysics Data System (ADS)
Valasek, Lukas; Glasa, Jan
2017-12-01
Current fire simulation systems are capable of utilizing the advantages of available high-performance computing (HPC) platforms and of modeling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used for testing the efficiency of selected strategies for allocating the cluster's computational resources when using a greater number of computational cores. Simulation results indicate that if the number of cores used is not equal to a multiple of the number of cores per cluster node, there are allocation strategies which provide more efficient calculations.
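The allocation arithmetic behind that observation is simple: a request that is not a multiple of the node size still occupies whole nodes, leaving cores idle. A sketch (the whole-node assumption is hypothetical; actual placement depends on the MPI launcher and cluster policy):

```python
def allocation_efficiency(total_cores, cores_per_node):
    """Nodes occupied and cores left idle when a job requests
    total_cores on a cluster that allocates whole nodes of
    cores_per_node each."""
    nodes = -(-total_cores // cores_per_node)    # ceiling division
    idle = nodes * cores_per_node - total_cores  # stranded cores
    return nodes, idle

# A 40-core request on 16-core nodes occupies 3 nodes, stranding 8 cores.
nodes, idle = allocation_efficiency(40, 16)
```

When idle is non-zero, how the ranks are spread across the partially used nodes becomes a free parameter, which is the degree of freedom the allocation strategies in the abstract exploit.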
NASA Technical Reports Server (NTRS)
Kimble, Michael C.; Anderson, Everett B.; Jayne, Karen D.; Woodman, Alan S.
2004-01-01
Micro-tubular fuel cells that would operate at power levels on the order of hundreds of watts or less are under development as alternatives to batteries in numerous products - portable power tools, cellular telephones, laptop computers, portable television receivers, and small robotic vehicles, to name a few examples. Micro-tubular fuel cells exploit advances in the art of proton-exchange-membrane fuel cells (PEMFCs). The main advantage of the micro-tubular fuel cells over the plate-and-frame fuel cells would be higher power densities: Whereas the mass and volume power densities of low-pressure hydrogen-and-oxygen-fueled plate-and-frame fuel cells designed to operate in the targeted power range are typically less than 0.1 W/g and 0.1 kW/L, micro-tubular fuel cells are expected to reach power densities much greater than 1 W/g and 1 kW/L. Because of their higher power densities, micro-tubular fuel cells would be better for powering portable equipment, and would be better suited to applications in which there are requirements for modularity to simplify maintenance or to facilitate scaling to higher power levels. The development of PEMFCs has conventionally focused on producing large stacks of cells that operate at typical power levels >5 kW. The usual approach taken to developing lower-power PEMFCs for applications like those listed above has been to simply shrink the basic plate-and-frame configuration to smaller dimensions. A conventional plate-and-frame fuel cell contains a membrane/electrode assembly in the form of a flat membrane with electrodes of the same active area bonded to both faces. In order to provide reactants to both electrodes, bipolar plates that contain flow passages are placed on both electrodes. The mass and volume overhead of the bipolar plates amounts to about 75 percent of the total mass and volume of a fuel-cell stack. Removing these bipolar plates in the micro-tubular fuel cell significantly increases the power density.
Gender Discrimination in the Allocation of Migrant Household Resources.
Antman, Francisca M
2015-07-01
This paper considers the relationship between international migration and gender discrimination through the lens of decision-making power over intrahousehold resource allocation. The endogeneity of migration is addressed with a difference-in-differences style identification strategy and a model with household fixed effects. The results suggest that while a migrant household head is away, a greater share of resources is spent on girls relative to boys and his spouse commands greater decision-making power. Once the head returns home, however, a greater share of resources goes to boys and there is suggestive evidence of greater authority for the head of household.
2011-09-01
The system's power and interface components include: a 15 V power supply for the IMU; a switching 5/12 V ATX power supply for the computer and hard drive; an L1/L2 active antenna on a small back plane; and a USB-to-serial interface (see Figure 4, UAS Target Location Technology for Ground Based Observers (TLGBO)).
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-12-04
The software serves two purposes. The first is to prototype the Sandia High Performance Computing Power Application Programming Interface (Power API) Specification, which can be found at http://powerapi.sandia.gov. Prototypes of the specification were developed in parallel with the specification itself, so the release of the prototype will be instructive to anyone who intends to implement the specification; our vendor collaborators in particular will benefit from its availability. The second purpose is direct support of the PowerInsight power measurement device, which was co-developed with Penguin Computing. The software provides a cluster-wide measurement capability enabled by the PowerInsight device and can be used by anyone who purchases one: it allows the user to easily collect power and energy information from a node instrumented with PowerInsight. The software can also serve as an example prototype implementation of the Power API Specification.
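At its core, the measurement capability the abstract describes comes down to sampling a node's power draw and integrating the samples to energy. The sketch below is illustrative Python only, not the Power API or the PowerInsight driver; the sample values and the `integrate_energy` helper are assumptions for demonstration:

```python
# Hedged sketch (not the actual Power API or PowerInsight interface): turning
# a record of periodic power samples into an energy figure.

def integrate_energy(samples, dt):
    """Trapezoidal integration of power samples (W) at spacing dt (s) -> joules."""
    if len(samples) < 2:
        return 0.0
    return dt * (sum(samples) - 0.5 * (samples[0] + samples[-1]))

# Example: a node ramping from 100 W to 140 W and back, sampled once per second.
samples = [100.0, 120.0, 140.0, 140.0, 120.0]
print(f"energy = {integrate_energy(samples, dt=1.0):.1f} J")   # → energy = 510.0 J
```

In a real deployment the sample list would be filled by whatever measurement call the device exposes, polled at a fixed interval.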
Haidar, Azzam; Jagode, Heike; Vaccaro, Phil; ...
2018-03-22
The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.
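The abstract's central observation, that the benefit of a power cap depends on a kernel's computational intensity, can be illustrated with a toy roofline-style model. This is a hedged sketch with made-up machine numbers, not the paper's methodology:

```python
# Toy model (illustrative assumptions only): how a power cap affects time- and
# energy-to-solution for compute-bound vs. memory-bound kernels.

def time_to_solution(flops, bytes_moved, peak_flops, peak_bw):
    # Execution time is bounded by whichever resource saturates first.
    return max(flops / peak_flops, bytes_moved / peak_bw)

def energy_under_cap(flops, bytes_moved, cap_watts, peak_flops, peak_bw,
                     flops_per_watt=2e9):
    # Assume compute throughput scales with the cap; memory bandwidth does not.
    capped_flops = min(peak_flops, cap_watts * flops_per_watt)
    t = time_to_solution(flops, bytes_moved, capped_flops, peak_bw)
    return cap_watts * t, t

compute_bound = (1e12, 1e9)   # 1 TFLOP of work, 1 GB of memory traffic
memory_bound  = (1e10, 1e11)  # 10 GFLOP of work, 100 GB of memory traffic

for cap in (100, 200):  # watts
    e_c, t_c = energy_under_cap(*compute_bound, cap, peak_flops=4e11, peak_bw=1e11)
    e_m, t_m = energy_under_cap(*memory_bound, cap, peak_flops=4e11, peak_bw=1e11)
    print(f"cap={cap}W  compute-bound: {t_c:.2f}s {e_c:.0f}J | "
          f"memory-bound: {t_m:.2f}s {e_m:.0f}J")
```

In this toy model the memory-bound kernel finishes no faster at the higher cap but consumes twice the energy, which is the intuition behind capping low-intensity kernels.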
Utility of fluorescence microscopy in embryonic/fetal topographical analysis.
Zucker, R M; Elstein, K H; Shuey, D L; Ebron-McCoy, M; Rogers, J M
1995-06-01
For topographical analysis of developing embryos, investigators typically rely on scanning electron microscopy (SEM) to provide the surface detail not attainable with light microscopy. SEM is an expensive and time-consuming technique, however, and the preparation procedure may alter morphology and leave the specimen friable. We report that by using a high-resolution compound epifluorescence microscope with inexpensive low-power objectives and the fluorochrome acridine orange, we were able to obtain surface images of fixed or fresh whole rat embryos and fetal palates with considerably greater topographical detail than those obtained using routine light microscopy. Indeed, the resulting high-resolution images afford not only superior qualitative documentation of morphological observations but also the capability for detailed morphometry via digitization and computer-assisted image analysis.
Bio-Inspired Neural Model for Learning Dynamic Models
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu; Suri, Ronald
2009-01-01
A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.
Computer modeling and simulators as part of university training for NPP operating personnel
NASA Astrophysics Data System (ADS)
Volman, M.
2017-01-01
This paper considers aspects of a program for training future nuclear power plant (NPP) personnel developed by the NPP Department of Ivanovo State Power Engineering University. Computer modeling in Mathcad is used for numerical experiments on nuclear reactor kinetics. Simulation modeling is carried out on computer-based and full-scale simulators of a water-cooled power reactor to reproduce neutron-physics reactor measurements and start-up and shutdown procedures.
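The kind of numerical experiment on reactor kinetics mentioned above (carried out in Mathcad in the course) can be sketched with the one-delayed-group point-kinetics equations. The parameter values below are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of a point-kinetics numerical experiment with one effective
# delayed-neutron group, integrated by explicit Euler. All parameters are
# illustrative, not from the course materials.

beta, lam, Lambda = 0.0065, 0.08, 1e-4  # delayed fraction, decay const (1/s), generation time (s)

def step(n, C, rho, dt):
    # One Euler step of dn/dt = ((rho - beta)/Lambda) n + lam C,
    #                  dC/dt = (beta/Lambda) n - lam C.
    dn = ((rho - beta) / Lambda) * n + lam * C
    dC = (beta / Lambda) * n - lam * C
    return n + dt * dn, C + dt * dC

# Start critical (rho = 0) at equilibrium, then insert a small reactivity.
n, C = 1.0, beta / (Lambda * lam)   # equilibrium precursor concentration
for _ in range(10_000):             # 0.1 s of transient at dt = 1e-5 s
    n, C = step(n, C, rho=0.001, dt=1e-5)
print(f"relative power after 0.1 s: {n:.3f}")
```

The run reproduces the familiar prompt jump, n ≈ β/(β − ρ), followed by slow growth on the delayed-neutron time scale.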
Ruthenium Oxide Electrochemical Super Capacitor Optimization for Pulse Power Applications
NASA Technical Reports Server (NTRS)
Merryman, Stephen A.; Chen, Zheng
2000-01-01
Electrical actuator systems are being pursued as alternatives to hydraulic systems to reduce maintenance time, weight, and costs while increasing reliability. Additionally, safety and environmental hazards associated with hydraulic fluids can be eliminated. For most actuation systems, the actuation process is typically pulsed, with high peak power requirements but relatively modest average power levels. The power-time requirements for electrical actuators are characteristic of pulsed power technologies, where the source can be sized for the average power level while providing the capability to meet the peak requirements. Options for the power source include battery systems, capacitor systems, and battery-capacitor hybrid systems. Battery technologies are energy dense but deficient in power density; capacitor technologies are power dense but limited in energy density. The battery-capacitor hybrid system uses the battery to supply the average power and the capacitor to meet the peak demands. Previous work has demonstrated that the hybrid electrical power source can potentially provide a weight savings of approximately 59% over a battery-only source. Electrochemical capacitors have many properties that make them well suited for electrical actuator applications: they have the highest demonstrated energy density for capacitive storage (up to 100 J/g), power densities much greater than most battery technologies (greater than 30 kW/kg), endurance of more than one million charge-discharge cycles, the ability to be charged at extremely high rates, and non-explosive failure modes. Thus, electrochemical capacitors exhibit a combination of desirable battery and capacitor characteristics.
The Experimental Mathematician: The Pleasure of Discovery and the Role of Proof
ERIC Educational Resources Information Center
Borwein, Jonathan M.
2005-01-01
The emergence of powerful mathematical computing environments, the growing availability of correspondingly powerful (multi-processor) computers and the pervasive presence of the Internet allow for mathematicians, students and teachers, to proceed heuristically and "quasi-inductively." We may increasingly use symbolic and numeric computation,…
ACTN3 genotypes of Rugby Union players: distribution, power output and body composition.
Bell, W; Colley, J P; Evans, W D; Darlington, S E; Cooper, S-M
2012-01-01
To identify the distribution of ACTN3 genotypes and explore their relationship with power and body composition phenotypes, case-control and association studies were employed using a homogeneous group of players (n = 102) and a control group (n = 110). Power-related phenotypes were measured using the countermovement jump (CMJ) and body composition phenotypes by dual-energy X-ray absorptiometry (DXA). Statistics used were Pearson's chi-square, ANCOVA, coefficients of correlation, and independent t-tests. Genotyping was carried out using the polymerase chain reaction followed by enzymatic DdeI digestion. Genotype proportions of players were compared with controls (p = 0.07). No significant genotype differences occurred between forwards and backs (p = 0.822), within-forwards (p = 0.882), or within-backs (p = 0.07). Relative force and velocity were significantly larger in backs and power significantly greater in forwards; in body composition, all phenotypes were significantly greater in forwards than backs. Correlations between phenotypes were greater for the RX genotype (p = 0.05-0.01). Relationships between ACTN3 genotypes and power or body composition-related phenotypes were not significant. As fat increased, power-related phenotypes decreased. As body composition increased, power-related phenotypes increased.
High-temperature, high-power-density thermionic energy conversion for space
NASA Technical Reports Server (NTRS)
Morris, J. F.
1977-01-01
Theoretical converter outputs and efficiencies indicate the need to consider thermionic energy conversion (TEC) with greater power densities and higher temperatures, within reasonable limits, for space missions. Converter output power density, voltage, and efficiency as functions of current density were determined for 1400-to-2000 K emitters with 725-to-1000 K collectors. The results encourage utilization of TEC with hotter-than-1650 K emitters and outputs greater than 6 W/sq cm to attain better efficiencies, greater voltages, and higher waste-heat-rejection temperatures for multihundred-kilowatt space-power applications. For example, 1800 K, 30 A/sq cm TEC operation for NEP, compared with the 1650 K, 5 A/sq cm case, should allow much lower radiator weights, substantially fewer and/or smaller emitter heat pipes, significantly reduced reactor and shield-related weights, many fewer converters and associated current-collecting bus bars, less power conditioning, and lower transmission losses. Integration of these effects should yield considerably reduced NEP specific weights.
Meerwijk, Esther L; Ford, Judith M; Weiss, Sandra J
2015-02-01
Psychological pain is a prominent symptom of clinical depression. We asked if frontal alpha asymmetry, frontal EEG power, and frontal fractal dimension asymmetry predicted psychological pain in adults with a history of depression. Resting-state frontal EEG (F3/F4) was recorded while participants (N=35) sat upright with their eyes closed. Frontal delta power predicted psychological pain while controlling for depressive symptoms, with participants who exhibited less power experiencing greater psychological pain. Frontal fractal dimension asymmetry, a nonlinear measure of complexity, also predicted psychological pain, such that greater left than right complexity was associated with greater psychological pain. Frontal alpha asymmetry did not contribute unique variance to any regression model of psychological pain. As resting-state delta power is associated with the brain's default mode network, results suggest that the default mode network was less activated during high psychological pain. Findings are consistent with a state of arousal associated with psychological pain. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Mason, B. S.; Pearson, T. J.; Readhead, A. C. S.; Shepherd, M. C.; Sievers, J.; Udomprasert, P. S.; Cartwright, J. K.; Farmer, A. J.; Padin, S.; Myers, S. T.;
2002-01-01
We report measurements of anisotropy in the cosmic microwave background radiation over the multipole range l ≈ 200-3500 with the Cosmic Background Imager, based on deep observations of three fields. These results confirm the drop in power with increasing l first reported in earlier measurements with this instrument, and extend the observations of this decline in power out to l ≈ 2000. The decline in power is consistent with the predicted damping of primary anisotropies. At larger multipoles, l = 2000-3500, the power is 3.1 sigma greater than standard models for intrinsic microwave background anisotropy in this multipole range, and 3.5 sigma greater than zero. This excess power is not consistent with expected levels of residual radio source contamination but, for sigma_8 ≳ 1, is consistent with predicted levels due to a secondary Sunyaev-Zel'dovich anisotropy. Further observations are necessary to confirm the level of this excess and, if confirmed, determine its origin.
Wireless live streaming video of laparoscopic surgery: a bandwidth analysis for handheld computers.
Gandsas, Alex; McIntire, Katherine; George, Ivan M; Witzke, Wayne; Hoskins, James D; Park, Adrian
2002-01-01
Over the last six years, streaming media has emerged as a powerful tool for delivering multimedia content over networks. Concurrently, wireless technology has evolved, freeing users from desktop boundaries and wired infrastructures. At the University of Kentucky Medical Center, we have integrated these technologies to develop a system that can wirelessly transmit live surgery from the operating room to a handheld computer. This study establishes the feasibility of using our system to view surgeries and describes the effect of bandwidth on image quality. A live laparoscopic ventral hernia repair was transmitted to a single handheld computer using five encoding speeds at a constant frame rate, and the quality of the resulting streaming images was evaluated. No video images were rendered when video data were encoded at 28.8 kilobits per second (Kbps), the slowest encoding bitrate studied. The highest-quality images were rendered at encoding speeds greater than or equal to 150 Kbps. Of note, a 15-second transmission delay was experienced using all four encoding schemes that rendered video images. We believe that the wireless transmission of streaming video to handheld computers has tremendous potential to enhance surgical education. For medical students and residents, the ability to view live surgeries, lectures, courses, and seminars on handheld computers means a larger number of learning opportunities. In addition, we envision that wireless-enabled devices may be used to telemonitor surgical procedures. However, bandwidth availability and streaming delay are major issues that must be addressed before wireless telementoring becomes a reality.
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
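The sketching step at the heart of the RGA can be illustrated on an ordinary least-squares problem: multiplying a tall system by a short random matrix preserves the solution while drastically shrinking the data. A hedged NumPy sketch (not the Julia/MADS implementation; problem sizes are illustrative):

```python
# Hedged sketch of randomized dimension reduction for a tall inverse problem:
# a Gaussian sketching matrix S compresses m observations to k << m rows
# while approximately preserving the least-squares solution.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 20_000, 10, 200          # many observations, few parameters, sketch size

A = rng.standard_normal((m, n))    # forward operator (synthetic)
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy observations

S = rng.standard_normal((k, m)) / np.sqrt(k)     # sketching matrix
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)   # solve k x n system
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)             # solve m x n system

print("sketched vs full solution relative error:",
      np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full))
```

The sketched solve touches only a k-row system, which is how the RGA's cost can scale with information content rather than the raw size of the calibration data.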
Software Support for Transiently Powered Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Der Woude, Joel Matthew
With the continued reduction in size and cost of computing, power becomes an increasingly heavy burden on system designers for embedded applications. While energy harvesting techniques are an increasingly desirable solution for many deeply embedded applications where size and lifetime are a priority, previous work has shown that energy harvesting provides insufficient power for long-running computation. We present Ratchet, which to the authors' knowledge is the first automatic, software-only checkpointing system for energy harvesting platforms. We show that Ratchet provides a means to extend computation across power cycles consistent with those experienced by energy harvesting devices. We demonstrate the correctness of our system under frequent failures and show that it has an average overhead of 58.9% across a suite of benchmarks representative of embedded applications.
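The core idea behind checkpointing for transiently powered computation can be sketched at a much higher level than Ratchet (which instruments compiled code): persist progress atomically so that a power failure costs at most one unit of work. A hypothetical Python illustration:

```python
# Hedged sketch of checkpoint/restore across simulated power failures (not
# Ratchet itself). Progress is written atomically so a crash never leaves a
# half-written checkpoint.
import json, os

CKPT = "checkpoint.json"

def load():
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)
    return {"i": 0, "acc": 0}

def save(state):
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CKPT)          # atomic rename: all-or-nothing checkpoint

def run(n, die_at=None):
    state = load()
    while state["i"] < n:
        if die_at is not None and state["i"] == die_at:
            raise RuntimeError("power failure")   # simulated brown-out
        state["acc"] += state["i"]                # the "work": sum 0..n-1
        state["i"] += 1
        save(state)
    return state["acc"]

try:
    run(100, die_at=40)            # first run loses power mid-way
except RuntimeError:
    pass
print("resumed total:", run(100))  # → resumed total: 4950
```

The second call picks up from the last durable state instead of restarting, which is exactly the property an energy-harvesting device needs across power cycles.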
NASA Technical Reports Server (NTRS)
Goltz, G.; Kaiser, L. M.; Weiner, H.
1977-01-01
A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document establishes the software requirements for the DSPA computer program, discusses the processing that occurs within the program, and defines the necessary interfaces for operation.
NASA Astrophysics Data System (ADS)
Glatter, Otto; Fuchs, Heribert; Jorde, Christian; Eigner, Wolf-Dieter
1987-03-01
The microprocessor of an 8-bit PC system is used as a central control unit for the acquisition and evaluation of data from quasi-elastic light scattering experiments. Data are sampled with a width of 8 bits under control of the CPU, which limits the minimum sample time to 20 μs; shorter sample times would require a direct memory access channel. The 8-bit CPU can address a 64-kbyte RAM without additional paging. Up to 49,000 sample points can be measured without interruption. After storage, a correlation function or a power spectrum can be calculated from such a primary data set. Furthermore, access is provided to the primary data for stability control, statistical tests, and comparison of different evaluation methods for the same experiment. A detailed analysis of the signal (histogram) and of the effect of overflows is possible and shows that the number of pulses, not the number of overflows, determines the error in the result. The correlation function can be computed with reasonable accuracy from data with a mean pulse rate greater than one; the power spectrum needs a pulse rate three times higher for convergence. The statistical accuracy of the results from 49,000 sample points is of the order of a few percent; additional averages are necessary to improve their quality. The hardware extensions for the PC system are inexpensive. The main disadvantages of the present system are the high minimum sampling time of 20 μs and the fact that the correlogram or the power spectrum cannot be computed on-line as it can be with hardware correlators or spectrum analyzers. These shortcomings and the storage-size restrictions can be removed with a faster 16/32-bit CPU.
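The post-acquisition computation the abstract describes, deriving a correlation function or power spectrum from the stored samples, can be sketched in modern NumPy via the Wiener-Khinchin relation. This is an illustrative reconstruction with synthetic data, not the original 8-bit code:

```python
# Hedged sketch (modern NumPy, not the 8-bit system's code): estimating the
# autocorrelation and power spectrum of a photon-count record after storage.
import numpy as np

rng = np.random.default_rng(1)
N, tau = 49_000, 50                      # samples, correlation time in sample units

# Synthetic intensity trace: exponentially correlated fluctuations + shot noise.
x = np.empty(N)
x[0] = 0.0
a = np.exp(-1.0 / tau)
for i in range(1, N):
    x[i] = a * x[i - 1] + rng.standard_normal()
counts = rng.poisson(5.0 + x - x.min())  # non-negative Poisson count samples

# Autocorrelation via the Wiener-Khinchin theorem (inverse FFT of the periodogram).
c = counts - counts.mean()
spec = np.abs(np.fft.rfft(c)) ** 2       # power spectrum estimate
acf = np.fft.irfft(spec, n=N)[: 4 * tau] # circular autocovariance at small lags
acf /= acf[0]                            # normalize to ACF(0) = 1

print("normalized ACF at lag tau:", round(float(acf[tau]), 2))
```

The shot-noise contribution inflates the zero-lag value, so the normalized ACF at lag τ sits below the noise-free exp(−1); the decay with lag is still clearly visible.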
Multiscale modeling of mucosal immune responses
2015-01-01
Computational modeling techniques are playing increasingly important roles in advancing a systems-level mechanistic understanding of biological processes, and computer simulations guide and underpin experimental and clinical efforts. This study presents the ENteric Immune Simulator (ENISI), a multiscale modeling tool for modeling mucosal immune responses. ENISI's modeling environment can simulate in silico experiments from molecular signaling pathways to tissue-level events such as tissue lesion formation. ENISI's architecture integrates multiple modeling technologies, including ABM (agent-based modeling), ODE (ordinary differential equations), SDE (stochastic differential equations), and PDE (partial differential equations). This paper focuses on the implementation and developmental challenges of ENISI. A multiscale model of mucosal immune responses during colonic inflammation, including CD4+ T cell differentiation and tissue-level cell-cell interactions, was developed to illustrate the capabilities, power, and scope of ENISI MSM.
Background: Computational techniques are becoming increasingly powerful, and modeling tools for biological systems are increasingly needed. Biological systems are inherently multiscale, from molecules to tissues and from nanoseconds to a lifespan of several years or decades. ENISI MSM integrates multiple modeling technologies to understand immunological processes from signaling pathways within cells to lesion formation at the tissue level. This paper examines and summarizes the technical details of ENISI, from its initial version to its latest cutting-edge implementation.
Implementation: An object-oriented programming approach is adopted to develop a suite of tools based on ENISI. Multiple modeling technologies are integrated to visualize tissues, cells, and proteins; furthermore, performance matching between the scales is addressed.
Conclusion: We used ENISI MSM to develop predictive multiscale models of the mucosal immune system during gut inflammation. Our modeling predictions dissect the mechanisms by which effector CD4+ T cell responses contribute to tissue damage in the gut mucosa following immune dysregulation. PMID:26329787
Karimi, Davood; Ward, Rabab K
2016-10-01
Image models are central to all image processing tasks. The great advancements in digital image processing would not have been possible without powerful models which, themselves, have evolved over time. In the past decade, "patch-based" models have emerged as one of the most effective models for natural images, and patch-based methods have outperformed competing methods in many image processing tasks. These developments have come at a time when greater availability of powerful computational resources and growing concerns over the health risks of ionizing radiation encourage research on image processing algorithms for computed tomography (CT). The goal of this paper is to explain the principles of patch-based methods and to review some of their recent applications in CT. We first review the central concepts in patch-based image processing and explain some of the state-of-the-art algorithms, with a focus on aspects that are more relevant to CT. Then, we review some of the recent applications of patch-based methods in CT. Patch-based methods have already transformed the field of image processing, leading to state-of-the-art results in many applications. More recently, several studies have proposed patch-based algorithms for various image processing tasks in CT, from denoising and restoration to iterative reconstruction. Although these studies have reported good results, the true potential of patch-based methods for CT has not yet been appreciated. Patch-based methods can play a central role in image reconstruction and processing for CT and have the potential to lead to substantial improvements in the current state of the art.
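The essence of patch-based processing can be shown in a few lines: estimate each sample by a weighted average of samples whose surrounding patches look similar. The following is a toy 1-D non-local-means sketch, not any specific algorithm from the CT literature:

```python
# Hedged sketch of the patch-based idea in its simplest form: a toy 1-D
# non-local means. Each sample is replaced by a weighted average of samples
# whose neighborhoods (patches) resemble its own.
import numpy as np

def nlm_1d(signal, patch=3, h=0.5):
    n = len(signal)
    pad = np.pad(signal, patch, mode="edge")
    # One patch of length 2*patch+1 centered on each sample.
    patches = np.array([pad[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)  # patch dissimilarity
        w = np.exp(-d2 / h ** 2)                         # similar patches weigh more
        out[i] = (w * signal).sum() / w.sum()
    return out

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 50)     # piecewise-constant signal
noisy = clean + 0.3 * rng.standard_normal(150)
denoised = nlm_1d(noisy)
print("MSE noisy:", round(float(((noisy - clean) ** 2).mean()), 4),
      "-> denoised:", round(float(((denoised - clean) ** 2).mean()), 4))
```

Because similar patches recur across the signal, the weighted average suppresses noise while patches straddling the edges keep the discontinuities from being blurred away entirely; the same principle, in 2-D and with many refinements, underlies the CT methods the review surveys.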
Multiscale modeling of mucosal immune responses.
Mei, Yongguo; Abedi, Vida; Carbo, Adria; Zhang, Xiaoying; Lu, Pinyi; Philipson, Casandra; Hontecillas, Raquel; Hoops, Stefan; Liles, Nathan; Bassaganya-Riera, Josep
2015-01-01
Computational techniques are becoming increasingly powerful, and modeling tools for biological systems are increasingly needed. Biological systems are inherently multiscale, from molecules to tissues and from nanoseconds to a lifespan of several years or decades. ENISI MSM integrates multiple modeling technologies to understand immunological processes from signaling pathways within cells to lesion formation at the tissue level. This paper examines and summarizes the technical details of ENISI, from its initial version to its latest cutting-edge implementation. An object-oriented programming approach is adopted to develop a suite of tools based on ENISI. Multiple modeling technologies are integrated to visualize tissues, cells, and proteins; furthermore, performance matching between the scales is addressed. We used ENISI MSM to develop predictive multiscale models of the mucosal immune system during gut inflammation. Our modeling predictions dissect the mechanisms by which effector CD4+ T cell responses contribute to tissue damage in the gut mucosa following immune dysregulation. Computational modeling techniques are playing increasingly important roles in advancing a systems-level mechanistic understanding of biological processes. Computer simulations guide and underpin experimental and clinical efforts. This study presents the ENteric Immune Simulator (ENISI), a multiscale modeling tool for modeling mucosal immune responses. ENISI's modeling environment can simulate in silico experiments from molecular signaling pathways to tissue-level events such as tissue lesion formation. ENISI's architecture integrates multiple modeling technologies, including ABM (agent-based modeling), ODE (ordinary differential equations), SDE (stochastic differential equations), and PDE (partial differential equations). This paper focuses on the implementation and developmental challenges of ENISI.
A multiscale model of mucosal immune responses during colonic inflammation, including CD4+ T cell differentiation and tissue level cell-cell interactions was developed to illustrate the capabilities, power and scope of ENISI MSM.
G-STRATEGY: Optimal Selection of Individuals for Sequencing in Genetic Association Studies
Wang, Miaoyan; Jakobsdottir, Johanna; Smith, Albert V.; McPeek, Mary Sara
2017-01-01
In a large-scale genetic association study, the number of phenotyped individuals available for sequencing may, in some cases, be greater than the study's sequencing budget will allow. In that case, it can be important to prioritize individuals for sequencing in a way that optimizes power for association with the trait. Suppose a cohort of phenotyped individuals is available, with some subset of them possibly already sequenced, and one wants to choose an additional fixed-size subset of individuals to sequence in such a way that the power to detect association is maximized. When the phenotyped sample includes related individuals, power for association can be gained by including partial information, such as phenotype data of ungenotyped relatives, in the analysis, and this should be taken into account when assessing whom to sequence. We propose G-STRATEGY, which uses simulated annealing to choose a subset of individuals for sequencing that maximizes the expected power for association. In simulations, G-STRATEGY performs extremely well for a range of complex disease models and outperforms other strategies with, in many cases, relative power increases of 20-40% over the next best strategy, while maintaining correct type 1 error. G-STRATEGY is computationally feasible even for large datasets and complex pedigrees. We apply G-STRATEGY to data on HDL and LDL from the AGES-Reykjavik and REFINE-Reykjavik studies, in which G-STRATEGY is able to closely approximate the power of sequencing the full sample by selecting for sequencing only a small subset of the individuals. PMID:27256766
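The selection machinery described above can be sketched generically: simulated annealing over fixed-size subsets, proposing single-element swaps. The objective below is a simple separable stand-in, not the expected association power that G-STRATEGY actually computes:

```python
# Hedged sketch of subset selection by simulated annealing (not G-STRATEGY).
# The per-individual "scores" and the additive objective are illustrative
# stand-ins for a real power criterion, which would not be separable.
import math, random

random.seed(0)
scores = [random.random() for _ in range(60)]   # hypothetical per-individual value

def objective(subset):
    return sum(scores[i] for i in subset)

def anneal(k, iters=20_000, t0=1.0):
    current = set(random.sample(range(len(scores)), k))
    best = set(current)
    for it in range(iters):
        t = t0 * (1 - it / iters) + 1e-6        # linear cooling schedule
        out = random.choice(list(current))
        inn = random.choice([i for i in range(len(scores)) if i not in current])
        delta = scores[inn] - scores[out]
        # Accept improving swaps always; worsening swaps with Boltzmann probability.
        if delta > 0 or random.random() < math.exp(delta / t):
            current.remove(out)
            current.add(inn)
            if objective(current) > objective(best):
                best = set(current)
    return best

best = anneal(k=10)
print("best subset objective:", round(objective(best), 3))
```

With a non-separable objective (as in the real method, where relatedness makes individuals' contributions interact), the same swap-and-anneal loop applies unchanged; only `objective` becomes expensive.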
Organization of the secure distributed computing based on multi-agent system
NASA Astrophysics Data System (ADS)
Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera
2018-04-01
Methods for distributed computing are currently receiving much attention, and one approach is the use of multi-agent systems. Distributed computing organized over a conventional network of computers can face security threats posed by the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with networked PCs used as the nodes. The proposed multi-agent control system makes it possible to quickly harness the processing power of the computers on any existing network to solve large tasks through distributed computing. The agents can configure the distributed computing system, distribute the computational load among the computers they manage, and optimize the system according to the computing power of the machines on the network. The number of participating computers can be increased by connecting new machines to the system, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment, where the number of computers on the network varies dynamically. The developed multi-agent system also detects falsification of the results of the distributed system, which could otherwise lead to wrong decisions, and it checks and corrects erroneous results.
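One of the agent functions described, distributing the computational load according to each computer's power, can be sketched as a largest-remainder proportional allocation. The node names and benchmark scores below are hypothetical:

```python
# Hedged sketch (not the paper's agent code): splitting a task batch across
# nodes in proportion to each node's measured compute power, using
# largest-remainder rounding so every task is assigned exactly once.

def distribute(n_tasks, node_power):
    """Allocate n_tasks across nodes proportionally to node_power."""
    total = sum(node_power.values())
    quotas = {n: n_tasks * p / total for n, p in node_power.items()}
    alloc = {n: int(q) for n, q in quotas.items()}     # floor of each quota
    leftover = n_tasks - sum(alloc.values())
    # Give remaining tasks to the nodes with the largest fractional remainders.
    for n in sorted(quotas, key=lambda m: quotas[m] - alloc[m], reverse=True)[:leftover]:
        alloc[n] += 1
    return alloc

nodes = {"pc1": 4.0, "pc2": 2.0, "pc3": 1.0, "pc4": 1.0}  # relative benchmark scores
print(distribute(100, nodes))   # → {'pc1': 50, 'pc2': 25, 'pc3': 13, 'pc4': 12}
```

The result-verification aspect (detecting falsified results) would sit on top of this, e.g. by issuing a fraction of tasks redundantly to independent nodes and comparing answers.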
Optimizing Data Centre Energy and Environmental Costs
NASA Astrophysics Data System (ADS)
Aikema, David Hendrik
Data centres use an estimated 2% of US electrical power which accounts for much of their total cost of ownership. This consumption continues to grow, further straining power grids attempting to integrate more renewable energy. This dissertation focuses on assessing and reducing data centre environmental and financial costs. Emissions of projects undertaken to lower the data centre environmental footprints can be assessed and the emission reduction projects compared using an ISO-14064-2-compliant greenhouse gas reduction protocol outlined herein. I was closely involved with the development of the protocol. Full lifecycle analysis and verifying that projects exceed business-as-usual expectations are addressed, and a test project is described. Consuming power when it is low cost or when renewable energy is available can be used to reduce the financial and environmental costs of computing. Adaptation based on the power price showed 10--50% potential savings in typical cases, and local renewable energy use could be increased by 10--80%. Allowing a fraction of high-priority tasks to proceed unimpeded still allows significant savings. Power grid operators use mechanisms called ancillary services to address variation and system failures, paying organizations to alter power consumption on request. By bidding to offer these services, data centres may be able to lower their energy costs while reducing their environmental impact. If providing contingency reserves which require only infrequent action, savings of up to 12% were seen in simulations. Greater power cost savings are possible for those ceding more control to the power grid operator. Coordinating multiple data centres adds overhead, and altering at which data centre requests are processed based on changes in the financial or environmental costs of power is likely to increase this overhead. 
Tests of virtual machine migrations showed that in some cases there was no visible increase in power use while in others power use rose by 20--30W. Estimates of how migration was likely to impact other services used in current cloud environments were derived.
Spousal Employment and Intra-Household Bargaining Power.
Antman, Francisca M
2014-05-01
This paper considers the relationship between work status and decision-making power of the head of household and his spouse. I use household fixed effects models to address the possibility that spousal work status may be correlated with unobserved factors that also affect bargaining power within the home. Consistent with the hypothesis that greater economic resources yield greater bargaining power, I find that the spouse of the head of household is more likely to be involved in decisions when she has been employed. Similarly, the head of household is less likely to be the sole decision-maker when his spouse works.
Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus
2016-05-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.
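With power samples of sufficient temporal resolution, per-kernel energy is typically estimated by integrating the power trace over the kernel's run time. A minimal sketch assuming (time, power) sample pairs; the trapezoidal rule here is a generic choice, not a detail taken from the paper's instrumentation.

```python
def energy_joules(samples):
    """Integrate (time_s, power_w) samples with the trapezoidal rule to
    estimate the energy consumed over the sampled interval."""
    e = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        e += 0.5 * (p0 + p1) * (t1 - t0)
    return e

# A kernel drawing a steady 10 W for 2 s consumes 20 J.
trace = [(0.0, 10.0), (1.0, 10.0), (2.0, 10.0)]
e = energy_joules(trace)
```

Energy, rather than instantaneous power, is the quantity that makes energy-efficiency comparisons between CPU algorithms and GPU kernels meaningful.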
NASA Technical Reports Server (NTRS)
1974-01-01
The manual for the use of the computer program SYSTID under the Univac operating system is presented. The computer program is used in the simulation and evaluation of the space shuttle orbiter electric power supply. The models described in the handbook are those which were available in the original versions of SYSTID. The subjects discussed are: (1) program description, (2) input language, (3) node typing, (4) problem submission, and (5) basic and power system SYSTID libraries.
NASA Technical Reports Server (NTRS)
Lichtenstein, J. H.
1975-01-01
Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.
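The link the abstract draws between larger power spectra and larger root-mean-square responses is Parseval's relation: the mean square of a signal equals the integral of its one-sided PSD. A minimal pure-Python sketch using a direct DFT (adequate for short records; production code would use an FFT-based estimator such as Welch's method):

```python
import math

def power_spectrum(x, dt):
    """One-sided power spectral density of a real, uniformly sampled
    signal via a direct DFT."""
    n = len(x)
    df = 1.0 / (n * dt)
    psd = []
    for k in range(n // 2 + 1):
        re = sum(x[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))
        im = -sum(x[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))
        scale = 1.0 if k in (0, n // 2) else 2.0  # fold negative frequencies
        psd.append(scale * (re * re + im * im) * dt / n)
    return psd, df

# Parseval: integrating the PSD recovers the mean-square (hence RMS) response.
dt, n = 0.01, 256
x = [math.sin(2 * math.pi * 5 * j * dt) for j in range(n)]  # 5 Hz response
psd, df = power_spectrum(x, dt)
mean_square = sum(psd) * df
rms = math.sqrt(mean_square)
```

This is why an airplane with larger rate and acceleration power spectra necessarily shows larger corresponding RMS values.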
Small Universal Bacteria and Plasmid Computing Systems.
Wang, Xun; Zheng, Pan; Ma, Tongmao; Song, Tao
2018-05-29
Bacterial computing is a known candidate in natural computing, the aim being to construct "bacterial computers" for solving complex problems. In this paper, a new kind of bacterial computing system, named the bacteria and plasmid computing system (BP system), is proposed. We investigate the computational power of BP systems with finite numbers of bacteria and plasmids. Specifically, it is obtained in a constructive way that a BP system with 2 bacteria and 34 plasmids is Turing universal. The results provide a theoretical cornerstone to construct powerful bacterial computers and demonstrate a concept of paradigms using a "reasonable" number of bacteria and plasmids for such devices.
Klonoff, David C
2017-07-01
The Internet of Things (IoT) is generating an immense volume of data. With cloud computing, medical sensor and actuator data can be stored and analyzed remotely by distributed servers. The results can then be delivered via the Internet. The number of devices in IoT includes such wireless diabetes devices as blood glucose monitors, continuous glucose monitors, insulin pens, insulin pumps, and closed-loop systems. The cloud model for data storage and analysis is increasingly unable to process the data avalanche, and processing is being pushed out to the edge of the network closer to where the data-generating devices are. Fog computing and edge computing are two architectures for data handling that can offload data from the cloud, process it near the patient, and transmit information machine-to-machine or machine-to-human in milliseconds or seconds. Sensor data can be processed near the sensing and actuating devices with fog computing (with local nodes) and with edge computing (within the sensing devices). Compared to cloud computing, fog computing and edge computing offer five advantages: (1) greater data transmission speed, (2) less dependence on limited bandwidths, (3) greater privacy and security, (4) greater control over data generated in foreign countries where laws may limit use or permit unwanted governmental access, and (5) lower costs because more sensor-derived data are used locally and less data are transmitted remotely. Connected diabetes devices almost all use fog computing or edge computing because diabetes patients require a very rapid response to sensor input and cannot tolerate delays for cloud computing.
Bandeira, Teresa; Negreiro, Filipa; Ferreira, Rosário; Salgueiro, Marisa; Lobo, Luísa; Aguiar, Pedro; Trindade, J C
2011-06-01
Few reports have compared chronic obstructive lung diseases (OLDs) starting in childhood. The aim was to describe the functional, radiological, and biological features of obliterative bronchiolitis (OB), to discriminate it from problematic severe asthma (PSA), and to identify a group with overlapping features. Patients with OB showed a greater degree of obstructive lung defect and higher hyperinflation (P < 0.001). The most frequent high-resolution computed tomography (HRCT) features (increased lung volume, inspiratory decreased attenuation, mosaic pattern, and expiratory air trapping) showed significantly greater scores in OB patients. Patients with PSA showed a higher frequency of atopy (P < 0.05). ROC curve analysis demonstrated discriminative power between diagnoses for the lung function variables, HRCT findings, and atopy. Further analysis yielded five final variables that were more accurate for identifying a third diagnostic group (FVC%t, post-bronchodilator ΔFEV(1) in ml, HRCT mosaic pattern, SPT, and D. pteronyssinus-specific IgE). We found that OB and PSA possess identifiable characteristic features, but overlapping values may render them indistinguishable. Copyright © 2011 Wiley-Liss, Inc.
A morphometric study on the cross-sections of the scapular spine in dogs.
Ocal, M K; Toros, G
2007-01-01
In cases of unstable scapular body fractures, the base of the scapular spine is one of the sites where there is adequate bone for the application of plate fixation in dogs. In this type of fixation, the amount of bone is an important factor with regard to the holding power of the screw from the biomechanical viewpoint. Therefore, the aim of this paper is to present detailed quantitative features of the sectional area of the scapular spine in dogs. A total of 28 scapulas from 14 dogs were used, and each was divided into 10 equal slices. The height of the scapular spine and the depths of the supraspinous and infraspinous fossae were measured from the scanned images with the aid of a computer program. The results showed that the depth of the supraspinous fossa was greater in the ventral half of the spine, while the depth of the infraspinous fossa was greater in the dorsal half. The differences between the depths of the two fossae were noticeable in the ventral half of the scapular spine.
Korte, F Steven; McDonald, Kerry S
2007-01-01
The effects of sarcomere length (SL) on sarcomeric loaded shortening velocity, power output and rates of force development were examined in rat skinned cardiac myocytes that contained either α-myosin heavy chain (α-MyHC) or β-MyHC at 12 ± 1°C. When SL was decreased from 2.3 μm to 2.0 μm submaximal isometric force decreased ∼40% in both α-MyHC and β-MyHC myocytes while peak absolute power output decreased 55% in α-MyHC myocytes and 70% in β-MyHC myocytes. After normalization for the fall in force, peak power output decreased about twice as much in β-MyHC as in α-MyHC myocytes (41%versus 20%). To determine whether the fall in normalized power was due to the lower force levels, [Ca2+] was increased at short SL to match force at long SL. Surprisingly, this led to a 32% greater peak normalized power output at short SL compared to long SL in α-MyHC myocytes, whereas in β-MyHC myocytes peak normalized power output remained depressed at short SL. The role that interfilament spacing plays in determining SL dependence of power was tested by myocyte compression at short SL. Addition of 2% dextran at short SL decreased myocyte width and increased force to levels obtained at long SL, and increased peak normalized power output to values greater than at long SL in both α-MyHC and β-MyHC myocytes. The rate constant of force development (ktr) was also measured and was not different between long and short SL at the same [Ca2+] in α-MyHC myocytes but was greater at short SL in β-MyHC myocytes. At short SL with matched force by either dextran or [Ca2+], ktr was greater than at long SL in both α-MyHC and β-MyHC myocytes. Overall, these results are consistent with the idea that an intrinsic length component increases loaded crossbridge cycling rates at short SL and β-MyHC myocytes exhibit a greater sarcomere length dependence of power output. PMID:17347271
Maximizing Computational Capability with Minimal Power
2009-03-01
Chip-Scale Energy, Power, and Heat. [Briefing slides; only fragments are recoverable: an imager chip with a VMM computational pixel array mounted on an optical bench with polarizers, an XYZ translator, and an LCD interfaced with the computer; quoted signal-routing power figures do not include off-chip communication (i.e., accessing memory); for CMOS, dynamic power follows P = ½ C Vdd² f.]
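The CMOS dynamic-power relation quoted on the slides, P = ½ C Vdd² f, can be checked numerically. The activity-factor parameter below is a standard generalization of the formula, not something stated on the slides.

```python
def cmos_dynamic_power(c_load_f, vdd_v, f_hz, activity=1.0):
    """Dynamic switching power of CMOS logic: P = 1/2 * a * C * Vdd^2 * f,
    where `a` is the fraction of the capacitance switched each cycle."""
    return 0.5 * activity * c_load_f * vdd_v ** 2 * f_hz

# e.g. 10 pF switched at 1 V and 1 GHz dissipates 5 mW
p = cmos_dynamic_power(10e-12, 1.0, 1e9)
```

The quadratic dependence on Vdd is why voltage scaling is the dominant lever for chip-scale power reduction: halving Vdd quarters the dynamic power.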
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
... Documents Access and Management System (ADAMS): You may access publicly available documents online in the... Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants,'' issued for... Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION: Revision...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...
76 FR 40943 - Notice of Issuance of Regulatory Guide
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-12
..., Revision 3, ``Criteria for Use of Computers in Safety Systems of Nuclear Power Plants.'' FOR FURTHER..., ``Criteria for Use of Computers in Safety Systems of Nuclear Power Plants,'' was issued with a temporary... Fuel Reprocessing Plants,'' to 10 CFR part 50 with regard to the use of computers in safety systems of...
Unity Power Factor Operated PFC Converter Based Power Supply for Computers
NASA Astrophysics Data System (ADS)
Singh, Shikha; Singh, Bhim; Bhuvaneswari, G.; Bist, Vashist
2017-11-01
Power Supplies (PSs) employed in personal computers pollute the single-phase ac mains by drawing distorted current at a substandard Power Factor (PF). The harmonic distortion of the supply current in these personal computers is observed to be 75% to 90%, with a very high Crest Factor (CF), which escalates losses in the distribution system. To find a tangible solution to these issues, a non-isolated PFC converter is employed at the input of the isolated converter; it is capable of improving the input power quality apart from regulating the dc voltage at its output. This output is fed to the isolated stage, which yields completely isolated and stiffly regulated multiple output voltages, the prime requirement of a computer PS. The operation of the proposed PS is evaluated under various operating conditions, and the results show improved performance, depicting nearly unity PF and low input current harmonics. A prototype of this PS is developed in a laboratory environment, and the recorded test results corroborate the power quality improvement observed in simulation under various operating conditions.
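For a purely distorted, in-phase current like the one described, the true power factor is the displacement factor cos(φ) times the distortion factor I₁/Irms, which follows directly from the total harmonic distortion (THD). A generic sketch; the harmonic amplitudes below are made-up values, not measurements from the paper.

```python
import math

def power_factor(i1_rms, harmonic_rms, cos_phi=1.0):
    """True power factor of a distorted current: the displacement factor
    cos(phi) times the distortion factor I1/Irms = 1/sqrt(1 + THD^2)."""
    thd = math.sqrt(sum(h * h for h in harmonic_rms)) / i1_rms
    distortion_factor = 1.0 / math.sqrt(1.0 + thd * thd)
    return cos_phi * distortion_factor, thd

# A heavily distorted current, broadly like the uncorrected supplies cited above
pf, thd = power_factor(1.0, [0.6, 0.4, 0.3, 0.2, 0.2, 0.2])
```

This is why current THD in the 75-90% range drags the true power factor well below unity even when the fundamental is in phase with the voltage.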
Comparison of brachial artery vasoreactivity in elite power athletes and age-matched controls.
Welsch, Michael A; Blalock, Paul; Credeur, Daniel P; Parish, Tracie R
2013-01-01
Elite endurance athletes typically have larger arteries contributing to greater skeletal muscle blood flow, oxygen and nutrient delivery and improved physical performance. Few studies have examined structural and functional properties of arteries in power athletes. To compare the size and vasoreactivity of the brachial artery of elite power athletes to age-matched controls. It was hypothesized brachial artery diameters of athletes would be larger, have less vasodilation in response to cuff occlusion, but more constriction after a cold pressor test than age-matched controls. Eight elite power athletes (age = 23 ± 2 years) and ten controls (age = 22 ± 1 yrs) were studied. High-resolution ultrasonography was used to assess brachial artery diameters at rest and following 5 minutes of forearm occlusion (Brachial Artery Flow Mediated Dilation = BAFMD) and a cold pressor test (CPT). Basic fitness measures included a handgrip test and 3-minute step test. Brachial arteries of athletes were larger (Athletes 5.39 ± 1.51 vs. 3.73 ± 0.71 mm, p<0.05), had greater vasodilatory (BAFMD%: Athletes: 8.21 ± 1.78 vs. 5.69 ± 1.56%) and constrictor (CPT %: Athletes: -2.95 ± 1.07 vs. -1.20 ± 0.48%) responses, compared to controls. Vascular operating range (VOR = Peak dilation+Peak Constriction) was also greater in athletes (VOR: Athletes: 0.55 ± 0.15 vs. 0.25 ± 0.18 mm, p<0.05). Athletes had superior handgrip strength (Athletes: 55.92 ± 17.06 vs. 36.77 ± 17.06 kg, p<0.05) but similar heart rate responses at peak (Athletes: 123 ± 16 vs. 130 ± 25 bpm, p>0.05) and 1 minute recovery (Athletes: 88 ± 21 vs. 98 ± 26 bpm, p>0.05) following the step test. Elite power athletes have larger brachial arteries, and greater vasoreactivity (greater vasodilatory and constrictor responses) than age-matched controls, contributing to a significantly greater VOR. 
These data extend the existence of an 'athlete's artery' as previously shown for elite endurance athletes to elite power athletes, and presents a hypothetical explanation for the functional significance of the 'power athlete's artery'.
Bunker, Alex; Magarkar, Aniket; Viitala, Tapani
2016-10-01
Combined experimental and computational studies of lipid membranes and liposomes, with the aim to attain mechanistic understanding, result in a synergy that makes possible the rational design of liposomal drug delivery system (LDS) based therapies. The LDS is the leading form of nanoscale drug delivery platform, an avenue in drug research, known as "nanomedicine", that holds the promise to transcend the current paradigm of drug development that has led to diminishing returns. Unfortunately this field of research has, so far, been far more successful in generating publications than new drug therapies. This partly results from the trial and error based methodologies used. We discuss experimental techniques capable of obtaining mechanistic insight into LDS structure and behavior. Insight obtained purely experimentally is, however, limited; computational modeling using molecular dynamics simulation can provide insight not otherwise available. We review computational research, that makes use of the multiscale modeling paradigm, simulating the phospholipid membrane with all atom resolution and the entire liposome with coarse grained models. We discuss in greater detail the computational modeling of liposome PEGylation. Overall, we wish to convey the power that lies in the combined use of experimental and computational methodologies; we hope to provide a roadmap for the rational design of LDS based therapies. Computational modeling is able to provide mechanistic insight that explains the context of experimental results and can also take the lead and inspire new directions for experimental research into LDS development. This article is part of a Special Issue entitled: Biosimulations edited by Ilpo Vattulainen and Tomasz Róg. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; AliShaykhian, Gholam
2010-01-01
We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is all the more relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized while computing speed touches petaflops. Processor speed is doubling every 18 months, bandwidth every 12 months, and hard disk space every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm, the reason being that a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization, given the steady increase of computing power, for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress that the quality of solution of exhaustive search - a deterministic method - is better than that of randomized search. In the 21st-century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms in improving the efficiency of solar cells - a real hot topic in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could save not only a large amount of time needed for experiments but could also validate theory against experimental results quickly.
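The exhaustive-versus-randomized comparison the authors describe can be illustrated on a small grid: the exhaustive pass is deterministic and visits every point, while the randomized pass samples only a fraction of them. The objective and grid below are illustrative assumptions, not taken from the paper.

```python
import itertools
import random

def exhaustive_min(f, grids):
    """Exhaustively evaluate f on the Cartesian product of per-variable
    grids and return the best point -- deterministic and exponential in
    the number of variables, but trivially parallel."""
    return min(itertools.product(*grids), key=f)

def random_min(f, grids, n_samples, seed=0):
    """Randomized counterpart: evaluate f on n_samples sampled grid points."""
    rng = random.Random(seed)
    pts = [tuple(rng.choice(g) for g in grids) for _ in range(n_samples)]
    return min(pts, key=f)

# Minimize a 2-variable quadratic on a 101 x 101 grid.
grid = [i / 100 for i in range(101)]  # 0.00 .. 1.00
f = lambda p: (p[0] - 0.30) ** 2 + (p[1] - 0.70) ** 2
best_ex = exhaustive_min(f, [grid, grid])
best_rand = random_min(f, [grid, grid], 200)
```

The exhaustive result is guaranteed at least as good as any randomized sample of the same grid, which is the quality argument the abstract makes.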
Gender Discrimination in the Allocation of Migrant Household Resources*
Antman, Francisca M.
2016-01-01
This paper considers the relationship between international migration and gender discrimination through the lens of decision-making power over intrahousehold resource allocation. The endogeneity of migration is addressed with a difference-in-differences style identification strategy and a model with household fixed effects. The results suggest that while a migrant household head is away, a greater share of resources is spent on girls relative to boys and his spouse commands greater decision-making power. Once the head returns home, however, a greater share of resources goes to boys and there is suggestive evidence of greater authority for the head of household. PMID:27546986
Computer optimization of reactor-thermoelectric space power systems
NASA Technical Reports Server (NTRS)
Maag, W. L.; Finnegan, P. M.; Fishbach, L. H.
1973-01-01
A computer simulation and optimization code that has been developed for nuclear space power systems is described. The results of using this code to analyze two reactor-thermoelectric systems are presented.
Alternative Fuels Data Center: Greater Portland Transit District Looks Forward with Natural Gas
Greater Portland Transit District, Maine, powers its transit vehicles with compressed natural gas.
Reducing cooling energy consumption in data centres and critical facilities
NASA Astrophysics Data System (ADS)
Cross, Gareth
Given the rise of our everyday reliance on computers in all walks of life, from checking train times to paying credit card bills online, the need for computational power is ever increasing. Beyond the ever-increasing performance of home Personal Computers (PCs), this reliance has given rise to a new phenomenon in the last 10 years: the data centre. Data centres contain vast arrays of IT cabinets loaded with servers that perform millions of computational operations every second. It is these data centres that allow us to continue our reliance on the internet and the PC. As more and more data centres become necessary, owing to the increase in computing processing power required for the everyday activities we all take for granted, the energy consumed by these data centres rises. Not only are more and more data centres being constructed daily, but operators are also looking at ways to squeeze more processing from their existing data centres. This in turn leads to greater heat outputs and therefore requires more cooling. Cooling data centres requires a sizeable energy input, indeed many megawatts per data centre site. Given the large amounts of money dependent on the successful operation of data centres, in particular those operated by financial institutions, the onus is predominantly on ensuring the data centres operate with no technical glitches rather than in an energy-conscious fashion. This report aims to investigate ways and means of reducing energy consumption within data centres without compromising the technology the data centres are designed to house. As well as discussing the individual merits of the technologies and their implementation, technical calculations will be undertaken where necessary to determine the levels of energy saving, if any, from each proposal. To enable comparison between the proposals, any design calculations within this report will be undertaken against a notional data facility.
This data facility will nominally be considered to require 1000 kW. Refer to Section 2.1 'Outline of Notional data Facility for Calculation Purposes' for details of the design conditions and constraints of the energy consumption calculations.
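Energy-saving calculations of the kind the report undertakes against the notional 1000 kW facility can be sketched as follows. The coefficient-of-performance figures, tariff, and function names are illustrative assumptions, not values from the report.

```python
def annual_cooling_saving(it_load_kw, base_cop, improved_cop,
                          tariff_per_kwh=0.10, hours=8760):
    """Annual cost saving from raising a chiller plant's coefficient of
    performance (COP) while rejecting a constant IT heat load."""
    base_kwh = it_load_kw / base_cop * hours
    improved_kwh = it_load_kw / improved_cop * hours
    return (base_kwh - improved_kwh) * tariff_per_kwh

# Notional 1000 kW facility; COP improved from 3.0 to 5.0 (assumed values)
saving = annual_cooling_saving(1000.0, 3.0, 5.0)
```

Even modest COP improvements compound over 8760 hours a year, which is why cooling is the usual first target for data centre energy reduction.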
Nienow, Tasha; MacDonald, Angus
2017-01-01
Abstract Background: Cognitive deficits contribute to the functional disability associated with schizophrenia. Cognitive training has shown promise as a method of intervention; however, there is considerable variability in the implementation of this approach. The aim of this study was to test the efficacy of a high dose of cognitive training that targeted working memory-related functions. Methods: A randomized, double blind, active placebo-controlled, clinical trial was conducted with 80 outpatients with schizophrenia (mean age 46.44 years, 25% female). Patients were randomized to either working memory-based cognitive training or a computer skills training course that taught computer applications. In both conditions, participants received an average of 3 hours of training weekly for 16 weeks. Cognitive and functional outcomes were assessed with the MATRICS Consensus Cognitive Battery, N-Back performance, 2 measures of functional capacity (UPSA and SSPA) and a measure of community functioning, the Social Functioning Scale. Results: An intent-to-treat analysis found that patients who received cognitive training demonstrated significantly greater change on a trained task (Word N-Back), F(78) = 21.69, P < .0001, and a novel version of a trained task (Picture N-Back) as compared to those in the comparison condition, F(78) = 13.59, P = .002. However, only very modest support was found for generalization of training gains. A trend for an interaction was found on the MCCB Attention Domain score, F(78) = 2.56, P = .12. Participants who received cognitive training demonstrated significantly improved performance, t(39) = 3.79, P = .001, while those in computer skills did not, t(39) = 1.07, P = .37. Conclusion: A well-powered, high-dose, working memory focused, computer-based, cognitive training protocol produced only a small effect in patients with schizophrenia. Results indicate the importance of measuring generalization from training tasks in cognitive remediation studies. 
Computer-based training was not an effective method of producing change in cognition in patients with schizophrenia.
A cross-sectional lower-body power profile of elite and subelite Australian football players.
Caia, Johnpaul; Doyle, Tim L A; Benson, Amanda C
2013-10-01
Australian football (AF) is a sport which requires a vast array of physiological qualities, including high levels of strength and power. However, the power characteristics of AF players, particularly at the subelite level have not been extensively studied with further investigation warranted to understand the power capabilities and training requirements of elite and subelite AF groups. Therefore, the aim of this investigation was to develop a lower-body power profile of elite and subelite AF players. Eighteen elite and 12 subelite AF players completed a 1 repetition maximum (1RM) squat test to determine maximal lower-body strength, and countermovement jump (CMJ) and squat jump (SJ) testing to assess lower-body muscular power performance. Maximal lower-body strength was not statistically different between groups (p > 0.05). Elite players produced greater levels of peak power for CMJ at loads of 0, 30 (p < 0.05), and 40% (p < 0.01) of 1RM in comparison to subelite players. Squat jump peak power was statistically different between groups at 0, 20, 30, and 40% (p < 0.01) of 1RM; with elite players producing greater power than their subelite counterparts at all measured loads for SJ. Findings from this investigation demonstrate that elite AF players are able to generate greater levels of lower-body power than subelite AF players, despite no significant differences existing in maximal lower-body strength or body mass. As lower-body power levels clearly differentiate elite and subelite AF players, emphasis may be placed on improving the power levels of subelite players, particularly those aspiring to reach the elite level.
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron
1992-01-01
Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.
Parallel processing for scientific computations
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.
1995-01-01
The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.
Unilateral jumps in different directions: a novel assessment of soccer-associated power?
Murtagh, Conall F; Vanrenterghem, Jos; O'Boyle, Andrew; Morgans, Ryland; Drust, Barry; Erskine, Robert M
2017-11-01
We aimed to determine whether countermovement jumps (CMJs; unilateral and bilateral) performed in different directions assessed independent lower-limb power qualities, and if unilateral CMJs would better differentiate between elite and non-elite soccer players than the bilateral vertical (BV) CMJ. Elite (n=23; age, 18.1±1.0 years) and non-elite (n=20; age, 22.3±2.7 years) soccer players performed three BV, unilateral vertical (UV), unilateral horizontal-forward (UH) and unilateral medial (UM) CMJs. Jump performance (height and projectile range), kinetic and kinematic variables from ground reaction forces, and peak activation levels of the vastus lateralis and biceps femoris (BF) muscles from surface electromyography, were compared between jumps and groups of players. Peak vertical power (V-power) was greater in BV (220.2±30.1 W/kg) compared to UV (144.1±16.2 W/kg), which was greater than UH (86.7±18.3 W/kg) and UM (85.5±13.5 W/kg) (all, p<0.05), but there was no difference between UH and UM (p=1.000). Peak BF EMG was greater in UH compared to all other CMJs (p≤0.001). V-power was greater in elite than non-elite for all CMJs (p≤0.032) except for BV (p=0.197). Elite achieved greater UH projectile range than non-elite (51.6±15.4 vs. 40.4±10.4 cm, p=0.009). We have shown that UH, UV and UM CMJs assess distinct lower-limb muscular power capabilities in soccer players. Furthermore, as elite players outperformed non-elite players during unilateral but not BV CMJs, unilateral CMJs in different directions should be included in soccer-specific muscular power assessment and talent identification protocols, rather than the BV CMJ. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Situation awareness and trust in computer-based procedures in nuclear power plant operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Throneburg, E. B.; Jones, J. M.
2006-07-01
Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)
The Ames Power Monitoring System
NASA Technical Reports Server (NTRS)
Osetinsky, Leonid; Wang, David
2003-01-01
The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and special-purpose software that collects and stores electrical power data from various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW, respectively). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for the center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS (see figure) includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS includes more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously, through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low power factor penalties, and to use historical system data to identify opportunities for additional energy savings.
The APMS also provides power engineers and electricians with the information they need to plan modifications in advance and perform day-to-day maintenance of the ARC electric-power distribution system.
Saving Energy and Money: A Lesson in Computer Power Management
ERIC Educational Resources Information Center
Lazaros, Edward J.; Hua, David
2012-01-01
In this activity, students will develop an understanding of the economic impact of technology by estimating the cost savings of power management strategies in the classroom. Students will learn how to adjust computer display settings to influence the impact that the computer has on the financial burden to the school. They will use mathematics to…
Computing the Power-Density Spectrum for an Engineering Model
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1982-01-01
Computer program for calculating power-density spectrum (PDS) from data base generated by Advanced Continuous Simulation Language (ACSL) uses algorithm that employs fast Fourier transform (FFT) to calculate PDS of a variable. Accomplished by first estimating autocovariance function of variable and then taking FFT of smoothed autocovariance function to obtain PDS. Fast-Fourier-transform technique conserves computer resources.
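The algorithm summarized above (estimate the autocovariance, smooth it, then take its FFT) can be sketched in Python. This is an illustrative reconstruction, not the original ACSL-based program; the function name, the Hann lag window, and the lag cutoff are all assumptions.

```python
import numpy as np

def power_density_spectrum(x, dt, max_lag=None):
    """Estimate a power-density spectrum as the abstract describes:
    estimate the autocovariance, smooth it with a lag window, then take
    its FFT.  Sketch only; window and cutoff choices are assumptions."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    if max_lag is None:
        max_lag = n // 4
    # Biased autocovariance estimate for lags 0..max_lag
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    # Smooth by applying a Hann lag window (one common choice)
    window = 0.5 * (1.0 + np.cos(np.pi * np.arange(max_lag + 1) / max_lag))
    acov_s = acov * window
    # Even extension of the smoothed autocovariance, then FFT -> real PDS
    ext = np.concatenate([acov_s, acov_s[-2:0:-1]])
    pds = np.real(np.fft.rfft(ext)) * dt
    freqs = np.fft.rfftfreq(len(ext), d=dt)
    return freqs, pds
```

For a sinusoidal input, the resulting spectrum peaks at the sinusoid's frequency, which is a quick sanity check on the smoothing and scaling.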
47 CFR 74.132 - Power limitations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Power limitations. 74.132 Section 74.132....132 Power limitations. The license for experimental broadcast stations will specify the maximum authorized power. The operating power shall not be greater than necessary to carry on the service and in no...
47 CFR 74.132 - Power limitations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 4 2011-10-01 2011-10-01 false Power limitations. 74.132 Section 74.132....132 Power limitations. The license for experimental broadcast stations will specify the maximum authorized power. The operating power shall not be greater than necessary to carry on the service and in no...
Computer program for afterheat temperature distribution for mobile nuclear power plant
NASA Technical Reports Server (NTRS)
Parker, W. G.; Vanbibber, L. E.
1972-01-01
ESATA computer program was developed to analyze thermal safety aspects of post-impacted mobile nuclear power plants. Program is written in FORTRAN 4 and designed for IBM 7094/7044 direct coupled system.
NASA Astrophysics Data System (ADS)
Aharonov, Dorit
In the last few years, theoretical study of quantum systems serving as computational devices has achieved tremendous progress. We now have strong theoretical evidence that quantum computers, if built, might be used as a dramatically powerful computational tool, capable of performing tasks which seem intractable for classical computers. This review sets out to tell the story of theoretical quantum computation. I leave out the developing topic of experimental realizations of the model, and neglect the closely related topics of quantum information and quantum communication. As a result of narrowing the scope of this paper, I hope it has gained the benefit of being an almost self-contained introduction to the exciting field of quantum computation. The review begins with background on theoretical computer science, Turing machines and Boolean circuits. In light of these models, I define quantum computers, and discuss the issue of universal quantum gates. Quantum algorithms, including Shor's factorization algorithm and Grover's algorithm for searching databases, are explained. I will devote much attention to understanding what the origins of the quantum computational power are, and what the limits of this power are. Finally, I describe the recent theoretical results which show that quantum computers maintain their complexity power even in the presence of noise, inaccuracies and finite precision. This question cannot be separated from that of quantum complexity, because any realistic model will inevitably be subjected to such inaccuracies. I tried to put all results in their context, asking what the implications for other issues in computer science and physics are. At the end of this review, I make these connections explicit by discussing the possible implications of quantum computation on fundamental physical questions such as the transition from quantum to classical physics.
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
Pearson, Simon N; Cronin, John B; Hume, Patria A; Slyfield, David
2009-09-01
Understanding how loading affects power production in resistance training is a key step in identifying the most optimal way of training muscular power - an essential trait in most sporting movements. Twelve elite male sailors with extensive strength-training experience participated in a comparison of kinematics and kinetics from the upper body musculature, with upper body push (bench press) and pull (bench pull) movements performed across loads of 10-100% of one repetition maximum (1RM). 1RM strength and force were shown to be greater in the bench press, while velocity and power outputs were greater for the bench pull across the range of loads. While power output was at a similar level for the two movements at a low load (10% 1RM), significantly greater power outputs were observed for the bench pull in comparison to the bench press with increased load. Power output (Pmax) was maximized at higher relative loads for both mean and peak power in the bench pull (78.6 +/- 5.7% and 70.4 +/- 5.4% of 1RM) compared to the bench press (53.3 +/- 1.7% and 49.7 +/- 4.4% of 1RM). Findings can most likely be attributed to differences in muscle architecture, which may have training implications for these muscles.
1984-12-01
During the 1980's we are seeing enhancement of breadth, power, and accessibility of computers in many dimensions: (1) powerful, costly, fragile mainframes... MEMORANDUM FOR THE CHAIRMAN, DEFENSE SCIENCE BOARD. SUBJECT: Defense Science Board Task Force on Supercomputer Applications. You are requested to...
Asymmetric Base-Bleed Effect on Aerospike Plume-Induced Base-Heating Environment
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Droege, Alan; D'Agostino, Mark; Lee, Young-Ching; Williams, Robert
2004-01-01
A computational heat transfer design methodology was developed to study the dual-engine linear aerospike plume-induced base-heating environment during one power-pack out, in ascent flight. It includes a three-dimensional, finite volume, viscous, chemically reacting, and pressure-based computational fluid dynamics formulation, a special base-bleed boundary condition, and a three-dimensional, finite volume, and spectral-line-based weighted-sum-of-gray-gases absorption computational radiation heat transfer formulation. A separate radiation model was used for diagnostic purposes. The computational methodology was systematically benchmarked. In this study, near-base radiative heat fluxes were computed, and they compared well with those measured during static linear aerospike engine tests. The base-heating environment of 18 trajectory points selected from three power-pack out scenarios was computed. The computed asymmetric base-heating physics were analyzed. The power-pack out condition has the most impact on convective base heating when it happens early in flight. The source of its impact comes from the asymmetric and reduced base bleed.
Optimizing the Placement of Burnable Poisons in PWRs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Serkan; Ivanov, Kostadin; Levine, Samuel
2005-07-15
The principal focus of this work is on developing a practical tool for designing the minimum amount of burnable poisons (BPs) for a pressurized water reactor using a typical Three Mile Island Unit 1 2-yr cycle as the reference design. The results of this study are to be applied to future reload designs. A new method, the Modified Power Shape Forced Diffusion (MPSFD) method, is presented that initially computes the BP cross section to force the power distribution into a desired shape. The method employs a simple formula that expresses the BP cross section as a function of the difference between the calculated radial power distributions (RPDs) and the limit set for the maximum RPD. This method places BPs into all fresh fuel assemblies (FAs) having an RPD greater than the limit. The MPSFD method then reduces the BP content by reducing the BPs in fresh FAs with the lowest RPDs. Finally, the minimum BP content is attained via a heuristic fine-tuning procedure. This new BP design program has been automated by incorporating the new MPSFD method in conjunction with the heuristic fine-tuning program. The program has automatically produced excellent results for the reference core, and has the potential to reduce fuel costs and save manpower.
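The core of the MPSFD idea, a BP cross section driven by the excess of each assembly's calculated RPD over the limit, can be sketched in a few lines. The gain constant and the clipping at zero are our assumptions for illustration; the abstract does not give the published formula's constants.

```python
import numpy as np

def mpsfd_step(rpd, bp_sigma, rpd_limit, gain=0.1):
    """One iteration in the spirit of the Modified Power Shape Forced
    Diffusion (MPSFD) method: increase the burnable-poison (BP) cross
    section in every fresh fuel assembly whose relative power density
    (RPD) exceeds the limit, in proportion to the excess.  The 'gain'
    and the non-negativity clip are illustrative assumptions."""
    rpd = np.asarray(rpd, dtype=float)
    excess = np.maximum(rpd - rpd_limit, 0.0)   # only assemblies above the limit
    return np.maximum(bp_sigma + gain * excess, 0.0)
```

In a full design loop, this step would alternate with a core power calculation until all RPDs fall below the limit, after which the fine-tuning phase would pare the BP content back down.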
Papadopoulos, Anthony
2009-01-01
The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
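The abstract's central recommendation, treating the power-law degree as a parameter rather than a constant, can be illustrated with a small fit of M(U) = SMR + c·U^d. The grid search and the symbol names below are illustrative assumptions, not the paper's estimation procedure.

```python
import numpy as np

def fit_power_law_metabolism(speed, metab, degrees=np.linspace(1.0, 5.0, 81)):
    """Fit the activity-metabolism model M(U) = SMR + c * U**d with the
    degree d treated as a free parameter (>= 1), as the abstract argues.
    For each trial degree, SMR and c follow from linear least squares;
    the best degree minimizes the residual sum of squares."""
    best = None
    for d in degrees:
        A = np.column_stack([np.ones_like(speed), speed ** d])
        coef, *_ = np.linalg.lstsq(A, metab, rcond=None)
        rss = np.sum((A @ coef - metab) ** 2)
        if best is None or rss < best[0]:
            best = (rss, d, coef[0], coef[1])
    _, d, smr, c = best
    return d, smr, c
```

Fixing d = 1, as in the first-degree model the abstract criticizes, would force the fitted SMR and drag-power index to absorb the mismatch, which is exactly the parameter bias at issue.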
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hess, Mark Harry; Hutsel, Brian Thomas; Jennings, Christopher Ashley
Recent Magnetized Liner Inertial Fusion experiments at the Sandia National Laboratories Z pulsed power facility have featured a PDV (Photonic Doppler Velocimetry) diagnostic in the final power feed section for measuring load current. In this paper, we report on an anomalous pressure that is detected on this PDV diagnostic very early in time during the current ramp. Early time load currents that are greater than both B-dot upstream current measurements and existing Z machine circuit models by at least 1 MA would be necessary to describe the measured early time velocity of the PDV flyer. This leads us to infer that the pressure producing the early time PDV flyer motion cannot be attributed to the magnetic pressure of the load current but rather to an anomalous pressure. Using the MHD code ALEGRA, we are able to compute a time-dependent anomalous pressure function, which when added to the magnetic pressure of the load current, yields simulated flyer velocities that are in excellent agreement with the PDV measurement. As a result, we also provide plausible explanations for what could be the origin of the anomalous pressure.
Reionization and the cosmic microwave background in an open universe
NASA Technical Reports Server (NTRS)
Persi, Fred M.
1995-01-01
If the universe was reionized at high redshift (z greater than or approximately equal to 30) or never recombined, then photon-electron scattering can erase fluctuations in the cosmic microwave background at scales less than or approximately equal to 1 deg. Peculiar motion at the surface of last scattering will then have given rise to new anisotropy at the 1 arcmin level through the Vishniac effect. Here the observed fluctuations in galaxy counts are extrapolated to high redshifts using linear theory, and the expected anisotropy is computed. The predicted level of anisotropies is a function of Omega_0 and the ratio of the density in ionized baryons to the critical density, and is shown to depend strongly on the large- and small-scale power. It is not possible to make general statements about the viability of all reionized models based on current observations, but it is possible to rule out specific models for structure formation, particularly those with high baryonic content or small-scale power. The induced fluctuations are shown to scale with cosmological parameters and optical depth.
Announcing Supercomputer Summit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wells, Jack; Bland, Buddy; Nichols, Jeff
Summit is the next leap in leadership-class computing systems for open science. With Summit we will be able to address, with greater complexity and higher fidelity, questions concerning who we are, our place on earth, and in our universe. Summit will deliver more than five times the computational performance of Titan’s 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017. Like Titan, Summit will have a hybrid architecture, and each node will contain multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA’s high-speed NVLink. Each node will have over half a terabyte of coherent memory (high bandwidth memory + DDR4) addressable by all CPUs and GPUs, plus 800GB of non-volatile RAM that can be used as a burst buffer or as extended memory. To provide a high rate of I/O throughput, the nodes will be connected in a non-blocking fat-tree using a dual-rail Mellanox EDR InfiniBand interconnect. Upon completion, Summit will allow researchers in all fields of science unprecedented access to solving some of the world’s most pressing challenges.
NASA Astrophysics Data System (ADS)
Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.
2003-09-01
In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
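The interconnection scheme described above, with subsystem simulations exchanging interface variables only at fixed communication intervals, can be illustrated with a toy sketch. The two coupled first-order subsystems and all constants below are invented for illustration; the paper's aircraft power system component models (in MATLAB/Simulink, EASY5, and ACSL) are far more detailed.

```python
def cosimulate(T=1.0, comm_dt=0.01, sub_dt=0.001):
    """Toy distributed co-simulation: two subsystems integrate
    independently with their own small internal step, and exchange
    interface variables only at each communication interval.  The
    dynamics here are invented purely to show the exchange pattern."""
    x1, x2 = 1.0, 0.0          # subsystem states
    u12, u21 = x1, x2          # interface variables, held between exchanges
    t = 0.0
    while t < T - 1e-12:
        # Each subsystem advances one communication interval on its own,
        # seeing only the interface values from the last exchange.
        for _ in range(int(round(comm_dt / sub_dt))):
            x1 += sub_dt * (-2.0 * x1 + u21)    # subsystem 1 dynamics
            x2 += sub_dt * (-1.0 * x2 + u12)    # subsystem 2 dynamics
        u12, u21 = x1, x2                        # exchange at the interval
        t += comm_dt
    return x1, x2
```

The choice of `comm_dt` is the trade-off the paper discusses: a longer interval reduces communication overhead (and enables the reported speedup from distribution) but holds interface variables stale for longer, degrading coupling accuracy.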
NASA Astrophysics Data System (ADS)
Ellerman, David
2014-03-01
In models of QM over finite fields (e.g., Schumacher's ``modal quantum theory'' MQT), one finite field stands out, Z2, since Z2 vectors represent sets. QM (finite-dimensional) mathematics can be transported to sets resulting in quantum mechanics over sets or QM/sets. This gives a full probability calculus (unlike MQT with only zero-one modalities) that leads to a fulsome theory of QM/sets including ``logical'' models of the double-slit experiment, Bell's Theorem, QIT, and QC. In QC over Z2 (where gates are non-singular matrices as in MQT), a simple quantum algorithm (one gate plus one function evaluation) solves the Parity SAT problem (finding the parity of the sum of all values of an n-ary Boolean function). Classically, the Parity SAT problem requires 2^n function evaluations in contrast to the one function evaluation required in the quantum algorithm. This is quantum speedup but with all the calculations over Z2 just like classical computing. This shows definitively that the source of quantum speedup is not in the greater power of computing over the complex numbers, and confirms the idea that the source is in superposition.
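The classical cost cited above can be checked directly: finding the parity of the sum of all values of an n-ary Boolean function by brute force means evaluating the function on all 2^n inputs. A minimal sketch (the helper name is ours):

```python
from itertools import product

def parity_sat_classical(f, n):
    """Classical brute-force Parity SAT: evaluate the n-ary Boolean
    function f on all 2**n bit tuples and return the parity of the sum.
    The single-evaluation quantum algorithm over Z2 described in the
    abstract has no classical counterpart to this exhaustive loop."""
    return sum(f(bits) for bits in product((0, 1), repeat=n)) % 2

# Example: for XOR of three bits, 4 of the 8 inputs evaluate to 1,
# so the parity of the sum is 0.
```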
NASA Astrophysics Data System (ADS)
Lindsey, Rebecca; Goldman, Nir; Fried, Laurence
2017-06-01
Atomistic modeling of chemistry at extreme conditions remains a challenge, despite continuing advances in computing resources and simulation tools. While first principles methods provide a powerful predictive tool, the time and length scales associated with chemistry at extreme conditions (ns and μm, respectively) largely preclude extension of such models to molecular dynamics. In this work, we develop a simulation approach that retains the accuracy of density functional theory (DFT) while decreasing computational effort by several orders of magnitude. We generate n-body descriptions for atomic interactions by mapping forces arising from short DFT trajectories onto simple Chebyshev polynomial series. We examine the importance of including interactions beyond 2-body, model transferability to different state points, and approaches to ensure smooth and reasonable model shape outside of the distance domain sampled by the DFT training set. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
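A minimal sketch of the fitting idea, mapping force-versus-distance data onto a Chebyshev series over the sampled distance domain, is shown below. The degree, the linear rescaling, and the plain least-squares fit are illustrative assumptions; the authors' actual n-body fitting procedure is more involved.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_pair_force(r, f, degree=8, r_min=None, r_max=None):
    """Fit a 2-body force-vs-distance curve with a Chebyshev series,
    in the spirit of mapping DFT forces onto Chebyshev polynomials.
    Distances are rescaled to [-1, 1], the natural Chebyshev domain;
    outside [r_min, r_max] the model is unconstrained, which is the
    extrapolation problem the abstract warns about."""
    r = np.asarray(r, float)
    f = np.asarray(f, float)
    r_min = r.min() if r_min is None else r_min
    r_max = r.max() if r_max is None else r_max
    s = 2.0 * (r - r_min) / (r_max - r_min) - 1.0   # map to [-1, 1]
    coefs = C.chebfit(s, f, degree)

    def force(rq):
        sq = 2.0 * (np.asarray(rq, float) - r_min) / (r_max - r_min) - 1.0
        return C.chebval(sq, coefs)

    return force
```

Higher-order (3-body and beyond) terms would enter as products of such polynomials over multiple interatomic distances, which is where the question of "greater than 2-body" importance raised above comes in.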
Using a cloud to replenish parched groundwater modeling efforts.
Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B
2010-01-01
Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.
LQG/LTR optimal attitude control of small flexible spacecraft using free-free boundary conditions
NASA Astrophysics Data System (ADS)
Fulton, Joseph M.
Due to the volume and power limitations of a small satellite, careful consideration must be taken while designing an attitude control system for 3-axis stabilization. Placing redundancy in the system proves difficult, and utilizing power-hungry, high-accuracy, active actuators is not a viable option. Thus, it is customary to find dependable, passive actuators used in conjunction with small-scale active control components. This document describes the application of Elastic Memory Composite materials in the construction of a flexible spacecraft appendage, such as a gravity gradient boom. Assumed modes methods are used with Finite Element Modeling information to obtain the equations of motion for the system while assuming free-free boundary conditions. A discussion is provided to illustrate how cantilever mode shapes are not always the best assumption when modeling small flexible spacecraft. A key point of interest is that the first resonant modes may need to be included in the design plant even though their frequencies are more than an order of magnitude above the controller's crossover frequency. LQG/LTR optimal control techniques are implemented to compute attitude control gains, while controller robustness considerations determine appropriate reduced-order controllers and which flexible modes to include in the design model. Key satellite designer concerns in the areas of computer processor sizing, material uncertainty impacts on the system model, and system performance variations resulting from appendage length modifications are addressed.
Staley, James R; Jones, Edmund; Kaptoge, Stephen; Butterworth, Adam S; Sweeting, Michael J; Wood, Angela M; Howson, Joanna M M
2017-06-01
Logistic regression is often used instead of Cox regression to analyse genome-wide association studies (GWAS) of single-nucleotide polymorphisms (SNPs) and disease outcomes with cohort and case-cohort designs, as it is less computationally expensive. Although Cox and logistic regression models have been compared previously in cohort studies, this work does not completely cover the GWAS setting nor extend to the case-cohort study design. Here, we evaluated Cox and logistic regression applied to cohort and case-cohort genetic association studies using simulated data and genetic data from the EPIC-CVD study. In the cohort setting, there was a modest improvement in power to detect SNP-disease associations using Cox regression compared with logistic regression, which increased as the disease incidence increased. In contrast, logistic regression had more power than (Prentice weighted) Cox regression in the case-cohort setting. Logistic regression yielded inflated effect estimates (assuming the hazard ratio is the underlying measure of association) for both study designs, especially for SNPs with greater effect on disease. Given logistic regression is substantially more computationally efficient than Cox regression in both settings, we propose a two-step approach to GWAS in cohort and case-cohort studies. First to analyse all SNPs with logistic regression to identify associated variants below a pre-defined P-value threshold, and second to fit Cox regression (appropriately weighted in case-cohort studies) to those identified SNPs to ensure accurate estimation of association with disease.
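Step one of the proposed two-step approach can be sketched as a per-SNP logistic screen; step two, refitting the survivors with (appropriately weighted) Cox regression, is omitted here. Everything below (single-SNP models with an intercept, Newton-Raphson fitting, Wald P-values) is an illustrative assumption; real GWAS pipelines adjust for covariates and use dedicated software.

```python
import numpy as np
from math import erfc, sqrt

def logistic_screen(genotypes, disease, p_threshold=5e-8):
    """Screen each SNP (column of 'genotypes') with a univariate
    logistic regression of disease status on genotype, returning the
    indices of SNPs whose Wald P-value falls below the threshold.
    Sketch of the first step of the two-step approach described above."""
    y = np.asarray(disease, float)
    keep = []
    for j in range(genotypes.shape[1]):
        X = np.column_stack([np.ones_like(y), genotypes[:, j].astype(float)])
        beta = np.zeros(2)
        for _ in range(25):                      # Newton-Raphson iterations
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            W = p * (1.0 - p)
            H = X.T @ (X * W[:, None])           # Fisher information
            beta += np.linalg.solve(H, X.T @ (y - p))
        se = sqrt(np.linalg.inv(H)[1, 1])
        z = beta[1] / se
        pval = erfc(abs(z) / sqrt(2.0))          # two-sided Wald P-value
        if pval < p_threshold:
            keep.append(j)
    return keep
```

The SNPs this screen retains would then be passed to a Cox model (Prentice-weighted in the case-cohort design) so that the final hazard-ratio estimates avoid the inflation the abstract reports for logistic effect estimates.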
Lower-extremity biomechanics during forward and lateral stepping activities in older adults
Wang, Man-Ying; Flanagan, Sean; Song, Joo-Eun; Greendale, Gail A.; Salem, George J.
2012-01-01
Objective: To characterize the lower-extremity biomechanics associated with stepping activities in older adults. Design: Repeated-measures comparison of kinematics and kinetics associated with forward step-up and lateral step-up activities. Background: Biomechanical analysis may be used to assess the effectiveness of various ‘in-home activities’ in targeting appropriate muscle groups and preserving functional strength and power in elders. Methods: Data were analyzed from 21 participants (mean 74.7 yr (standard deviation, 4.4 yr)) who performed the forward and lateral step-up activities while instrumented for biomechanical analysis. Motion analysis equipment, inverse dynamics equations, and repeated-measures ANOVAs were used to contrast the maximum joint angles, peak net joint moments, angular impulse, work, and power associated with the activities. Results: The lateral step-up resulted in greater maximum knee flexion (P < 0.001) and ankle dorsiflexion angles (P < 0.01). Peak joint moments were similar between exercises. The forward step-up generated greater peak hip power (P < 0.05) and total work (P < 0.001); whereas, the lateral step-up generated greater impulse (P < 0.05), work (P < 0.01), and power (P < 0.05) at the knee and ankle. Conclusions: In older adults, the forward step-up places greater demand on the hip extensors, while the lateral step-up places greater demand on the knee extensors and ankle plantar flexors. PMID:12620784
Biologically inspired collision avoidance system for unmanned vehicles
NASA Astrophysics Data System (ADS)
Ortiz, Fernando E.; Graham, Brett; Spagnoli, Kyle; Kelmelis, Eric J.
2009-05-01
In this project, we collaborate with researchers in the neuroscience department at the University of Delaware to develop a Field Programmable Gate Array (FPGA)-based embedded computer, inspired by the brains of small vertebrates (fish). The mechanisms of object detection and avoidance in fish have been extensively studied by our Delaware collaborators. The midbrain optic tectum is a biological multimodal navigation controller capable of processing input from all senses that convey spatial information, including vision, audition, touch, and the lateral line (water-current sensing in fish). Unfortunately, computational complexity makes these models too slow for use in real-time applications. These simulations are run offline on state-of-the-art desktop computers, presenting a gap between the application and the target platform: a low-power embedded device. EM Photonics has expertise in developing high-performance computers based on commodity platforms such as graphics cards (GPUs) and FPGAs. FPGAs offer (1) high computational power, low power consumption, and small footprint (in line with typical autonomous-vehicle constraints), and (2) the ability to implement massively parallel computational architectures, which can be leveraged to closely emulate biological systems. By combining UD's brain-modeling algorithms with the power of FPGAs, this computer enables autonomous navigation in complex environments, and further types of onboard neural processing in future applications.
Wei, Yawei; Venayagamoorthy, Ganesh Kumar
2017-09-01
To prevent a large interconnected power system from a cascading failure, brownout, or even blackout, grid operators require access to faster-than-real-time information to make appropriate just-in-time control decisions. However, the communication and computational limitations of the currently used supervisory control and data acquisition (SCADA) system mean it can only deliver delayed information. The deployment of synchrophasor measurement devices, in contrast, makes it possible to capture and visualize, in near-real-time, grid operational data with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing units of the CCN framework make it particularly flexible for customization to a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multi-layer perceptron network (CCMLPN), for the purpose of providing multi-timescale frequency predictions ranging from 16.67 ms to 2 s. The CCGNN and CCMLPN were then implemented on two power systems of different scales, one of which included a large photovoltaic plant. A real-time power system simulator in the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, was then used to derive typical FSI results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Power combining in an array of microwave power rectifiers
NASA Technical Reports Server (NTRS)
Gutmann, R. J.; Borrego, J. M.
1979-01-01
This work analyzes the resultant efficiency degradation when identical rectifiers operate at different RF power levels as caused by the power beam taper. Both a closed-form analytical circuit model and a detailed computer-simulation model are used to obtain the output dc load line of the rectifier. The efficiency degradation is nearly identical with series and parallel combining, and the closed-form analytical model provides results which are similar to the detailed computer-simulation model.
Automated Measurement of Patient-Specific Tibial Slopes from MRI
Amerinatanzi, Amirhesam; Summers, Rodney K.; Ahmadi, Kaveh; Goel, Vijay K.; Hewett, Timothy E.; Nyman, Edward
2017-01-01
Background: Multi-planar proximal tibial slopes may be associated with increased likelihood of osteoarthritis and anterior cruciate ligament injury, due in part to their role in checking the anterior-posterior stability of the knee. Established methods suffer repeatability limitations and lack the computational efficiency needed for intuitive clinical adoption. The aims of this study were to develop a novel automated approach and to compare the repeatability and computational efficiency of the approach against previously established methods. Methods: Tibial slope geometries were obtained via MRI and measured using an automated Matlab-based approach. Data were compared for repeatability and evaluated for computational efficiency. Results: Mean lateral tibial slope (LTS) was greater for females than males (7.2° vs. 1.66°). Mean LTS in the lateral concavity zone was also greater for females (7.8° vs. 4.2°). Mean medial tibial slope (MTS) was greater for females (9.3° vs. 4.6°), and along the medial concavity zone female subjects likewise demonstrated greater MTS. Conclusion: The automated method was more repeatable and computationally efficient than previously identified methods and may aid clinical assessment of knee injury risk, inform surgical planning, and support implant design efforts. PMID:28952547
Mobile high-performance computing (HPC) for synthetic aperture radar signal processing
NASA Astrophysics Data System (ADS)
Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen
2018-04-01
The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints, and the required processing power is presently difficult to achieve even by aggregating emerging heterogeneous many-core platforms consisting of CPU, Field Programmable Gate Array, and graphics processor cores, which are themselves constrained by power and performance. To address these challenges, we performed a Synthetic Aperture Radar case study of Automatic Target Recognition (ATR) using Deep Neural Networks (DNNs). DNN models are typically trained on GPUs with gigabytes of external memory and rely heavily on 32-bit floating-point operations; as a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we propose a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. The framework applies promising DNN compression techniques, including pruning and weight quantization, while also targeting processor features common to modern low-power devices. Following this methodology produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
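The two compression techniques named in the abstract, pruning and weight quantization, can be illustrated generically. This is a minimal numpy sketch under common definitions (magnitude pruning and symmetric 8-bit quantization), not the paper's actual framework; ties at the pruning threshold may zero slightly more weights than requested.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out roughly the smallest-magnitude fraction `sparsity` of weights."""
    w = weights.copy()
    k = int(round(sparsity * w.size))
    if k > 0:
        threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w

def quantize_int8(weights):
    """Symmetric 8-bit quantization: map floats to int8 with a single scale."""
    scale = np.abs(weights).max() / 127.0        # assumes a nonzero tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale
```

The reconstruction error of the quantizer is bounded by half the scale step, which is the property that makes 8-bit inference accurate enough for many classification workloads.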
An Experimental Study of a Pulsed Electromagnetic Plasma Accelerator
NASA Technical Reports Server (NTRS)
Thio, Y. C. Francis; Eskridge, Richard; Lee, Mike; Smith, James; Martin, Adam; Markusic, Tom E.; Cassibry, Jason T.; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
Experiments are being performed on the NASA Marshall Space Flight Center (MSFC) pulsed electromagnetic plasma accelerator (PEPA-0). Data produced from the experiments provide an opportunity to further understand the plasma dynamics in these thrusters via detailed computational modeling. The detailed and accurate understanding of the plasma dynamics in these devices holds the key towards extending their capabilities in a number of applications, including their applications as high power (greater than 1 MW) thrusters, and their use for producing high-velocity, uniform plasma jets for experimental purposes. For this study, the 2-D MHD modeling code, MACH2, is used to provide detailed interpretation of the experimental data. At the same time, a 0-D physics model of the plasma initial phase is developed to guide our 2-D modeling studies.
2-D Magnetohydrodynamic Modeling of A Pulsed Plasma Thruster
NASA Technical Reports Server (NTRS)
Thio, Y. C. Francis; Cassibry, J. T.; Wu, S. T.; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
Experiments are being performed on the NASA Marshall Space Flight Center (MSFC) MK-1 pulsed plasma thruster. Data produced from the experiments provide an opportunity to further understand the plasma dynamics in these thrusters via detailed computational modeling. The detailed and accurate understanding of the plasma dynamics in these devices holds the key towards extending their capabilities in a number of applications, including their applications as high power (greater than 1 MW) thrusters, and their use for producing high-velocity, uniform plasma jets for experimental purposes. For this study, the 2-D MHD modeling code, MACH2, is used to provide detailed interpretation of the experimental data. At the same time, a 0-D physics model of the plasma initial phase is developed to guide our 2-D modeling studies.
Strange attractors in weakly turbulent Couette-Taylor flow
NASA Technical Reports Server (NTRS)
Brandstater, A.; Swinney, Harry L.
1987-01-01
An experiment is conducted on the transition from quasi-periodic to weakly turbulent flow of a fluid contained between concentric cylinders with the inner cylinder rotating and the outer cylinder at rest. Power spectra, phase-space portraits, and circle maps obtained from velocity time-series data indicate that the nonperiodic behavior observed is deterministic, that is, it is described by strange attractors. Various problems that arise in computing the dimension of strange attractors constructed from experimental data are discussed and it is shown that these problems impose severe requirements on the quantity and accuracy of data necessary for determining dimensions greater than about 5. In the present experiment the attractor dimension increases from 2 at the onset of turbulence to about 4 at a Reynolds number 50-percent above the onset of turbulence.
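Attractor dimensions of the kind discussed above are commonly estimated with the Grassberger-Procaccia correlation-sum method: count the fraction of point pairs within radius r and read the dimension off the slope of log C(r) versus log r. The sketch below is a generic illustration of that method (not the authors' code) and, as the abstract warns, it is sensitive to data quantity and radius range.

```python
import numpy as np

def correlation_dimension(points, radii):
    """Grassberger-Procaccia estimate: slope of log C(r) vs. log r, where
    C(r) is the fraction of point pairs closer than r."""
    x = np.asarray(points, dtype=float)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)   # pairwise distances
    iu = np.triu_indices(len(x), k=1)
    pair_d = d[iu]
    c = np.array([(pair_d < r).mean() for r in radii])           # correlation sums
    mask = c > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return slope
```

For experimental time series the points would first be built by time-delay embedding of the velocity signal; the severe data requirements mentioned in the abstract show up here as the need for many pairs at small r.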
Details of insect wing design and deformation enhance aerodynamic function and flight efficiency.
Young, John; Walker, Simon M; Bomphrey, Richard J; Taylor, Graham K; Thomas, Adrian L R
2009-09-18
Insect wings are complex structures that deform dramatically in flight. We analyzed the aerodynamic consequences of wing deformation in locusts using a three-dimensional computational fluid dynamics simulation based on detailed wing kinematics. We validated the simulation against smoke visualizations and digital particle image velocimetry on real locusts. We then used the validated model to explore the effects of wing topography and deformation, first by removing camber while keeping the same time-varying twist distribution, and second by removing camber and spanwise twist. The full-fidelity model achieved greater power economy than the uncambered model, which performed better than the untwisted model, showing that the details of insect wing topography and deformation are important aerodynamically. Such details are likely to be important in engineering applications of flapping flight.
NASA Technical Reports Server (NTRS)
Geiselhart, Karl A.; Ozoroski, Lori P.; Fenbert, James W.; Shields, Elwood W.; Li, Wu
2011-01-01
This paper documents the development of a conceptual-level integrated process for the design and analysis of efficient and environmentally acceptable supersonic aircraft. Overcoming the technical challenges of this goal requires a conceptual design capability that lets users examine the integrated solution across all disciplines and facilitates multidisciplinary design, analysis, and optimization on a scale greater than previously achieved. The described capability is both an interactive design environment and a high-powered optimization system, with a unique blend of low-, mixed-, and high-fidelity engineering tools combined in the software integration framework ModelCenter. The various modules are described and the capabilities of the system are demonstrated. Current limitations and proposed future enhancements are also discussed.
Meeting design challenges of ultralow-power system-on-chip technology.
Morris, Steve
2004-11-01
New-generation battery-powered products are required to provide increasingly greater performance. This article examines technology solutions and design techniques that can be employed to achieve ultralow-power medical devices.
Phosphoric acid fuel cell power plant system performance model and computer program
NASA Technical Reports Server (NTRS)
Alkasab, K. A.; Lu, C. Y.
1984-01-01
A FORTRAN computer program was developed for analyzing the performance of phosphoric acid fuel cell power plant systems. Energy, mass, and electrochemical analyses of the reformer, the shift converters, the heat exchangers, and the fuel cell stack were combined to develop a mathematical model of the power plant for both atmospheric and pressurized conditions, and for several commercial fuels.
Lower extremity joint kinetics and energetics during backward running.
DeVita, P; Stribling, J
1991-05-01
The purpose of this study was to measure the lower extremity joint moments of force and joint muscle powers used to perform backward running. Ten trials of high-speed (100 Hz) sagittal plane film records and ground reaction force data (1000 Hz) describing backward running were obtained from each of five male runners. Fifteen trials of forward running data were obtained from one of these subjects. Inverse dynamics were performed on these data to obtain the joint moments and powers, which were normalized to body mass to make between-subject comparisons. Backward running hip moment and power patterns were similar in magnitude and opposite in direction to forward running curves and produced more positive work in stance. The functional roles of the knee and ankle muscles were interchanged between backward and forward running. The knee extensors were the primary source of propulsion in backward running owing to greater moment and power output (peak moment = 3.60 N·m·kg⁻¹; peak power = 12.40 W·kg⁻¹) compared with the ankle (peak moment = 1.92 N·m·kg⁻¹; peak power = 7.05 W·kg⁻¹). The ankle plantarflexors were the primary shock absorbers, producing the greatest negative power (peak = −6.77 W·kg⁻¹) during early stance. Forward running had greater ankle moment and power output for propulsion and greater knee negative power for impact attenuation. The large knee moment in backward running supported previous findings indicating that backward running training leads to increased knee extensor torque capabilities.
What Can You Learn from a Cell Phone? Almost Anything!
ERIC Educational Resources Information Center
Prensky, Marc
2005-01-01
Today's high-end cell phones have the computing power of a mid-1990s personal computer (PC)--while consuming only one one-hundredth of the energy. Even the simplest, voice-only phones have more complex and powerful chips than the 1969 on-board computer that landed a spaceship on the moon. In the United States, it is almost universally acknowledged…
Ambiguity resolution for satellite Doppler positioning systems
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Marini, J. W.
1977-01-01
A test for ambiguity resolution was derived which was the most powerful in the sense that it maximized the probability of a correct decision. When systematic error sources were properly included in the least squares reduction process to yield an optimal solution, the test reduced to choosing the solution which provided the smaller valuation of the least squares loss function. When systematic error sources were ignored in the least squares reduction, the most powerful test was a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudo-inverse of a reduced rank square matrix. A formula is presented for computing the power of the most powerful test. A numerical example is included in which the power of the test is computed for a situation which may occur during an actual satellite aided search and rescue mission.
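When systematic errors are modelled in the reduction, the abstract's most powerful test reduces to picking the candidate solution with the smaller least-squares loss. That decision rule is simple enough to sketch directly; the function name and toy matrices below are illustrative, not from the paper.

```python
import numpy as np

def resolve_ambiguity(A, b, candidates):
    """Pick the candidate state vector x with the smaller least-squares loss
    ||A x - b||^2 -- the most-powerful-test rule when systematic error
    sources are properly included in the reduction."""
    losses = [float(np.sum((A @ x - b) ** 2)) for x in candidates]
    i = int(np.argmin(losses))
    return i, losses
```

The quadratic-form variant used when systematic errors are ignored would instead weight the residual comparison by a pseudo-inverse matrix, but the selection logic is the same: smaller valuation wins.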
NASA Astrophysics Data System (ADS)
Cervone, G.; Clemente-Harding, L.; Alessandrini, S.; Delle Monache, L.
2016-12-01
A methodology based on Artificial Neural Networks (ANN) and an Analog Ensemble (AnEn) is presented to generate 72-hour deterministic and probabilistic forecasts of the power generated by photovoltaic (PV) power plants, using input from a numerical weather prediction model and computed astronomical variables. ANN and AnEn are used individually and in combination to generate forecasts for three solar power plants located in Italy. The computational scalability of the proposed solution is tested using synthetic data simulating 4,450 PV power stations. The NCAR Yellowstone supercomputer is employed to test the parallel implementation of the proposed solution, ranging from 1 node (32 cores) to 4,450 nodes (141,140 cores). Results show that the combined AnEn + ANN solution yields the best results, and that the proposed solution is well suited for massive-scale computation.
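The Analog Ensemble idea is straightforward to sketch: find the historical NWP forecasts most similar to the current forecast and use the matching observed power values as an ensemble. The following is a minimal, generic illustration (Euclidean similarity over unweighted forecast variables), not the operational AnEn implementation.

```python
import numpy as np

def analog_ensemble_forecast(hist_forecasts, hist_observations,
                             current_forecast, n_analogs=5):
    """AnEn sketch: the historical forecasts nearest to the current forecast
    select their paired observations; the ensemble mean gives a deterministic
    forecast and the spread a measure of uncertainty."""
    dist = np.linalg.norm(hist_forecasts - current_forecast, axis=1)
    idx = np.argsort(dist)[:n_analogs]
    ensemble = hist_observations[idx]
    return ensemble.mean(), ensemble
```

Because each forecast lead time and station is handled independently, the search parallelizes trivially, which is what makes the massive-scale runs on Yellowstone feasible.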
Future computing platforms for science in a power constrained era
Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; ...
2015-12-23
Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market, including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. In conclusion, we evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).
Statistical power as a function of Cronbach alpha of instrument questionnaire items.
Heo, Moonseong; Kim, Namhee; Faith, Myles S
2015-10-14
In countless clinical trials, measurements of outcomes rely on instrument questionnaire items, which often suffer measurement error problems that in turn affect the statistical power of study designs. The Cronbach alpha, or coefficient alpha, here denoted by C(α), can be used as a measure of the internal consistency of parallel instrument items developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on C(α) have been lacking for various study designs. We formulate a statistical model for parallel items to derive power functions as a function of C(α) under several study designs. To this end, we assume a fixed true-score variance, as opposed to the usual fixed total variance. That assumption is critical and practically relevant to show that smaller measurement errors are associated with higher inter-item correlations, and thus that greater C(α) is associated with greater statistical power. We compare the derived theoretical statistical power with the empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. It is shown that C(α) equals the test-retest correlation of the scale scores of parallel items, which enables testing the significance of C(α). Closed-form power functions and sample size determination formulas are derived in terms of C(α) for all of the aforementioned comparisons. Power functions are shown to be an increasing function of C(α), regardless of the comparison of interest. The derived power functions are well validated by simulation studies showing that the magnitudes of the theoretical power are virtually identical to those of the empirical power.
Regardless of research design or setting, in order to increase statistical power, the development and use of instruments with greater C(α), or equivalently with greater inter-item correlations, is crucial for trials that intend to use questionnaire items to measure research outcomes. Further development of the power functions for binary or ordinal item scores, and under more general item correlation structures reflecting more real-world situations, would be a valuable future study.
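The quantity at the heart of the abstract has a short closed form. As a sketch (the standard textbook formula, not the authors' derivation): for an n-subjects-by-k-items score matrix, coefficient alpha compares the summed item variances with the variance of the total scale score.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's coefficient alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    x = np.asarray(item_scores, dtype=float)
    n, k = x.shape
    item_var = x.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)       # variance of the scale score
    return (k / (k - 1.0)) * (1.0 - item_var / total_var)
```

Perfectly parallel (identical) items give alpha = 1, and weaker inter-item correlation pulls alpha down, which is exactly the direction in which the abstract's power functions decrease.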
DETAIL VIEW OF THE POWER CONNECTIONS (FRONT) AND COMPUTER PANELS ...
DETAIL VIEW OF THE POWER CONNECTIONS (FRONT) AND COMPUTER PANELS (REAR), ROOM 8A - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL
Preliminary Analysis: Am-241 RHU/TEG Electric Power Source for Nanosatellites
NASA Technical Reports Server (NTRS)
Robertson, Glen A.; Young, David; Cunningham, Karen; Kim, Tony; Ambrosi, Richard M.; Williams, Hugo R.
2014-01-01
The February 2013 SpaceWorks commercial report indicates a strong increase in nano/microsatellite (1-50 kg) launch demand globally in future years. Nanosatellites (NanoSats) are small spacecraft in the 1-10 kg range, which present a simple, low-cost option for developing quickly-deployable satellites. CubeSats, a special category of NanoSats, are even being considered for interplanetary missions. However, the small dimensions of CubeSats and the limited mass of the NanoSat class in general place limits on the capability of their electrical power systems (especially where typical power sources such as solar panels are considered) and stored energy reserves, restricting the power budget and overall functionality. For example, leveraging NanoSat clusters for computationally intensive problems that are solved collectively becomes more challenging with power-related restrictions on communication and data processing. Further, interplanetary missions that would take NanoSats far from the sun make solar panels less effective as a power source, as their required area would become quite large. To overcome these limitations, americium-241 (Am-241) has been suggested as a low-power source option. The Idaho National Laboratory's Center for Space Nuclear Research reports that production requires only small quantities of isotope (62.5 g of Pu-238, or 250 g of Am-241, for 5 We); that Am-241 is available commercially at around 1 kg/yr; and that Am-241 produces 59 keV gammas, which are readily stopped by tungsten, so the radiation field is very low. An Am-241 source could therefore be placed among the instruments and its waste heat used to heat the platform, and the amounts of isotope are so low that launch approval may be easier, especially with tungsten encapsulation.
As further reported, Am-241 has a half-life approximately five times greater than that of Pu-238, and it has been determined that the neutron yield of a ²⁴¹AmO₂ source is approximately an order of magnitude lower than that of a ²³⁸PuO₂ source of equal mass and degree of ¹⁶O enrichment. It has also been demonstrated that shielded heat sources fuelled by oxygen-enriched ²³⁸PuO₂ have masses up to 10 times greater than those fuelled by oxygen-enriched ²⁴¹AmO₂ with equivalent thermal power outputs and neutron dose rates at 1 m radii. For these reasons, Am-241 is well suited to missions that demand long-duration electrical power output, such as deep-space missions and similar missions that use radiation-hard electronics and instrumentation less susceptible to neutron radiation damage.
Lansberg, Maarten G; Bhat, Ninad S; Yeatts, Sharon D; Palesch, Yuko Y; Broderick, Joseph P; Albers, Gregory W; Lai, Tze L; Lavori, Philip W
2016-12-01
Adaptive trial designs that allow enrichment of the study population through subgroup selection can increase the chance of a positive trial when there is a differential treatment effect among patient subgroups. The goal of this study is to illustrate the potential benefit of adaptive subgroup selection in endovascular stroke studies. We simulated the performance of a trial design with adaptive subgroup selection and compared it with that of a traditional design. Outcome data were based on 90-day modified Rankin Scale scores, observed in IMS III (Interventional Management of Stroke III), among patients with a vessel occlusion on baseline computed tomographic angiography (n=382). Patients were categorized based on 2 methods: (1) according to location of the arterial occlusive lesion and onset-to-randomization time and (2) according to onset-to-randomization time alone. The power to demonstrate a treatment benefit was based on 10 000 trial simulations for each design. The treatment effect was relatively homogeneous across categories when patients were categorized based on arterial occlusive lesion and time. Consequently, the adaptive design had similar power (47%) compared with the fixed trial design (45%). There was a differential treatment effect when patients were categorized based on time alone, resulting in greater power with the adaptive design (82%) than with the fixed design (57%). These simulations, based on real-world patient data, indicate that adaptive subgroup selection has merit in endovascular stroke trials as it substantially increases power when the treatment effect differs among subgroups in a predicted pattern. © 2016 American Heart Association, Inc.
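Trial-design simulations of the kind described above estimate power by repeatedly generating outcome data and counting rejections. As a simplified, hypothetical sketch (a two-sample proportion z-test on binary outcomes, not the modified Rankin Scale analysis used in the study):

```python
import numpy as np

def simulated_power(p_control, p_treated, n_per_arm,
                    n_sims=2000, z_crit=1.96, seed=0):
    """Monte Carlo power of a two-sample proportion z-test: the fraction of
    simulated trials in which the null of equal proportions is rejected."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        c = rng.binomial(n_per_arm, p_control)
        t = rng.binomial(n_per_arm, p_treated)
        p1, p2 = c / n_per_arm, t / n_per_arm
        pooled = (c + t) / (2 * n_per_arm)
        se = np.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
        if se > 0 and abs(p2 - p1) / se > z_crit:
            rejections += 1
    return rejections / n_sims
```

An adaptive-design simulation would wrap a loop like this with an interim analysis that restricts enrollment to the subgroup showing benefit, then compare the resulting power against the fixed design.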
NASA Astrophysics Data System (ADS)
Cao, Zhenwei
Over the years, people have found Quantum Mechanics to be extremely useful in explaining various physical phenomena from a microscopic point of view. Anderson localization, named after physicist P. W. Anderson, states that disorder in a crystal can cause non-spreading of wave packets, which is one possible mechanism (at the single-electron level) to explain metal-insulator transitions. The theory of quantum computation promises to bring greater computational power over classical computers by making use of some special features of Quantum Mechanics. The first part of this dissertation considers a 3D alloy-type model, where the Hamiltonian is the sum of the finite difference Laplacian corresponding to free motion of an electron and a random potential generated by a sign-indefinite single-site potential. The result shows that localization occurs in the weak disorder regime, i.e., when the coupling parameter λ is very small, for energies E ≤ −Cλ². The second part of this dissertation considers adiabatic quantum computing (AQC) algorithms for the unstructured search problem in the case when the number of marked items is unknown. In an ideal situation, an explicit quantum algorithm together with a counting subroutine are given that achieve the optimal Grover speedup over classical algorithms, i.e., roughly speaking, reduce O(2^n) to O(2^(n/2)), where n is the size of the problem. However, in more realistic settings, the result shows this quantum speedup is achievable only under a very rigid control precision requirement (e.g., exponentially small control error).
Lewandowski, B. E.; Kilgore, K. L.; Gustafson, K. J.
2010-01-01
An implantable, stimulated-muscle-powered piezoelectric active energy harvesting generator was previously designed to exploit the fact that the mechanical output power of muscle is substantially greater than the electrical power necessary to stimulate the muscle's motor nerve. We reduced the concept to practice by building a prototype generator and stimulator, and demonstrated its feasibility in vivo using rabbit quadriceps to drive the generator. The generated power was sufficient for self-sustaining operation of the stimulator, and additional harnessed power was dissipated through a load resistor. The prototype generator was developed and its power-generating capabilities were tested with a mechanical muscle analog. In vivo generated power matched the mechanical muscle analog, verifying its usefulness as a test-bed for generator development. Generator output power was dependent on the muscle stimulation parameters. Simulations and in vivo testing demonstrated that, for a fixed number of stimuli per minute, two stimuli applied at a high frequency generated greater power than single stimuli or tetanic contractions. Larger muscles and circuitry improvements are expected to increase the available power. An implanted, self-replenishing power source has the potential to augment implanted battery or transcutaneously powered electronic medical devices. PMID:19657742
Monitoring system including an electronic sensor platform and an interrogation transceiver
Kinzel, Robert L.; Sheets, Larry R.
2003-09-23
A wireless monitoring system suitable for a wide range of remote data collection applications. The system includes at least one Electronic Sensor Platform (ESP), an Interrogator Transceiver (IT), and a general-purpose host computer. The ESP functions as a remote collector of data from a number of digital and analog sensors located therein. The host computer provides for data logging, testing, demonstration, installation checkout, and troubleshooting of the system. The IT relays signals between the host computer and one or more ESPs. The IT and host computer may be powered by a common power supply, and each ESP is individually powered by a battery. This monitoring system has an extremely low power consumption, which allows remote operation of the ESP for long periods; provides authenticated message traffic over a wireless network; utilizes state-of-health and tamper sensors to ensure that the ESP is secure and undamaged; has robust ESP housing suitable for use in radiation environments; and is low in cost. With one base station (host computer and interrogator transceiver), multiple ESPs may be controlled at a single monitoring site.
Power subsystem performance prediction /PSPP/ computer program.
NASA Technical Reports Server (NTRS)
Weiner, H.; Weinstein, S.
1972-01-01
A computer program which simulates the operation of the Viking Orbiter power subsystem has been developed. The program simulates the characteristics and interactions of a solar array, battery, battery charge controls, zener diodes, power conditioning equipment, and the battery-spacecraft and zener diode-spacecraft thermal interfaces. This program has been used to examine the operation of the Orbiter power subsystem during critical phases of the Viking mission - from launch, through midcourse maneuvers, Mars orbital insertion, orbital trims, Lander separation, solar occultations, and unattended operation - until the end of the mission. A typical computer run for the first 24 hours after launch is presented, which shows the variations in solar array, zener diode, battery charger, battery, and user load characteristics during this period.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, Felicia Angelica; Waymire, Russell L.
2013-10-01
Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNP-CRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance, and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations (the U.S. Nuclear Regulatory Commission, the Nuclear Energy Institute, and the International Atomic Energy Agency) related to the protection of information technology resources, primarily digital controls, computer resources, and their data networks. Copies of the key documents have also been provided to KHNP-CRI.
Dynamic Computation Offloading for Low-Power Wearable Health Monitoring Systems.
Kalantarian, Haik; Sideris, Costas; Mortazavi, Bobak; Alshurafa, Nabil; Sarrafzadeh, Majid
2017-03-01
The objective of this paper is to describe and evaluate an algorithm to reduce power usage and increase battery lifetime for wearable health-monitoring devices. We describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data processing between the wearable device and mobile application as a function of desired classification accuracy. By making the correct offloading decision based on current system parameters, we show that we are able to reduce system power by as much as 20%. We demonstrate that computation offloading can be applied to real-time monitoring systems, and yields significant power savings. Making correct offloading decisions for health monitoring devices can extend battery life and improve adherence.
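The offloading decision described above can be illustrated with a minimal sketch: compare on-device classification accuracy and energy against the cost of shipping the data to the phone. The rule, function name, and energy figures below are hypothetical stand-ins, not the authors' actual algorithm.

```python
def offload_decision(target_accuracy, local_accuracy,
                     local_energy_mj, radio_energy_mj):
    """Pick where to classify one sensor window: on the wearable or the phone."""
    # Stay local only if on-device inference meets the accuracy target
    # AND costs no more energy than transmitting the raw data would.
    if local_accuracy >= target_accuracy and local_energy_mj <= radio_energy_mj:
        return "local"
    return "offload"

# A 90%-accurate on-device classifier costing 0.8 mJ beats a 1.5 mJ
# radio transfer when the target accuracy is 85%.
decision = offload_decision(0.85, 0.90, 0.8, 1.5)
```

In a real system the energy terms would themselves be estimated at runtime from current battery and radio conditions, which is what makes the partitioning dynamic.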
Randolph, Susan A
2017-07-01
With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.
Harmonic analysis of spacecraft power systems using a personal computer
NASA Technical Reports Server (NTRS)
Williamson, Frank; Sheble, Gerald B.
1989-01-01
The effects that nonlinear devices such as ac/dc converters, HVDC transmission links, and motor drives have on spacecraft power systems are discussed. The nonsinusoidal currents, along with the corresponding voltages, are calculated by a harmonic power flow which decouples and solves for each harmonic component individually using an iterative Newton-Raphson algorithm. The sparsity of the harmonic equations and the overall Jacobian matrix is used to advantage, both in saving computer memory and in reducing computation time. The algorithm could also be modified to analyze each harmonic separately instead of all at the same time.
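The Newton-Raphson iteration underlying such power flows can be sketched on a toy single-branch problem (illustrative only; the paper solves the coupled harmonic equations with a sparse Jacobian): find the angle theta satisfying P = (V1*V2/X)*sin(theta).

```python
import numpy as np

# Toy Newton-Raphson power-flow solve for one lossless branch.
# Mismatch is the difference between scheduled and computed power;
# each step divides the mismatch by its derivative w.r.t. theta.
def solve_angle(P, V1=1.0, V2=1.0, X=0.1, tol=1e-10):
    theta = 0.0
    for _ in range(50):
        mismatch = P - (V1 * V2 / X) * np.sin(theta)
        if abs(mismatch) < tol:
            break
        jacobian = -(V1 * V2 / X) * np.cos(theta)  # d(mismatch)/d(theta)
        theta -= mismatch / jacobian               # Newton-Raphson update
    return theta

theta = solve_angle(5.0)  # 5.0 per-unit power transfer across the branch
```

The full harmonic power flow repeats this idea over a vector of bus voltages at each harmonic, which is where Jacobian sparsity pays off.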
Experiments on H2-O2 MHD power generation
NASA Technical Reports Server (NTRS)
Smith, J. M.
1980-01-01
Magnetohydrodynamic power generation experiments utilizing a cesium-seeded H2-O2 working fluid were carried out using a diverging area Hall duct having an entrance Mach number of 2. The experiments were conducted in a high-field-strength cryomagnet facility at field strengths up to 5 tesla. The effects of power takeoff location, axial duct location within the magnetic field, generator loading, B-field strength, and electrode breakdown voltage were investigated. For the operating conditions of these experiments, it is found that the power output increases with the square of the B-field and can be limited by choking of the channel or by interelectrode voltage breakdown, which occurs at Hall fields greater than 50 volts/insulator. Peak power densities of greater than 100 MW/cu m were achieved.
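The quadratic B-field dependence reported here matches the textbook MHD generator power-density relation P = sigma * u^2 * B^2 * k * (1 - k), with conductivity sigma, flow speed u, and load factor k. A small sketch (generic parameter values, not the experiment's):

```python
# Textbook MHD power-density relation; all numbers below are generic
# illustrations, not measured values from the experiments above.
def mhd_power_density(sigma, u, B, k=0.5):
    """W/m^3 for conductivity sigma (S/m), speed u (m/s), field B (T)."""
    return sigma * u**2 * B**2 * k * (1.0 - k)

# Doubling B quadruples the power density at fixed sigma, u, k:
p1 = mhd_power_density(sigma=10.0, u=2000.0, B=2.5)
p2 = mhd_power_density(sigma=10.0, u=2000.0, B=5.0)
```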
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpov, A. S.
2013-01-15
A computer procedure for simulating magnetization-controlled dc shunt reactors is described, which enables the electromagnetic transients in electric power systems to be calculated. It is shown that, by taking technically simple measures in the control system, one can obtain high-speed reactors sufficient for many purposes, and dispense with the use of high-power devices for compensating higher harmonic components.
Maestripieri, Dario; Klimczuk, Amanda C. E.; Traficonte, Daniel M.; Wilson, M. Claire
2014-01-01
Facial attractiveness represents an important component of an individual’s overall attractiveness as a potential mating partner. Perceptions of facial attractiveness are expected to vary with age-related changes in health, reproductive value, and power. In this study, we investigated perceptions of facial attractiveness, power, and personality in two groups of women of pre- and post-menopausal ages (35–50 years and 51–65 years, respectively) and two corresponding groups of men. We tested three hypotheses: (1) that perceived facial attractiveness would be lower for older than for younger men and women; (2) that the age-related reduction in facial attractiveness would be greater for women than for men; and (3) that for men, there would be a larger increase in perceived power at older ages. Eighty facial stimuli were rated by 60 (30 male, 30 female) middle-aged women and men using online surveys. Our three main hypotheses were supported by the data. Consistent with sex differences in mating strategies, the greater age-related decline in female facial attractiveness was driven by male respondents, while the greater age-related increase in male perceived power was driven by female respondents. In addition, we found evidence that some personality ratings were correlated with perceived attractiveness and power ratings. The results of this study are consistent with evolutionary theory and with previous research showing that faces can provide important information about characteristics that men and women value in a potential mating partner such as their health, reproductive value, and power or possession of resources. PMID:24592253
A dc model for power switching transistors suitable for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Wilson, P. M.; George, R. T., Jr.; Owen, H. A.; Wilson, T. G.
1979-01-01
A model for bipolar junction power switching transistors whose parameters can be readily obtained by the circuit design engineer, and which can be conveniently incorporated into standard computer-based circuit analysis programs is presented. This formulation results from measurements which may be made with standard laboratory equipment. Measurement procedures, as well as a comparison between actual and computed results, are presented.
NASA Technical Reports Server (NTRS)
Hanley, G.
1979-01-01
Computer assisted design of a gallium arsenide solid state dc-to-RF converter with supportive fabrication data was investigated. Specific tasks performed include: computer program checkout; amplifier comparisons; computer design analysis of GaAs solar cells; and GaAs diode evaluation. Results obtained in the design and evaluation of transistors for the microwave space power system are presented.
Prediction and characterization of application power use in a high-performance computing environment
Bugbee, Bruce; Phillips, Caleb; Egan, Hilary; ...
2017-02-27
Power use in data centers and high-performance computing (HPC) facilities has grown in tandem with increases in the size and number of these facilities. Substantial innovation is needed to enable meaningful reduction in energy footprints in leadership-class HPC systems. In this paper, we focus on characterizing and investigating application-level power usage. We demonstrate potential methods for predicting power usage based on a priori and in situ characteristics. Lastly, we highlight a potential use case of this method through a simulated power-aware scheduler using historical jobs from a real scientific HPC system.
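A minimal version of predicting application power from a priori job characteristics is an ordinary least-squares fit; the features, coefficients, and noise level below are synthetic stand-ins for the paper's models.

```python
import numpy as np

# Synthetic sketch: fit per-feature power coefficients from job history,
# then predict power for new jobs. Real predictors would use richer
# features (application name, in situ counters) and models.
rng = np.random.default_rng(0)
features = rng.random((100, 3))          # e.g. node count, cores, nominal clock
true_w = np.array([50.0, 20.0, 5.0])     # hypothetical kW contributions
power = features @ true_w + rng.normal(0, 0.1, 100)  # observed job power

w, *_ = np.linalg.lstsq(features, power, rcond=None)  # fitted coefficients
predicted = features @ w                 # in-sample power predictions
```

A power-aware scheduler, as simulated in the paper, would consume such predictions to keep the facility under its power cap.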
Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...
Power and Performance Trade-offs for Space Time Adaptive Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino
Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, Intel Haswell Core i7-4770TE and NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP’s computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large data sets without an increase in power requirements. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to improved performance in a typical STAP application.
Low-power, transparent optical network interface for high bandwidth off-chip interconnects.
Liboiron-Ladouceur, Odile; Wang, Howard; Garg, Ajay S; Bergman, Keren
2009-04-13
The recent emergence of multicore architectures and chip multiprocessors (CMPs) has accelerated the bandwidth requirements in high-performance processors for both on-chip and off-chip interconnects. For next generation computing clusters, the delivery of scalable power efficient off-chip communications to each compute node has emerged as a key bottleneck to realizing the full computational performance of these systems. The power dissipation is dominated by the off-chip interface and the necessity to drive high-speed signals over long distances. We present a scalable photonic network interface approach that fully exploits the bandwidth capacity offered by optical interconnects while offering significant power savings over traditional E/O and O/E approaches. The power-efficient interface optically aggregates electronic serial data streams into a multiple WDM channel packet structure at time-of-flight latencies. We demonstrate a scalable optical network interface with 70% improvement in power efficiency for a complete end-to-end PCI Express data transfer.
Gammaitoni, Luca; Chiuchiú, D; Madami, M; Carlotti, G
2015-06-05
Is it possible to operate a computing device with zero energy expenditure? This question, once considered just an academic dilemma, has recently become strategic for the future of information and communication technology. In fact, in the last forty years the semiconductor industry has been driven by its ability to scale down the size of the complementary metal-oxide semiconductor-field-effect transistor, the building block of present computing devices, and to increase computing capability density up to a point where the power dissipated in heat during computation has become a serious limitation. To overcome such a limitation, since 2004 the Nanoelectronics Research Initiative has launched a grand challenge to address the fundamental limits of the physics of switches. In Europe, the European Commission has recently funded a set of projects with the aim of minimizing the energy consumption of computing. In this article we briefly review state-of-the-art zero-power computing, with special attention paid to the aspects of energy dissipation at the micro- and nanoscales.
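The fundamental limit behind this discussion is Landauer's bound: erasing one bit of information dissipates at least k_B * T * ln(2) of energy. A quick back-of-envelope calculation (standard physics, not a figure from the article):

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_limit(temperature_kelvin):
    """Minimum energy (J) to erase one bit at the given temperature."""
    return K_B * temperature_kelvin * log(2)

e_min = landauer_limit(300.0)  # roughly 2.87e-21 J per bit at room temperature
```

Present-day switches dissipate many orders of magnitude more than this, which is why "zero-power" computing remains a research target rather than a practice.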
MicroSensors Systems: detection of a dismounted threat
NASA Astrophysics Data System (ADS)
Davis, Bill; Berglund, Victor; Falkofske, Dwight; Krantz, Brian
2005-05-01
The Micro Sensor System (MSS) is a layered sensor network with the goal of detecting dismounted threats approaching high value assets. A low power unattended ground sensor network is dependent on a network protocol for efficiency, in order to minimize data transmissions after network establishment. The reduction of network 'chattiness' is a primary driver for minimizing power consumption and is a factor in establishing a low probability of detection and interception. The MSS has developed a unique protocol to meet these challenges. Unattended ground sensor systems are most likely dependent on batteries for power, which, due to size, determines the ability of the sensor to be concealed after placement. To minimize power requirements, overcome size limitations, and maintain a low system cost, the MSS utilizes advanced manufacturing processes known as Fluidic Self-Assembly and Chip Scale Packaging. The type of sensing element and the ability to sense various phenomenologies (particularly magnetic) at ranges greater than a few meters limit the effectiveness of a system. The MicroSensor System will overcome these limitations by deploying large numbers of low cost sensors, made possible by the advanced manufacturing process used in production of the sensors. The MSS program will provide unprecedented levels of real-time battlefield information, which greatly enhances combat situational awareness when integrated with the existing Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) infrastructure. This system will provide an important boost to realizing the information dominant, network-centric objective of Joint Vision 2020.
Tracking brain states under general anesthesia by using global coherence analysis.
Cimenser, Aylin; Purdon, Patrick L; Pierce, Eric T; Walsh, John L; Salazar-Gomez, Andres F; Harrell, Priscilla G; Tavares-Stoeckel, Casie; Habeeb, Kathleen; Brown, Emery N
2011-05-24
Time and frequency domain analyses of scalp EEG recordings are widely used to track changes in brain states under general anesthesia. Although these analyses have suggested that different spatial patterns are associated with changes in the state of general anesthesia, the extent to which these patterns are spatially coordinated has not been systematically characterized. Global coherence, the ratio of the largest eigenvalue to the sum of the eigenvalues of the cross-spectral matrix at a given frequency and time, has been used to analyze the spatiotemporal dynamics of multivariate time-series. Using 64-lead EEG recorded from human subjects receiving computer-controlled infusions of the anesthetic propofol, we used surface Laplacian referencing combined with spectral and global coherence analyses to track the spatiotemporal dynamics of the brain's anesthetic state. During unconsciousness the spectrograms in the frontal leads showed increasing α (8-12 Hz) and δ (0-4 Hz) power, and in the occipital leads δ power greater than α power. The global coherence detected strong coordinated α activity in the occipital leads in the awake state that shifted to the frontal leads during unconsciousness. It revealed a lack of coordinated δ activity during both the awake and unconscious states. Although strong frontal power during general anesthesia-induced unconsciousness, termed anteriorization, is well known, its possible association with strong α range global coherence suggests highly coordinated spatial activity. Our findings suggest that combined spectral and global coherence analyses may offer a new approach to tracking brain states under general anesthesia.
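The global coherence measure defined above can be computed directly: estimate the cross-spectral matrix at one frequency from windowed FFTs and take the ratio of the largest eigenvalue to the eigenvalue sum. This is a bare-bones sketch, not the authors' full multitaper estimation procedure.

```python
import numpy as np

def global_coherence(windows, freq_bin):
    """windows: array (n_windows, n_channels, n_samples); returns a value in (0, 1]."""
    spectra = np.fft.rfft(windows, axis=-1)[:, :, freq_bin]   # (n_windows, n_channels)
    # Cross-spectral matrix averaged over windows (Hermitian by construction)
    csm = np.einsum('wi,wj->ij', spectra, spectra.conj()) / windows.shape[0]
    eigvals = np.linalg.eigvalsh(csm)   # real, ascending order
    return eigvals[-1] / eigvals.sum()

# A signal shared across all channels yields global coherence near 1.
rng = np.random.default_rng(1)
t = np.arange(256)
common = np.sin(2 * np.pi * 10 * t / 256)   # 10 cycles -> FFT bin 10
data = np.stack([np.tile(common, (8, 1)) + 0.01 * rng.standard_normal((8, 256))
                 for _ in range(20)])
gc = global_coherence(data, freq_bin=10)
```

Independent noise across channels spreads the spectrum over all eigenvalues and drives the ratio toward 1/n_channels, which is what distinguishes coordinated from uncoordinated activity.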
A HUMAN AUTOMATION INTERACTION CONCEPT FOR A SMALL MODULAR REACTOR CONTROL ROOM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Blanc, Katya; Spielman, Zach; Hill, Rachael
Many advanced nuclear power plant (NPP) designs incorporate higher degrees of automation than the existing fleet of NPPs. Automation is being introduced or proposed in NPPs through a wide variety of systems and technologies, such as advanced displays, computer-based procedures, advanced alarm systems, and computerized operator support systems. Additionally, many new reactor concepts, both full scale and small modular reactors, are proposing increased automation and reduced staffing as part of their concept of operations. However, research consistently finds a fundamental tradeoff: increasing automation can improve system performance while degrading human performance. There is a need to address the question of how to achieve the performance and efficiency of high levels of automation without degrading human performance. One example of a new NPP concept that will utilize greater degrees of automation is the SMR concept from NuScale Power. The NuScale Power design requires 12 modular units to be operated in one single control room, which leads to a need for higher degrees of automation in the control room. Idaho National Laboratory (INL) researchers and NuScale Power human factors and operations staff are working on a collaborative project to address the human performance challenges of increased automation and to determine the principles that lead to optimal performance in highly automated systems. This paper will describe this concept in detail and will describe an experimental test of the concept. The benefits and challenges of the approach will be discussed.
Casas-Herrero, Alvaro; Cadore, Eduardo L.; Zambom-Ferraresi, Fabricio; Idoate, Fernando; Millor, Nora; Martínez-Ramirez, Alicia; Gómez, Marisol; Rodriguez-Mañas, Leocadio; Marcellán, Teresa; de Gordoa, Ana Ruiz; Marques, Mário C.
2013-01-01
This study examined the neuromuscular and functional performance differences between frail oldest old with and without mild cognitive impairment (MCI). In addition, the associations between functional capacities, muscle mass, strength, and power output of the leg muscles were also examined. Forty-three elderly men and women (91.9±4.1 years) were classified into three groups: the frail group, the frail with MCI group (frail+MCI), and the non-frail group. Strength tests were performed for upper and lower limbs. Functional tests included 5-meter habitual gait, timed up-and-go (TUG), dual task performance, balance, and rise from a chair ability. Incidence of falls was assessed using questionnaires. The thigh muscle mass and attenuation were assessed using computed tomography. There were no differences between the frail and frail+MCI groups for all the functional variables analyzed, except in the cognitive score of the TUG with verbal task, in which the frail group showed greater performance than the frail+MCI group. Significant associations were observed between the functional performance, incidence of falls, muscle mass, strength, and power in the frail and frail+MCI groups (r=−0.73 to r=0.83, p<0.01 to p<0.05). These results suggest that the frail oldest old with and without MCI have similar functional and neuromuscular outcomes. Furthermore, the functional outcomes and incidences of falls are associated with muscle mass, strength, and power in the frail elderly population. PMID:23822577
Neic, Aurel; Campos, Fernando O; Prassl, Anton J; Niederer, Steven A; Bishop, Martin J; Vigmond, Edward J; Plank, Gernot
2017-10-01
Anatomically accurate and biophysically detailed bidomain models of the human heart have proven a powerful tool for gaining quantitative insight into the links between electrical sources in the myocardium and the concomitant current flow in the surrounding medium as they represent their relationship mechanistically based on first principles. Such models are increasingly considered as a clinical research tool with the perspective of being used, ultimately, as a complementary diagnostic modality. An important prerequisite in many clinical modeling applications is the ability of models to faithfully replicate potential maps and electrograms recorded from a given patient. However, while the personalization of electrophysiology models based on the gold standard bidomain formulation is in principle feasible, the associated computational expenses are significant, rendering their use incompatible with clinical time frames. In this study we report on the development of a novel computationally efficient reaction-eikonal (R-E) model for modeling extracellular potential maps and electrograms. Using a biventricular human electrophysiology model, which incorporates a topologically realistic His-Purkinje system (HPS), we demonstrate by comparing against a high-resolution reaction-diffusion (R-D) bidomain model that the R-E model predicts extracellular potential fields, electrograms as well as ECGs at the body surface with high fidelity and offers vast computational savings greater than three orders of magnitude. Due to their efficiency R-E models are ideally suitable for forward simulations in clinical modeling studies which attempt to personalize electrophysiological model features.
NASA Astrophysics Data System (ADS)
Neic, Aurel; Campos, Fernando O.; Prassl, Anton J.; Niederer, Steven A.; Bishop, Martin J.; Vigmond, Edward J.; Plank, Gernot
2017-10-01
Anatomically accurate and biophysically detailed bidomain models of the human heart have proven a powerful tool for gaining quantitative insight into the links between electrical sources in the myocardium and the concomitant current flow in the surrounding medium as they represent their relationship mechanistically based on first principles. Such models are increasingly considered as a clinical research tool with the perspective of being used, ultimately, as a complementary diagnostic modality. An important prerequisite in many clinical modeling applications is the ability of models to faithfully replicate potential maps and electrograms recorded from a given patient. However, while the personalization of electrophysiology models based on the gold standard bidomain formulation is in principle feasible, the associated computational expenses are significant, rendering their use incompatible with clinical time frames. In this study we report on the development of a novel computationally efficient reaction-eikonal (R-E) model for modeling extracellular potential maps and electrograms. Using a biventricular human electrophysiology model, which incorporates a topologically realistic His-Purkinje system (HPS), we demonstrate by comparing against a high-resolution reaction-diffusion (R-D) bidomain model that the R-E model predicts extracellular potential fields, electrograms as well as ECGs at the body surface with high fidelity and offers vast computational savings greater than three orders of magnitude. Due to their efficiency R-E models are ideally suitable for forward simulations in clinical modeling studies which attempt to personalize electrophysiological model features.
The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware
NASA Astrophysics Data System (ADS)
Kathiara, Jainik
There has been increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating point operations. Due to the complexity and expense of floating point hardware, these algorithms are usually converted to fixed point operations or implemented using floating-point emulation in software. As the technology advances, more homogeneous computational resources and fixed-function embedded blocks are added to FPGAs, making implementation of floating point hardware a feasible option. In this research we have implemented a high performance, autonomous floating point vector coprocessor (FPVC) that works independently within an embedded processor system. We present a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of the FPVC yields greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also begun designing a software library of computational kernels, each of which adapts the FPVC's configuration to provide maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of the FPVC on a Xilinx Virtex 5 using the embedded PowerPC.
Cloud Computing with iPlant Atmosphere.
McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos
2013-10-15
Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. Copyright © 2013 John Wiley & Sons, Inc.
A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.
NASA Astrophysics Data System (ADS)
Wehner, M. F.; Oliker, L.; Shalf, J.
2008-12-01
Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.
Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems
Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.
2014-01-01
The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545
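A performance-aware co-scheduler of this kind can be sketched greedily: send each task to whichever device would finish it soonest, given an estimated GPU speedup per task. Task times and speedups below are hypothetical.

```python
# Greedy CPU/GPU co-scheduling sketch (hypothetical task times and
# speedups; the paper's scheduler is more sophisticated).
def co_schedule(task_cpu_times, gpu_speedups):
    """Assign tasks to CPU or GPU to minimize completion time greedily."""
    cpu_free, gpu_free = 0.0, 0.0
    assignment = []
    for t_cpu, speedup in zip(task_cpu_times, gpu_speedups):
        t_gpu = t_cpu / speedup
        if cpu_free + t_cpu <= gpu_free + t_gpu:
            cpu_free += t_cpu
            assignment.append("cpu")
        else:
            gpu_free += t_gpu
            assignment.append("gpu")
    return assignment, max(cpu_free, gpu_free)  # plan and makespan

tasks = [4.0, 4.0, 1.0, 1.0]
speedups = [8.0, 8.0, 1.0, 1.0]   # first two tasks are GPU-friendly
plan, makespan = co_schedule(tasks, speedups)
```

Here the GPU absorbs the GPU-friendly work while the CPU handles the rest, which is the intuition behind co-scheduling beating either device alone.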
Memristive Mixed-Signal Neuromorphic Systems: Energy-Efficient Learning at the Circuit-Level
Chakma, Gangotree; Adnan, Md Musabbir; Wyer, Austin R.; ...
2017-11-23
Neuromorphic computing is a non-von Neumann computer architecture for the post Moore’s law era of computing. Since a main focus of the post Moore’s law era is energy-efficient computing with fewer resources and less area, neuromorphic computing contributes effectively to this research. In this paper, we present a memristive neuromorphic system for improved power and area efficiency. Our mixed-signal approach implements neural networks with spiking events in a synchronous way. Moreover, the use of nano-scale memristive devices saves both area and power in the system. We also provide device-level considerations that make the system more energy-efficient. The proposed system additionally includes synchronous digital long term plasticity, an online learning methodology that helps the system train the neural networks during the operation phase and improves the efficiency of learning, considering the power consumption and area overhead.
A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Kwak, Dochan (Technical Monitor)
2001-01-01
NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
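The load-balancing objective that MinEX pursues can be illustrated with a much simpler greedy heuristic. The sketch below is not the MinEX algorithm (which also minimizes data movement and runtime communication); it only shows the baseline idea of always assigning the next-heaviest task to the least-loaded node, with illustrative task costs.

```python
import heapq

def greedy_balance(task_costs, n_nodes):
    """Assign tasks to nodes, heaviest first, always to the least-loaded node.

    This is the classic longest-processing-time heuristic, shown only to
    illustrate the load-balancing objective; it is NOT the MinEX partitioner,
    which additionally accounts for data movement and communication latency.
    """
    # Min-heap of (current_load, node_id) so the least-loaded node pops first
    loads = [(0.0, node) for node in range(n_nodes)]
    heapq.heapify(loads)
    assignment = {node: [] for node in range(n_nodes)}
    for task, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, node = heapq.heappop(loads)
        assignment[node].append(task)
        heapq.heappush(loads, (load + cost, node))
    return assignment

# Illustrative per-task costs, balanced across 3 nodes
costs = [8, 7, 6, 5, 4, 3, 2, 2]
plan = greedy_balance(costs, 3)
node_loads = {n: sum(costs[t] for t in tasks) for n, tasks in plan.items()}
```

With these costs the heuristic yields loads of 13, 13, and 11, close to the ideal 37/3 per node.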
Positional Role Differences in the Aerobic and Anaerobic Power of Elite Basketball Players.
Pojskić, Haris; Šeparović, Vlatko; Užičanin, Edin; Muratović, Melika; Mačković, Samir
2015-12-22
The aim of the present study was to compare the aerobic and anaerobic power and capacity of elite male basketball players who played multiple positions. Fifty-five healthy players were divided into the following three different subsamples according to their positional role: guards (n = 22), forwards (n = 19) and centers (n = 14). The following three tests were applied to estimate their aerobic and anaerobic power and capacities: the countermovement jump (CMJ), a multistage shuttle run test and the Running-based Anaerobic Sprint Test (RAST). The obtained data were used to calculate the players' aerobic and anaerobic power and capacities. To determine the possible differences between the subjects considering their different positions on the court, one-way analysis of variance (ANOVA) with the Bonferroni post-hoc test for multiple comparisons was used. The results showed that there was a significant difference between the different groups of players in eleven out of sixteen measured variables. Guards and forwards exhibited greater aerobic and relative values of anaerobic power, allowing shorter recovery times and the ability to repeat high intensity, basketball-specific activities. Centers presented greater values of absolute anaerobic power and capacities, permitting greater force production during discrete tasks. Coaches can use these data to create more individualized strength and conditioning programs for different positional roles.
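The RAST-derived anaerobic power values discussed above are conventionally computed from sprint times with the standard formula P = m·d²/t³. The sketch below applies that formula to six 35-m sprints; the body mass and sprint times are illustrative, not data from the study.

```python
def rast_power(mass_kg, distance_m, time_s):
    # Standard RAST estimate for one sprint: P = m * d^2 / t^3 (watts)
    return mass_kg * distance_m ** 2 / time_s ** 3

def rast_summary(mass_kg, sprint_times, distance_m=35.0):
    """Peak power, mean power, and fatigue index from six RAST sprint times."""
    powers = [rast_power(mass_kg, distance_m, t) for t in sprint_times]
    return {
        "peak_power_w": max(powers),
        "mean_power_w": sum(powers) / len(powers),
        "fatigue_index": (max(powers) - min(powers)) / sum(sprint_times),
    }

# Illustrative times (s) for six 35-m sprints by a 90 kg player
summary = rast_summary(90.0, [4.8, 5.0, 5.2, 5.4, 5.5, 5.7])
```

Dividing the power values by body mass gives the relative values on which guards and forwards outperformed centers.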
The NASA/USAF arcjet research and technology program
NASA Technical Reports Server (NTRS)
Stone, James R.; Huston, Edward S.
1987-01-01
Direct current arcjets have the potential to provide specific impulses greater than 500 sec with storable propellants, and greater than 1000 sec with hydrogen. This level of performance can provide significant benefits for such applications as orbit transfer, station keeping, orbit change, and maneuvering. The simplicity of the arcjet system and its elements of commonality with state-of-the-art resistojet systems offer a relatively low risk transition to these enhanced levels of performance for low power (0.5 to 1.5 kW) station keeping applications. Arcjets at power levels of 10 to 30 kW are potentially applicable to orbit transfer missions. Furthermore, with the anticipated development of space nuclear power systems, arcjets at greater than 100 kW may become attractive. This paper describes the ongoing NASA/USAF program and its major recent accomplishments.
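The trade-off between the power levels and specific impulses quoted above can be made concrete with the ideal jet-power relation T = 2ηP/(g₀·Isp). The sketch below assumes an overall thrust efficiency of 35%, which is an illustrative value, not one reported in this abstract.

```python
G0 = 9.80665  # standard gravity, m/s^2

def arcjet_thrust(power_w, isp_s, efficiency):
    """Thrust from the ideal power-thrust relation T = 2*eta*P / (g0*Isp).

    `efficiency` is an assumed overall thrust efficiency; real arcjet
    efficiencies vary with propellant and operating point.
    """
    return 2.0 * efficiency * power_w / (G0 * isp_s)

# A 1.5 kW station-keeping arcjet at Isp = 500 s, assuming 35% efficiency
t_low = arcjet_thrust(1500.0, 500.0, 0.35)
# A 30 kW hydrogen-class arcjet at Isp = 1000 s, same assumed efficiency
t_high = arcjet_thrust(30000.0, 1000.0, 0.35)
```

Under these assumptions the low-power thruster delivers roughly 0.21 N and the 30 kW unit roughly 2.1 N, showing why higher Isp must be bought with more input power.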
Effects of cross-bridge compliance on the force-velocity relationship and muscle power output
Fenwick, Axel J.; Wood, Alexander M.; Tanner, Bertrand C. W.
2017-01-01
Muscles produce force and power by utilizing chemical energy through ATP hydrolysis. During concentric contractions (shortening), muscles generate less force compared to isometric contractions, but consume greater amounts of energy as shortening velocity increases. Conversely, more force is generated and less energy is consumed during eccentric muscle contractions (lengthening). This relationship between force, energy use, and the velocity of contraction has important implications for understanding muscle efficiency, but the molecular mechanisms underlying this behavior remain poorly understood. Here we used spatially-explicit, multi-filament models of Ca2+-regulated force production within a half-sarcomere to simulate how force production, energy utilization, and the number of bound cross-bridges are affected by dynamic changes in sarcomere length. These computational simulations show that cross-bridge binding increased during slow-velocity concentric and eccentric contractions, compared to isometric contractions. Over the full ranges of velocities that we simulated, cross-bridge cycling and energy utilization (i.e. ATPase rates) increased during shortening, and decreased during lengthening. These findings are consistent with the Fenn effect, but arise from a complicated relationship between velocity-dependent cross-bridge recruitment and cross-bridge cycling kinetics. We also investigated how force production, power output, and energy utilization varied with cross-bridge and myofilament compliance, which is impossible to address under typical experimental conditions. These important simulations show that increasing cross-bridge compliance resulted in greater cross-bridge binding and ATPase activity, but less force was generated per cross-bridge and throughout the sarcomere. These data indicate that the efficiency of force production decreases in a velocity-dependent manner, and that this behavior is sensitive to cross-bridge compliance. 
In contrast, significant effects of myofilament compliance on force production were only observed during isometric contractions, suggesting that changes in myofilament compliance may not influence power output during non-isometric contractions as greatly as changes in cross-bridge compliance. These findings advance our understanding of how cross-bridge and myofilament properties underlie velocity-dependent changes in contractile efficiency during muscle movement. PMID:29284062
Analytical Cost Metrics: Days of Future Past
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov
As we move towards the exascale era, new architectures must be capable of running massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, and image/signal processing to computational science and bioinformatics. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore, the major challenge that we face in computing systems research is: “how to solve massive-scale computational problems in the most time/power/energy efficient manner?”
Benzekry, Sebastian; Tuszynski, Jack A; Rietman, Edward A; Lakka Klement, Giannoula
2015-05-28
The ever-increasing expanse of online bioinformatics data is enabling new ways not only to explore the visualization of these data, but also to apply novel mathematical methods to extract meaningful information for clinically relevant analysis of pathways and treatment decisions. One of the methods used for computing topological characteristics of a space at different spatial resolutions is persistent homology. This concept can also be applied to network theory, and more specifically to protein-protein interaction networks, where the number of rings in an individual cancer network represents a measure of complexity. We observed a linear correlation of R = -0.55 between persistent homology and 5-year survival of patients with a variety of cancers. This relationship was used to predict the proteins within a protein-protein interaction network with the most impact on cancer progression. By re-computing the persistent homology after computationally removing an individual node (protein) from the protein-protein interaction network, we were able to evaluate whether such an inhibition would lead to improvement in patient survival. The power of this approach lies in its ability to identify the effects of inhibition of multiple proteins and to expose whether the effect of a single inhibition may be amplified by inhibition of other proteins. More importantly, we illustrate specific examples of persistent homology calculations that correctly predict the survival benefits observed in clinical trials using inhibitors of the identified molecular target. We propose that computational approaches such as persistent homology may be used in the future for the selection of molecular therapies in the clinic. The technique uses a mathematical algorithm to evaluate the node (protein) whose inhibition has the highest potential to reduce network complexity.
The greater the drop in persistent homology, the greater the reduction in network complexity, and thus the larger the potential for survival benefit. We hope that the use of advanced mathematics in medicine will provide timely information about the best drug combination for patients, and avoid the expense associated with an unsuccessful clinical trial in which the drug(s) did not show a survival benefit.
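For a single network, the "number of rings" described above corresponds to the first Betti number, b₁ = |E| − |V| + (number of connected components). The sketch below runs the node-removal screen on a toy graph rather than a real protein-protein interaction network, and uses b₁ as the complexity measure; the full persistent-homology computation across spatial resolutions is more involved.

```python
def betti_1(nodes, edges):
    """First Betti number b1 = |E| - |V| + components: the number of
    independent cycles ('rings') in an undirected graph, via union-find."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    components = len(nodes)
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            components -= 1
    return len(edges) - len(nodes) + components

def drop_in_complexity(nodes, edges, target):
    """Change in ring count after removing one node, mimicking the
    in-silico inhibition screen described in the abstract."""
    rest_nodes = [v for v in nodes if v != target]
    rest_edges = [(a, b) for a, b in edges if target not in (a, b)]
    return betti_1(nodes, edges) - betti_1(rest_nodes, rest_edges)

# Toy interaction network: hub 'p' sits on several rings
nodes = ["p", "a", "b", "c", "d"]
edges = [("p", "a"), ("p", "b"), ("p", "c"), ("p", "d"),
         ("a", "b"), ("b", "c"), ("c", "d")]
```

Here removing the hub `p` destroys all three rings, while removing a peripheral node destroys only one, which is the kind of ranking the screen exploits.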
Motoyoshi, Mitsuru; Uchida, Yasuki; Inaba, Mizuki; Ejima, Ken-Ichiro; Honda, Kazuya; Shimizu, Noriyoshi
2016-07-01
Placement torque and damping capacity may increase when the orthodontic anchor screws make contact with an adjacent root. If this is the case, root contact can be inferred from the placement torque and damping capacity. The purpose of this study was to verify the detectability of root proximity of the screws by placement torque and damping capacity. For this purpose, we investigated the relationship among placement torque, damping capacity, and screw-root proximity. The placement torque, damping capacity, and root proximity of 202 screws (diameter, 1.6 mm; length, 8.0 mm) were evaluated in 110 patients (31 male, 79 female; mean age, 21.3 ± 6.9 years). Placement torque was measured using a digital torque tester, damping capacity was measured with a Periotest device (Medizintechnik Gulden, Modautal, Germany), and root contact was judged using cone-beam computed tomography images. The rate of root contact was 18.3%. Placement torque and damping capacity were 7.8 N·cm and 3.8, respectively. The placement torque of screws with root contact was greater than that of screws with no root contact (P <0.05; effect size, 0.44; power, <0.8). Damping capacity of screws with root contact was significantly greater than that of screws with no root contact (P <0.01; effect size, >0.5; power, >0.95). It was suggested that the damping capacity is related to root contact. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Motes, Michael A; Rao, Neena K; Shokri-Kojori, Ehsan; Chiang, Hsueh-Sheng; Kraut, Michael A; Hart, John
2017-01-01
Computer-based assessment of many cognitive processes (eg, anticipatory and response readiness processes) requires the use of invariant stimulus display times (SDT) and intertrial intervals (ITI). Although designs with invariant SDTs and ITIs have been used in functional magnetic resonance imaging (fMRI) research, such designs are problematic for fMRI studies because of collinearity issues. This study examined regressor modulation with trial-level reaction times (RT) as a method for improving signal detection in a go/no-go task with invariant SDTs and ITIs. The effects of modulating the go regressor were evaluated with respect to the detection of BOLD signal-change for the no-go condition. BOLD signal-change to no-go stimuli was examined when the go regressor was based on a (a) canonical hemodynamic response function (HRF), (b) RT-based amplitude-modulated (AM) HRF, and (c) RT-based amplitude and duration modulated (A&DM) HRF. Reaction time–based modulation reduced the collinearity between the go and no-go regressors, with A&DM producing the greatest reductions in correlations between the regressors, and greater reductions in the correlations between regressors were associated with longer mean RTs and greater RT variability. Reaction time–based modulation increased statistical power for detecting group-level no-go BOLD signal-change across a broad set of brain regions. The findings show the efficacy of using regressor modulation to increase power in detecting BOLD signal-change in fMRI studies in which circumstances dictate the use of temporally invariant stimulus presentations. PMID:29276390
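A hedged sketch of building the RT-amplitude-modulated regressor described above: stick functions at trial onsets are scaled by mean-centered reaction times and convolved with a canonical double-gamma HRF. The HRF parameters, onsets, and RTs are illustrative, and this is a simplification of the authors' modeling pipeline (it omits the duration-modulated variant).

```python
from math import exp, gamma

def canonical_hrf(t, a1=6.0, a2=16.0, b=1.0, undershoot=1 / 6):
    # SPM-style double-gamma hemodynamic response function at time t (s)
    peak = t ** (a1 - 1) * exp(-t / b) / (b ** a1 * gamma(a1))
    dip = t ** (a2 - 1) * exp(-t / b) / (b ** a2 * gamma(a2))
    return peak - undershoot * dip

def am_regressor(n_scans, tr, onsets, rts):
    """Amplitude-modulated regressor: sticks at trial onsets scaled by
    mean-centered RTs, convolved with the canonical HRF."""
    mean_rt = sum(rts) / len(rts)
    sticks = [0.0] * n_scans
    for onset, rt in zip(onsets, rts):
        # Mean-centering leaves only trial-to-trial RT variation, so the
        # modulated regressor is separated from the unmodulated one
        sticks[int(round(onset / tr))] += rt - mean_rt
    hrf = [canonical_hrf(i * tr) for i in range(int(32 / tr))]
    reg = [0.0] * n_scans
    for i, s in enumerate(sticks):  # discrete convolution, truncated
        if s:
            for j, h in enumerate(hrf):
                if i + j < n_scans:
                    reg[i + j] += s * h
    return reg

reg = am_regressor(n_scans=100, tr=2.0, onsets=[10, 30, 50, 70],
                   rts=[0.45, 0.60, 0.40, 0.55])
```

Because the weights are mean-centered, the regressor integrates to zero, which is what reduces its collinearity with the unmodulated go regressor.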
Complex Engineered Systems: A New Paradigm
NASA Astrophysics Data System (ADS)
Minai, Ali A.; Braha, Dan; Bar-Yam, Yaneer
Human history is often seen as an inexorable march towards greater complexity — in ideas, artifacts, social, political and economic systems, technology, and in the structure of life itself. While we do not have detailed knowledge of ancient times, it is reasonable to conclude that the average resident of New York City today faces a world of much greater complexity than the average denizen of Carthage or Tikal. A careful consideration of this change, however, suggests that most of it has occurred recently, and has been driven primarily by the emergence of technology as a force in human life. In the 4000 years separating the Indus Valley Civilization from 18th century Europe, human transportation evolved from the bullock cart to the hansom, and the methods of communication used by George Washington did not differ significantly from those used by Alexander or Rameses. The world has moved radically towards greater complexity in the last two centuries. We have moved from buggies and letter couriers to airplanes and the Internet — an increase in capacity, and through its diversity also in complexity, orders of magnitude greater than that accumulated through the rest of human history. In addition to creating iconic artifacts — the airplane, the car, the computer, the television, etc. — this change has had a profound effect on the scope of experience by creating massive, connected, multi-level systems — traffic networks, power grids, markets, multinational corporations — that defy analytical understanding and seem to have a life of their own. This is where complexity truly enters our lives.
Wingood, Gina M; DiClemente, Ralph J; Villamizar, Kira; Er, Deja L; DeVarona, Martina; Taveras, Janelle; Painter, Thomas M; Lang, Delia L; Hardin, James W; Ullah, Evelyn; Stallworth, JoAna; Purcell, David W; Jean, Reynald
2011-12-01
We developed and assessed AMIGAS (Amigas, Mujeres Latinas, Informándonos, Guiándonos, y Apoyándonos contra el SIDA [friends, Latina women, informing each other, guiding each other, and supporting each other against AIDS]), a culturally congruent HIV prevention intervention for Latina women adapted from SiSTA (Sistas Informing Sistas about Topics on AIDS), an intervention for African American women. We recruited 252 Latina women aged 18 to 35 years in Miami, Florida, in 2008 to 2009 and randomized them to the 4-session AMIGAS intervention or a 1-session health intervention. Participants completed audio computer-assisted self-interviews at baseline and follow-up. Over the 6-month follow-up, AMIGAS participants reported more consistent condom use during the past 90 (adjusted odds ratio [AOR] = 4.81; P < .001) and 30 (AOR = 3.14; P < .001) days and at last sexual encounter (AOR = 2.76; P < .001), and a higher mean percentage condom use during the past 90 (relative change = 55.7%; P < .001) and 30 (relative change = 43.8%; P < .001) days than did comparison participants. AMIGAS participants reported fewer traditional views of gender roles (P = .008), greater self-efficacy for negotiating safer sex (P < .001), greater feelings of power in relationships (P = .02), greater self-efficacy for using condoms (P < .001), and greater HIV knowledge (P = .009) and perceived fewer barriers to using condoms (P < .001). Our results support the efficacy of this linguistically and culturally adapted HIV intervention among ethnically diverse, predominantly foreign-born Latina women.
GATE Monte Carlo simulation in a cloud computing environment
NASA Astrophysics Data System (ADS)
Rowedder, Blake Austin
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be reduced to clinically feasible levels without the sizable investment of a local high-performance cluster. This study investigated reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will become increasingly attractive.
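The inverse power model reported for runtime versus cluster size, T(n) ≈ a·n^(−b), can be fit by ordinary least squares in log-log space. The sketch below uses synthetic runtimes consistent with ideal 1/n scaling of a 53-minute simulation; it does not reproduce the study's measured data, which showed some deviation from ideal scaling due to overhead.

```python
from math import exp, log

def fit_inverse_power(nodes, runtimes):
    """Fit T(n) = a * n**(-b) by least squares on log T = log a - b log n."""
    xs = [log(n) for n in nodes]
    ys = [log(t) for t in runtimes]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    b = -slope
    a = exp(my + b * mx)
    return a, b

# Synthetic runtimes with ideal scaling T(n) = 53 / n (minutes)
nodes = [1, 2, 4, 8, 16, 20]
runtimes = [53.0 / n for n in nodes]
a, b = fit_inverse_power(nodes, runtimes)
```

For ideal scaling the fit recovers a = 53 and b = 1; fitting measured runtimes instead would typically give b somewhat below 1, reflecting the per-job overhead that left the 20-node runtime at 3.11 minutes rather than 53/20 = 2.65.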
Naval Open Architecture Machinery Control Systems for Next Generation Integrated Power Systems
2012-05-01
[Diagram text: a layered machinery-control software stack — computer hardware; operating system (OS/RTOS); OS/RTOS adaptation middleware (for OS portability); machinery controller framework; power management controller; and machinery control, power control, and ship system services — hosted on COTS operating systems (DOS, Windows, Linux, OS/2, QNX, SCO Unix, ...) and ISA-compatible motherboards, workstations, and portables (Compaq, Dell, ...).]
JPRS Report, Soviet Union, Foreign Military Review, No. 8, August 1987
1988-01-28
Hinkley Point (1.5 million) and Hartlepool (1.3 million). In recent years the country has begun building large hydroelectric pumped-storage power ... antenna 6. Interface equipment 7. Data transmission line terminal 8. Computer 9. Power supply plant control station 10. Radio-relay station terminals ... stations and data transmission line, interface equipment, and power distribution unit (Fig. 3). The parallel computer, which performs operations on
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, U.A.; Baumle, B.; Kohler, P.
1992-10-01
Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation. 12 refs., 9 figs., 2 tabs.
Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.
Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced, based on the so-called alternating direction method of multipliers, by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
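The decomposition pattern behind the alternating direction method of multipliers can be shown on a toy consensus problem. This is not the paper's optimal power flow formulation; it only illustrates how each agent solves a small local sub-problem and exchanges a single variable with a coordinator, mimicking the limited utility/customer information exchange.

```python
def admm_consensus(local_targets, rho=1.0, iters=100):
    """Toy consensus ADMM: each agent i minimizes (x_i - a_i)^2 subject to
    x_i = z. Agents exchange only x_i and the dual u_i with the coordinator.
    The optimum is z = mean(local_targets)."""
    n = len(local_targets)
    x = [0.0] * n
    u = [0.0] * n
    z = 0.0
    for _ in range(iters):
        # Local (agent) updates: argmin of (x - a)^2 + (rho/2)(x - z + u)^2,
        # solvable in closed form by each agent independently
        x = [(2 * a + rho * (z - ui)) / (2 + rho)
             for a, ui in zip(local_targets, u)]
        # Coordinator update: average of the received (x_i + u_i)
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # Dual updates enforce the consensus constraint x_i = z over time
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z

z = admm_consensus([1.0, 2.0, 6.0])
```

The iterates converge to the mean (3.0 here); in the paper's setting the local objectives encode each PV system's constraints and the coordinator enforces network-level consistency.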
Squatting Exercises in Older Adults: Kinematic and Kinetic Comparisons
Flanagan, Sean; Salem, George J.; Wang, Man-Ying; Sanker, Serena E.; Greendale, Gail A.
2012-01-01
Purpose Squatting activities may be used, within exercise programs, to preserve physical function in older adults. This study characterized the lower-extremity peak joint angles, peak moments, powers, work, impulse, and muscle recruitment patterns (electromyographic; EMG) associated with two types of squatting activities in elders. Methods Twenty-two healthy, older adults (ages 70–85) performed three trials each of: 1) a squat to a self-selected depth (normal squat; SQ) and 2) a squat onto a chair with a standardized height of 43.8 cm (chair squat; CSQ). Descending and ascending phase joint kinematics and kinetics were obtained using a motion analysis system and inverse dynamics techniques. Results were averaged across the three trials. A 2 × 2 (activity × phase) ANOVA with repeated measures was used to examine the biomechanical differences among the two activities and phases. EMG temporal characteristics were qualitatively examined. Results CSQ generated greater hip flexion angles, peak moments, power, and work, whereas SQ generated greater knee and ankle flexion angles, peak moments, power, and work. SQ generated a greater knee extensor impulse, a greater plantar flexor impulse and a greater total support impulse. The EMG temporal patterns were consistent with the kinetic data. Conclusions The results suggest that, with older adults, CSQ places greater demand on the hip extensors, whereas SQ places greater demand on the knee extensors and ankle plantar flexors. Clinicians may use these discriminate findings to more effectively target specific lower-extremity muscle groups when prescribing exercise for older adults. PMID:12673148
Hot Chips and Hot Interconnects for High End Computing Systems
NASA Technical Reports Server (NTRS)
Saini, Subhash
2005-01-01
I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters respectively; 4. IBM System-on-a-Chip used in IBM BlueGene/L; 5. HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in the NEC SX-6/7; 8. Power 4+ processor, which is used in the Hitachi SR11000; 9. NEC proprietary processor, which is used in the Earth Simulator. The IBM POWER5 and Red Storm Computing Systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).
Optimum Drop Jump Height in Division III Athletes: Under 75% of Vertical Jump Height.
Peng, Hsien-Te; Khuat, Cong Toai; Kernozek, Thomas W; Wallace, Brian J; Lo, Shin-Liang; Song, Chen-Yi
2017-10-01
Our purpose was to evaluate the vertical ground reaction force; impulse; hip, knee, and ankle joint moments and powers; contact time; and jump height when performing a drop jump from different drop heights based on the percentage of a performer's maximum vertical jump height (MVJH). Fifteen male Division III athletes participated voluntarily. Eleven synchronized cameras and two force platforms were used to collect data. One-way repeated-measures analysis of variance tests were used to examine the differences between drop heights. The maximum hip, knee, and ankle power absorption during 125%MVJH and 150%MVJH were greater than those during 75%MVJH. The impulses during landing at 100%MVJH, 125%MVJH, and 150%MVJH were greater than at 75%MVJH. The vertical ground reaction force during 150%MVJH was greater than during 50%MVJH, 75%MVJH, and 100%MVJH. A drop height below 75%MVJH had the most merit for increasing joint power output while keeping impact force, impulse, and joint power absorption lower. A drop height of 150%MVJH may not be desirable as a high-intensity stimulus due to the much greater impact force, which increases the risk of injury without improving jump height performance. © Georg Thieme Verlag KG Stuttgart · New York.
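Individualizing drop-box heights as percentages of MVJH is simple arithmetic; the sketch below tabulates the study's five conditions for an illustrative 60 cm maximum vertical jump (not a value from the paper).

```python
def drop_heights(mvjh_cm, percentages=(50, 75, 100, 125, 150)):
    # Drop-box height for each condition, as a percentage of the athlete's
    # maximum vertical jump height (MVJH), rounded to 0.1 cm
    return {p: round(mvjh_cm * p / 100.0, 1) for p in percentages}

# An athlete with an illustrative 60 cm maximum vertical jump
heights = drop_heights(60.0)
```

For this athlete the recommended sub-75%MVJH range corresponds to boxes below 45 cm.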
76 FR 1410 - Privacy Act of 1974; Computer Matching Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-10
...; Computer Matching Program AGENCY: Defense Manpower Data Center (DMDC), DoD. ACTION: Notice of a Computer... administrative burden, constitute a greater intrusion of the individual's privacy, and would result in additional... Liaison Officer, Department of Defense. Notice of a Computer Matching Program Among the Defense Manpower...
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Koga, Dennis (Technical Monitor)
2000-01-01
In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out every computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy.
The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation analogue' of algorithmic information complexity. It is proven in that second paper that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike algorithmic information complexity), in that there is one and only one version of it that can be applicable throughout our universe.
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Koga, Dennis (Technical Monitor)
2000-01-01
In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated.
A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task, a bound similar to the "encoding" bound governing how much the algorithmic information complexity of a Turing machine calculation can differ for two reference universal Turing machines. Finally, it is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike algorithmic information complexity), in that there is one and only one version of it that can be applicable throughout our universe.
Plasmonic computing of spatial differentiation
NASA Astrophysics Data System (ADS)
Zhu, Tengfeng; Zhou, Yihan; Lou, Yijie; Ye, Hui; Qiu, Min; Ruan, Zhichao; Fan, Shanhui
2017-05-01
Optical analog computing offers high-throughput, low-power-consumption operation for specialized computational tasks. Traditionally, optical analog computing in the spatial domain uses a bulky system of lenses and filters. Recent developments in metamaterials enable the miniaturization of such computing elements down to a subwavelength scale. However, the required metamaterial consists of a complex array of meta-atoms, and direct demonstration of image processing is challenging. Here, we show that the interference effects associated with surface plasmon excitations at a single metal-dielectric interface can perform spatial differentiation, and we experimentally demonstrate edge detection of an image without any Fourier lens. This work points to a simple yet powerful mechanism for optical analog computing at the nanoscale.
Cluster Computing for Embedded/Real-Time Systems
NASA Technical Reports Server (NTRS)
Katz, D.; Kepner, J.
1999-01-01
Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.
Operate a Nuclear Power Plant.
ERIC Educational Resources Information Center
Frimpter, Bonnie J.; And Others
1983-01-01
Describes classroom use of a computer program originally published in Creative Computing magazine. "The Nuclear Power Plant" (runs on Apple II with 48K memory) simulates the operating of a nuclear generating station, requiring students to make decisions as they assume the task of managing the plant. (JN)
The Power of Computer-aided Tomography to Investigate Marine Benthic Communities
Utilization of Computer-aided-Tomography (CT) technology is a powerful tool to investigate benthic communities in aquatic systems. In this presentation, we will attempt to summarize our 15 years of experience in developing specific CT methods and applications to marine benthic co...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel
Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility tests of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
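The counter-based scheme the abstract refers to can be sketched in a few lines: each worker atomically increments a shared counter to claim the next unprocessed contingency case, so faster workers automatically end up handling more cases. The sketch below is a minimal, hypothetical illustration using threads; the `solve` callback and case counts are placeholders, not the paper's implementation.

```python
import threading

def run_contingencies(n_cases, n_workers, solve):
    """Counter-based dynamic load balancing: each worker atomically
    claims the next unassigned case number, so fast workers naturally
    process more cases than slow ones."""
    next_case = [0]                        # shared case counter
    lock = threading.Lock()
    results = {}

    def worker():
        while True:
            with lock:                     # atomic fetch-and-increment
                case_id = next_case[0]
                next_case[0] += 1
            if case_id >= n_cases:
                return                     # all cases claimed; worker exits
            results[case_id] = solve(case_id)  # stand-in for one power-flow solve

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Toy "solve": pretend each contingency yields a severity score.
res = run_contingencies(n_cases=100, n_workers=8, solve=lambda c: c % 7)
print(len(res))  # 100: every case is solved exactly once
```

Because the counter is claimed under a lock, no case is skipped or solved twice, regardless of how unevenly the per-case solve times are distributed.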
Comparisons of selected laser beam power missions to conventionally powered missions
NASA Technical Reports Server (NTRS)
Bozek, John M.; Oleson, Steven R.; Landis, Geoffrey A.; Stavnes, Mark W.
1993-01-01
Earth-based laser sites beaming laser power to space assets have shown benefits over competing power system concepts for specific missions. Missions analyzed in this report that show benefits of laser beam power are low Earth orbit (LEO) to geosynchronous Earth orbit (GEO) transfer, LEO to low lunar orbit (LLO) cargo missions, and lunar-base power. Both laser- and solar-powered orbit-transfer vehicles (OTV's) make a 'tug' concept viable, which substantially reduces cumulative initial mass to LEO in comparison to chemical propulsion concepts. Lunar cargo missions utilizing laser electric propulsion from Earth-orbit to LLO show substantial mass saving to LEO over chemical propulsion systems. Lunar-base power system options were compared on a landed-mass basis. Photovoltaics with regenerative fuel cells, reactor-based systems, and laser-based systems were sized to meet a generic lunar-base power profile. A laser-based system begins to show landed mass benefits over reactor-based systems when proposed production facilities on the Moon require power levels greater than approximately 300 kWe. Benefit/cost ratios of laser power systems for an OTV, both to GEO and LLO, and for a lunar base were calculated to be greater than 1.
NASA Technical Reports Server (NTRS)
Huebner, Lawrence D.; Tatum, Kenneth E.
1991-01-01
Computational results are presented for three issues pertinent to hypersonic, airbreathing vehicles employing scramjet exhaust flow simulation. The first issue consists of a comparison of schlieren photographs obtained on the aftbody of a cruise missile configuration under powered conditions with two-dimensional computational solutions. The second issue presents the powered aftbody effects of modeling the inlet with a fairing to divert the external flow as compared to an operating flow-through inlet on a generic hypersonic vehicle. Finally, a comparison of solutions examining the potential of testing powered configurations in a wind-off, rather than wind-on, environment indicates that, depending on the extent of the three-dimensional plume, it may be possible to test powered aftbody hypersonic, airbreathing configurations in a wind-off environment.
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.
2013-03-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a database, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2011-11-15
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a database, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Real-time performance monitoring and management system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2007-06-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a database, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Using SRAM Based FPGAs for Power-Aware High Performance Wireless Sensor Networks
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low-power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today's applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either by more powerful microcontrollers, at the cost of higher power consumption, or, in general, by any solution capable of accelerating task execution. At this point, the use of hardware-based, and in particular FPGA, solutions might appear as a candidate technology: although power use is higher compared with lower-power devices, execution time is reduced, so overall energy could be reduced. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high-performance, high-capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, and a careful power-aware management system, to show that energy savings for certain higher-end applications can be achieved. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to prove that better energy efficiency compared to processor-based solutions can be achieved, for instance, when encryption is imposed by the application requirements. PMID:22736971
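The energy argument in the abstract above reduces to energy = power × time: the FPGA draws more power than a microcontroller but finishes the task sooner, so the energy per task can still be lower. The figures below are illustrative assumptions for a single hypothetical encryption task, not measurements from the paper:

```python
def energy_uj(power_mw, time_ms):
    """Energy in microjoules: average power (mW) times execution time (ms)."""
    return power_mw * time_ms  # 1 mW x 1 ms = 1 uJ

# Illustrative (assumed) figures for one encryption task:
mcu  = {"power_mw": 30.0,  "time_ms": 400.0}  # low-power microcontroller
fpga = {"power_mw": 180.0, "time_ms": 20.0}   # FPGA accelerator

e_mcu  = energy_uj(**mcu)   # 12000 uJ
e_fpga = energy_uj(**fpga)  # 3600 uJ
print(e_fpga < e_mcu)  # True: 6x the power, but 20x faster
```

Under these assumed numbers the FPGA uses about 3.3 times less energy per task despite its higher instantaneous power draw, which is exactly the tradeoff the node architecture exploits.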
Using SRAM based FPGAs for power-aware high performance wireless sensor networks.
Valverde, Juan; Otero, Andres; Lopez, Miguel; Portilla, Jorge; de la Torre, Eduardo; Riesgo, Teresa
2012-01-01
While for years traditional wireless sensor nodes have been based on ultra-low-power microcontrollers with sufficient but limited computing power, the complexity and number of tasks of today's applications are constantly increasing. Increasing the node duty cycle is not feasible in all cases, so in many cases more computing power is required. This extra computing power may be achieved either by more powerful microcontrollers, at the cost of higher power consumption, or, in general, by any solution capable of accelerating task execution. At this point, the use of hardware-based, and in particular FPGA, solutions might appear as a candidate technology: although power use is higher compared with lower-power devices, execution time is reduced, so overall energy could be reduced. In order to demonstrate this, an innovative WSN node architecture is proposed. This architecture is based on a high-performance, high-capacity, state-of-the-art FPGA, which combines the advantages of the intrinsic acceleration provided by the parallelism of hardware devices, the use of partial reconfiguration capabilities, and a careful power-aware management system, to show that energy savings for certain higher-end applications can be achieved. Finally, comprehensive tests have been done to validate the platform in terms of performance and power consumption, to prove that better energy efficiency compared to processor-based solutions can be achieved, for instance, when encryption is imposed by the application requirements.
To Jump or Cycle? Monitoring Neuromuscular Function in Rugby Union Players.
Roe, Gregory; Darrall-Jones, Joshua; Till, Kevin; Phibbs, Padraic; Read, Dale; Weakley, Jonathon; Jones, Ben
2017-05-01
To evaluate changes in performance of a 6-s cycle-ergometer test (CET) and countermovement jump (CMJ) during a 6-wk training block in professional rugby union players. Twelve young professional rugby union players performed 2 CETs and CMJs on the 1st and 4th mornings of every week before the commencement of daily training during a 6-wk training block. Standardized changes in the highest score of 2 CET and CMJ efforts were assessed using linear mixed modeling and magnitude-based inferences. After increases in training load during wk 3 to 5, moderate decreases in CMJ peak and mean power and small decreases in flight time were observed during wk 5 and 6 that were very likely to almost certainly greater than the smallest worthwhile change (SWC), suggesting neuromuscular fatigue. However, only small decreases, possibly greater than the SWC, were observed in CET peak power. Changes in CMJ peak and mean power were moderately greater than in CET peak power during this period, while the difference between flight time and CET peak power was small. The greater weekly changes in CMJ metrics in comparison with CET may indicate differences in the capacities of these tests to measure training-induced lower-body neuromuscular fatigue in rugby union players. However, future research is needed to ascertain the specific modes of training that elicit changes in CMJ and CET to determine the efficacy of each test for monitoring neuromuscular function in rugby union players.
Power corrupts co-operation: cognitive and motivational effects in a double EEG paradigm.
Kanso, Riam; Hewstone, Miles; Hawkins, Erin; Waszczuk, Monika; Nobre, Anna Christina
2014-02-01
This study investigated the effect of interpersonal power on co-operative performance. We used a paired electro-encephalogram paradigm: pairs of participants performed an attention task, followed by feedback indicating monetary loss or gain on every trial. Participants were randomly allocated to the power-holder, subordinate or neutral group by creating different levels of control over how a joint monetary reward would be allocated. We found that power was associated with reduced behavioural accuracy. Event-related potential analysis showed that power-holders devoted less motivational resources to their targets than did subordinates or neutrals, but did not differ at the level of early conflict detection. Their feedback potential results showed a greater expectation of rewards but reduced subjective magnitude attributed to losses. Subordinates, on the other hand, were asymmetrically sensitive to power-holders' targets. They expected fewer rewards, but attributed greater significance to losses. Our study shows that power corrupts balanced co-operation with subordinates.
Power corrupts co-operation: cognitive and motivational effects in a double EEG paradigm
Kanso, Riam; Hewstone, Miles; Hawkins, Erin; Waszczuk, Monika; Nobre, Anna Christina
2014-01-01
This study investigated the effect of interpersonal power on co-operative performance. We used a paired electro-encephalogram paradigm: pairs of participants performed an attention task, followed by feedback indicating monetary loss or gain on every trial. Participants were randomly allocated to the power-holder, subordinate or neutral group by creating different levels of control over how a joint monetary reward would be allocated. We found that power was associated with reduced behavioural accuracy. Event-related potential analysis showed that power-holders devoted less motivational resources to their targets than did subordinates or neutrals, but did not differ at the level of early conflict detection. Their feedback potential results showed a greater expectation of rewards but reduced subjective magnitude attributed to losses. Subordinates, on the other hand, were asymmetrically sensitive to power-holders’ targets. They expected fewer rewards, but attributed greater significance to losses. Our study shows that power corrupts balanced co-operation with subordinates. PMID:23160813
Advanced computer architecture for large-scale real-time applications.
DOT National Transportation Integrated Search
1973-04-01
Air traffic control automation is identified as a crucial problem which provides a complex, real-time computer application environment. A novel computer architecture in the form of a pipeline associative processor is conceived to achieve greater performance.
Computer grading of examinations
NASA Technical Reports Server (NTRS)
Frigerio, N. A.
1969-01-01
A method, using IBM cards and computer processing, automates examination grading and recording and permits use of computational problems. The student generates his own answers, and the instructor has much greater freedom in writing questions than is possible with multiple choice examinations.
Analysis of large power systems
NASA Technical Reports Server (NTRS)
Dommel, H. W.
1975-01-01
Computer-oriented power systems analysis procedures in the electric utilities are surveyed. The growth of electric power systems is discussed along with the solution of sparse network equations, power flow, and stability studies.
1990-12-01
small powerful computers to businesses and homes on an international scale (29:74). Relatively low cost, high computing power, and ease of operation were...is performed. In large part, today's AF IM professional has been inundated with powerful new technologies which were rapidly introduced and inserted...state that, "In a survey of five years of MIS research, we found the average levels of statistical power to be relatively low" (5:104). In their own
NASA Technical Reports Server (NTRS)
Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.
1974-01-01
The Shuttle Electric Power System Analysis (SEPS) computer program, which performs detailed load analysis, including prediction of the energy demands and consumables requirements of the shuttle electric power system, along with parametric and special-case studies of the shuttle electric power system, is described. The functional flow diagram of the SEPS program is presented, along with database requirements and formats, procedure and activity definitions, and mission timeline input formats. Distribution circuit input and fixed data requirements are included. Run procedures and deck setups are described.
Accelerating artificial intelligence with reconfigurable computing
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw
Reconfigurable computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated by placing their computationally intense portions into reconfigurable hardware. Reconfigurable computing combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be changed over the lifetime of the system. Similar to an ASIC, reconfigurable systems provide a method to map circuits into hardware. Reconfigurable systems therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Artificial intelligence is one such field, offering many different algorithms that can be accelerated. This paper presents example hardware implementations of Artificial Neural Networks, Genetic Algorithms and Expert Systems.
Invited presentation at Dalton College, Dalton, GA to the Alliance for Innovation & Sustainability, April 20, 2017. U.S. EPA’s Computational Toxicology Program: Innovation Powered by Chemistry It is estimated that tens of thousands of commercial and industrial chemicals are ...
Minimization search method for data inversion
NASA Technical Reports Server (NTRS)
Fymat, A. L.
1975-01-01
A technique has been developed for determining the values of selected subsets of independent variables in mathematical formulations. The required computation time increases with the first power of the number of variables. This is in contrast with classical minimization methods, for which computation time increases with the third power of the number of variables.
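The claimed scaling contrast can be made concrete with a toy cost model: if the new technique's time grows with the first power of the number of variables n while classical methods grow with the third power, the speedup grows as n². The unit constants below are assumptions for illustration only:

```python
def time_new_method(n, c=1.0):
    """Assumed cost model: computation time grows with the first power of n."""
    return c * n

def time_classical(n, c=1.0):
    """Assumed cost model: computation time grows with the third power of n."""
    return c * n ** 3

# The relative advantage grows quadratically with problem size.
for n in (10, 100, 1000):
    print(n, time_classical(n) / time_new_method(n))  # speedup = n**2
```

Under equal constants, inverting for 100 variables would be ten thousand times cheaper with the first-power method, which is why the scaling exponent, not the constant, dominates for large problems.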
The mass of massive rover software
NASA Technical Reports Server (NTRS)
Miller, David P.
1993-01-01
A planetary rover, like a spacecraft, must be fully self-contained. Once launched, a rover can only receive information from its designers and, if solar powered, power from the Sun. As the distance from Earth increases, and the demands for power on the rover increase, there is a serious tradeoff between communication and computation. Both of these subsystems are very power hungry, and either can be the major driver of the rover's power subsystem, and therefore of the minimum mass and size of the rover. This situation is discussed, along with software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced.
47 CFR 25.204 - Power limits for earth stations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... coequally with terrestrial radio communication services, the equivalent isotropically radiated power... below it. (b) In bands shared coequally with terrestrial radiocommunication services, the equivalent... greater than 5° there shall be no restriction as to the equivalent isotropically radiated power...
Mapping suitability areas for concentrated solar power plants using remote sensing data
Omitaomu, Olufemi A.; Singh, Nagendra; Bhaduri, Budhendra L.
2015-05-14
The political push to increase power generation from renewable sources such as solar energy requires knowing the best places to site new solar power plants with respect to the applicable regulatory, operational, engineering, environmental, and socioeconomic criteria. Therefore, in this paper, we present applications of remote sensing data for mapping suitability areas for concentrated solar power plants. Our approach uses a digital elevation model derived from NASA's Shuttle Radar Topographic Mission (SRTM) at a resolution of 3 arc seconds (approximately 90 m) for estimating global solar radiation for the study area. Then, we develop a computational model built on a Geographic Information System (GIS) platform that divides the study area into a grid of cells and estimates a site suitability value for each cell by computing a list of metrics based on applicable siting requirements using GIS data. The computed metrics include population density, solar energy potential, federal lands, and hazardous facilities. Overall, some 30 GIS data sets are used to compute eight metrics. The site suitability value for each cell is computed as an algebraic sum of all metrics for the cell, with the assumption that all metrics have equal weight. Finally, we color each cell according to its suitability value. Furthermore, we present results for concentrated solar power that drives a steam turbine and a parabolic mirror connected to a Stirling engine.
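The cell-scoring rule described above (an equal-weight algebraic sum of the per-cell metrics) can be sketched directly. The grid size and metric layers below are made-up stand-ins for the paper's eight metrics derived from some 30 GIS data sets:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 5)  # toy grid of cells over the study area

# Hypothetical metric layers, one score per cell (higher = more suitable).
metrics = {
    "solar_potential":  rng.random(shape),
    "low_pop_density":  rng.random(shape),
    "non_federal_land": rng.integers(0, 2, shape).astype(float),
    "far_from_hazards": rng.random(shape),
}

# Equal-weight algebraic sum over all metric layers gives each cell's score.
suitability = np.zeros(shape)
for layer in metrics.values():
    suitability += layer

best_cell = np.unravel_index(np.argmax(suitability), shape)
print(best_cell)  # (row, col) of the most suitable cell in this toy grid
```

Unequal criteria weights would simply replace the plain sum with a weighted one; the equal-weight sum is the paper's stated simplifying assumption.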
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donald D Dudenhoeffer; Bruce P Hallbert
Instrumentation, Controls, and Human-Machine Interface (ICHMI) technologies are essential to ensuring delivery and effective operation of optimized advanced Generation IV (Gen IV) nuclear energy systems. In 1996, the Watts Bar I nuclear power plant in Tennessee was the last U.S. nuclear power plant to go on line. It was, in fact, built based on pre-1990 technology. Since this last U.S. nuclear power plant was designed, there have been major advances in the field of ICHMI systems. Computer technology employed in other industries has advanced dramatically, and computing systems are now replaced every few years as they become functionally obsolete. Functional obsolescence occurs when newer, more functional technology replaces or supersedes an existing technology, even though the existing technology may well be in working order. Although ICHMI architectures are composed of much of the same technology, they have not been updated nearly as often in the nuclear power industry. For example, some newer Personal Digital Assistants (PDAs) or handheld computers may, in fact, have more functionality than the 1996 computer control system at the Watts Bar I plant. This illustrates the need to transition and upgrade current nuclear power plant ICHMI technologies.
A handheld computer as part of a portable in vivo knee joint load monitoring system
Szivek, JA; Nandakumar, VS; Geffre, CP; Townsend, CP
2009-01-01
In vivo measurement of loads and pressures acting on articular cartilage in the knee joint during various activities and rehabilitative therapies following focal defect repair will provide a means of designing activities that encourage faster and more complete healing of focal defects. It was the goal of this study to develop a totally portable monitoring system that could be used during various activities and allow continuous monitoring of forces acting on the knee. In order to make the monitoring system portable, a handheld computer with custom software, a USB-powered miniature wireless receiver, and a battery-powered coil were developed to replace a currently used computer, AC-powered benchtop receiver, and power supply. A Dell handheld running the Windows Mobile operating system (OS), programmed using LabVIEW, was used to collect strain measurements. Measurements collected by the handheld-based system connected to the miniature wireless receiver were compared with the measurements collected by a hardwired system and a computer-based system during benchtop testing and in vivo testing. The newly developed handheld-based system had a maximum accuracy of 99% when compared to the computer-based system. PMID:19789715
Hibi, N; Fujinaga, H; Ishii, K
1996-01-01
Work and power outputs during short-term, maximal exertion on a friction-loaded cycle ergometer are usually calculated from the friction force applied to the flywheel. The inertia of the flywheel is sometimes taken into consideration, but the effects of internal resistances and other factors have been ignored. The purpose of this study was to estimate their effects by comparing work or power output determined from the force exerted on the pedals (pedalling force) with work or power output determined from the friction force and the moment of inertia of the rotational parts. A group of 22 male college students accelerated a cycle ergometer as rapidly as possible for 3 s. The total work output determined from the pedalling force (TWp) was significantly greater than that calculated from the friction force and the moment of inertia (TWf). Power output determined from the pedalling force during each pedal stroke (SPp) was also significantly greater than that calculated from the friction force and the moment of inertia. The percentage difference (% diff), defined by % diff = [(TWp - TWf)/TWf] x 100, ranged from 16.8% to 49.3% with a mean value of 30.8 (SD 9.1)%. It was observed that % diff values were higher in subjects with greater TWp or greater maximal SPp. These results would indicate that internal resistances and other factors, such as the deformation of the chain and the vibrations of the entire system, may have significant effects on the measurements of work and power outputs. The effects appear to depend on the magnitudes of pedalling force and pedal velocity.
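The conventional TWf estimate (friction work plus the flywheel's change in rotational kinetic energy) and the study's % diff definition can be written out directly. The input values below are illustrative assumptions, chosen only so the result lands near the study's reported mean, not data from the paper:

```python
def work_friction_inertia(friction_force, flywheel_distance,
                          moment_of_inertia, omega_start, omega_end):
    """TWf: work against the friction load plus the change in the
    flywheel's rotational kinetic energy (all quantities in SI units)."""
    friction_work = friction_force * flywheel_distance
    inertial_work = 0.5 * moment_of_inertia * (omega_end ** 2 - omega_start ** 2)
    return friction_work + inertial_work

def percent_diff(tw_pedal, tw_flywheel):
    """% diff = [(TWp - TWf) / TWf] * 100, as defined in the study."""
    return (tw_pedal - tw_flywheel) / tw_flywheel * 100.0

# Illustrative (assumed) values for one 3-s all-out effort:
twf = work_friction_inertia(friction_force=25.0, flywheel_distance=40.0,
                            moment_of_inertia=0.9,
                            omega_start=0.0, omega_end=40.0)  # 1000 + 720 = 1720 J
twp = 2250.0  # hypothetical total work measured at the pedals, in J
print(round(percent_diff(twp, twf), 1))  # 30.8
```

Anything TWf does not account for (chain deformation, bearing losses, frame vibration) shows up in this gap between pedal-side and flywheel-side work, which is the quantity the study measured.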
Crewther, Blair T; Cronin, John; Keogh, Justin W L
2008-11-01
This study examined the effect of volume, technique, and load upon single-repetition and total-repetition kinematics and kinetics during three loading schemes. Eleven recreationally trained males each performed a power (8 sets of 6 repetitions at 45% of one-repetition maximum [1RM], 3-minute rest periods, explosive and ballistic movements), hypertrophy (10 sets of 10 repetitions at 75% 1RM, 2-minute rest periods, controlled movements), and maximal strength (6 sets of 4 repetitions at 88% 1RM, 4-minute rest periods, explosive intent) scheme involving squats. Examination of repetition data showed that load intensity (% 1RM) generally had a direct effect on forces, contraction times, impulses, and work (i.e., increasing with load), whereas power varied across loads (p < 0.001). However, total-repetition forces, contraction times, impulses, work, and power were all greater in the hypertrophy scheme (p < 0.001), because of the greater number of repetitions performed (volume) as well as lifting technique. No differences in total forces were found between the equal-volume power and maximal strength schemes, but the former did produce greater total contraction times, work, and power (p < 0.001), which may also be attributed to repetition and technique differences. Total impulses were the only variable greater in the maximal strength scheme (p < 0.001). Thus, the interaction of load, volume, and technique plays an important role in determining the mechanical responses (stimuli) afforded by these workouts. These findings may explain disparities cited within research, regarding the effectiveness of different loading strategies for hypertrophy, maximal strength, and power adaptation.
Automatic Thermal Infrared Panoramic Imaging Sensor
2006-11-01
hibernation, in which power supply to the server computer, the wireless network hardware, the GPS receiver, and the electronic compass/tilt sensor... prototype. At the operator's command on the client laptop, the receiver wakeup device on the server side will switch on the ATX power supply at the... server, to resume the power supply to all the APTIS components. The embedded computer will resume all of the functions it was performing when put
A digital computer simulation and study of a direct-energy-transfer power-conditioning system
NASA Technical Reports Server (NTRS)
Burns, W. W., III; Owen, H. A., Jr.; Wilson, T. G.; Rodriguez, G. E.; Paulkovich, J.
1974-01-01
A digital computer simulation technique, which can be used to study such composite power-conditioning systems, was applied to a spacecraft direct-energy-transfer power-processing system. The results obtained duplicate actual system performance with considerable accuracy. The validity of the approach and its usefulness in studying various aspects of system performance such as steady-state characteristics and transient responses to severely varying operating conditions are demonstrated experimentally.
Could Transparency Bring Economic Diversity?
ERIC Educational Resources Information Center
Kahlenberg, Richard D.
2007-01-01
The Spellings Commission report calls for greater access to higher education for low- and moderate-income students, greater transparency in the way higher education works and greater accountability for producing results. These recommendations are all significant in their own right, but the three concepts also converge to provide powerful support…
Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad
2015-05-01
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost- efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
Electronic neutron sources for compensated porosity well logging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, A. X.; Antolak, A. J.; Leung, K. -N.
2012-08-01
The viability of replacing Americium–Beryllium (Am–Be) radiological neutron sources in compensated porosity nuclear well logging tools with D–T or D–D accelerator-driven neutron sources is explored. The analysis consisted of developing a model for a typical well-logging borehole configuration and computing the helium-3 detector response to varying formation porosities using three different neutron sources (Am–Be, D–D, and D–T). The results indicate that, when normalized to the same source intensity, the use of a D–D neutron source has greater sensitivity for measuring the formation porosity than either an Am–Be or D–T source. The results of the study provide operational requirements that enable compensated porosity well logging with a compact, low power D–D neutron generator, which the current state-of-the-art indicates is technically achievable.
14 CFR 121.189 - Airplanes: Turbine engine powered: Takeoff limitations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Airplanes: Turbine engine powered: Takeoff... Limitations § 121.189 Airplanes: Turbine engine powered: Takeoff limitations. (a) No person operating a turbine engine powered airplane may take off that airplane at a weight greater than that listed in the...
14 CFR 121.189 - Airplanes: Turbine engine powered: Takeoff limitations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Airplanes: Turbine engine powered: Takeoff... Limitations § 121.189 Airplanes: Turbine engine powered: Takeoff limitations. (a) No person operating a turbine engine powered airplane may take off that airplane at a weight greater than that listed in the...
Code of Federal Regulations, 2011 CFR
2011-07-01
... CI ICE with a maximum engine power less than or equal to 2,237 KW (3,000 HP) and a displacement of... CI ICE with a maximum engine power greater than 2,237 KW (3,000 HP) and a displacement of less than... stationary CI ICE with a displacement of greater than or equal to 10 liters per cylinder and less than 30...
Utilising family-based designs for detecting rare variant disease associations.
Preston, Mark D; Dudbridge, Frank
2014-03-01
Rare genetic variants are thought to be important components in the causality of many diseases but discovering these associations is challenging. We demonstrate how best to use family-based designs to improve the power to detect rare variant disease associations. We show that using genetic data from enriched families (those pedigrees with greater than one affected member) increases the power and sensitivity of existing case-control rare variant tests. However, we show that transmission- (or within-family-) based tests do not benefit from this enrichment. This means that, in studies where a limited amount of genotyping is available, choosing a single case from each of many pedigrees has greater power than selecting multiple cases from fewer pedigrees. Finally, we show how a pseudo-case-control design allows a greater range of statistical tests to be applied to family data. © 2014 The Authors. Annals of Human Genetics published by John Wiley & Sons Ltd/University College London.
NASA Technical Reports Server (NTRS)
Hasse, R. A.; Hartley, C. B.
1972-01-01
Irradiation effects on three materials from the NASA Plum Brook Reactor Surveillance Program were determined. An increase of 105 K in the nil-ductility temperature for A-201 steel was observed at a fluence of approximately 3.1 x 10^18 neutrons/cm^2 (neutron energy E_n > 1.0 MeV). Only minor changes in the mechanical properties of 17-7 PH stainless steel were observed up to a fluence of 2 x 10^21 neutrons/cm^2 (E_n > 1.0 MeV). The titanium-6-percent-aluminum-4-percent-vanadium (Ti-6Al-4V) alloy maintained its notch toughness up to a fluence of 1 x 10^21 neutrons/cm^2 (E_n > 1.0 MeV).
NASA Technical Reports Server (NTRS)
Hill, R. W.
1994-01-01
The integration of CLIPS into HyperCard combines the intuitive, interactive user interface of the Macintosh with the powerful symbolic computation of an expert system interpreter. HyperCard is an excellent environment for quickly developing the front end of an application with buttons, dialogs, and pictures, while the CLIPS interpreter provides a powerful inference engine for complex problem solving and analysis. In order to understand the benefit of integrating HyperCard and CLIPS, consider the following: HyperCard is an information storage and retrieval system which exploits the use of the graphics and user interface capabilities of the Apple Macintosh computer. The user can easily define buttons, dialog boxes, information templates, pictures, and graphic displays through the use of the HyperCard tools and scripting language. What is generally lacking in this environment is a powerful reasoning engine for complex problem solving, and this is where CLIPS plays a role. CLIPS 5.0 (C Language Integrated Production System, v5.0) was developed at the Johnson Space Center Software Technology Branch to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 5.0 supports forward chaining rule systems, object-oriented language, and procedural programming for the construction of expert systems. It features incremental reset, seven conflict resolution strategies, truth maintenance, and user-defined external functions. Since CLIPS is implemented in the C language it is highly portable; in addition, it is embeddable as a callable routine from a program written in another language such as Ada or Fortran. By integrating HyperCard and CLIPS the advantages and uses of both packages are made available for a wide range of applications: rapid prototyping of knowledge-based expert systems, interactive simulations of physical systems, and intelligent control of hypertext processes, to name a few.
HyperCLIPS 2.0 is written in C-Language (54%) and Pascal (46%) for Apple Macintosh computers running Macintosh System 6.0.2 or greater. HyperCLIPS requires HyperCard 1.2 or higher, and at least 2 MB of RAM is recommended. An executable is provided. To compile the source code, the Macintosh Programmer's Workshop (MPW) version 3.0, CLIPS 5.0 (MSC-21927), and the MPW C-Language compiler are also required. NOTE: Installing this program under Macintosh System 7 requires HyperCard v2.1. This program is distributed on a 3.5 inch Macintosh format diskette. A copy of the program documentation is included on the diskette, but may be purchased separately. HyperCLIPS was developed in 1990 and version 2.0 was released in 1991. HyperCLIPS is a copyrighted work with all copyright vested in NASA. Apple, Macintosh, MPW, and HyperCard are registered trademarks of Apple Computer, Inc.
ERIC Educational Resources Information Center
Fahy, Patrick J.
Computer-assisted learning (CAL) can be used for adults functioning at any academic or grade level. In adult basic education (ABE), CAL can promote greater learning effectiveness and faster progress, concurrent learning and experience with computer literacy skills, privacy, and motivation. Adults who face barriers (financial, geographic, personal,…
People Power--Computer Games in the Classroom
ERIC Educational Resources Information Center
Hilliard, Ivan
2014-01-01
This article presents a case study in the use of the computer simulation game "People Power," developed by the International Center on Nonviolent Conflict. The principal objective of the activity was to offer students an opportunity to understand the dynamics of social conflicts, in a format not possible in a traditional classroom…
Automated design of spacecraft systems power subsystems
NASA Technical Reports Server (NTRS)
Terrile, Richard J.; Kordon, Mark; Mandutianu, Dan; Salcedo, Jose; Wood, Eric; Hashemi, Mona
2006-01-01
This paper discusses the application of evolutionary computing to a dynamic space vehicle power subsystem resource and performance simulation in a parallel processing environment. Our objective is to demonstrate the feasibility, application and advantage of using evolutionary computation techniques for the early design search and optimization of space systems.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION... Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses, with clarifications... Electrical and Electronic Engineers (IEEE) Standard 828-2005, ``IEEE Standard for Software Configuration...
NASA Technical Reports Server (NTRS)
Hanley, G. M.
1981-01-01
Cost and programmatic aspects of a recommended satellite power system are documented. Computer generated summaries are presented, and the detailed computer runs structured in a Work Breakdown Structure are given. The six configurations developed during the study period are summarized.
NREL Establishes New Center for Distributed Power
Changing electricity market demands are driving greater use of smaller-scale generation facilities, a concept known as "distributed power." The Distributed Energy Resources Center at the National Renewable Energy Laboratory (NREL) will conduct research and...
Energy-efficient STDP-based learning circuits with memristor synapses
NASA Astrophysics Data System (ADS)
Wu, Xinyu; Saxena, Vishal; Campbell, Kristy A.
2014-05-01
It is now accepted that the traditional von Neumann architecture, with processor and memory separation, is ill suited to process parallel data streams which a mammalian brain can efficiently handle. Moreover, researchers now envision computing architectures which enable cognitive processing of massive amounts of data by identifying spatio-temporal relationships in real-time and solving complex pattern recognition problems. Memristor cross-point arrays, integrated with standard CMOS technology, are expected to result in massively parallel and low-power neuromorphic computing architectures. Recently, significant progress has been made in spiking neural networks (SNN) which emulate data processing in the cortical brain. These architectures comprise a dense network of neurons and the synapses formed between the axons and dendrites. Further, unsupervised or supervised competitive learning schemes are being investigated for global training of the network. In contrast to a software implementation, hardware realization of these networks requires massive circuit overhead for addressing and individually updating network weights. Instead, we employ bio-inspired learning rules such as spike-timing-dependent plasticity (STDP) to efficiently update the network weights locally. To realize SNNs on a chip, we propose densely integrating mixed-signal integrate-and-fire neurons (IFNs) and cross-point arrays of memristors in the back-end-of-the-line (BEOL) of CMOS chips. Novel IFN circuits have been designed to drive memristive synapses in parallel while maintaining overall power efficiency (<1 pJ/spike/synapse), even at spike rates greater than 10 MHz. We present circuit design details and simulation results of the IFN with memristor synapses, its response to incoming spike trains, and STDP learning characterization.
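The pair-based STDP rule referenced above is commonly modeled as an exponential function of the spike-time difference: a pre-before-post pairing strengthens the synapse, a post-before-pre pairing weakens it. A minimal sketch follows; the parameter values are illustrative defaults, not taken from the paper.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms).

    dt > 0 (pre fires before post) -> potentiation, decaying with tau_plus.
    dt < 0 (post fires before pre) -> depression, decaying with tau_minus.
    Amplitudes and time constants here are hypothetical examples.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

In a memristive implementation this update is realized physically by overlapping pre- and post-synaptic voltage pulses across the device rather than computed explicitly.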
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Alexander S.; Bryantsev, Vyacheslav S.
Uranium is used as the basic fuel for nuclear power plants, which generate significant amounts of electricity and have life cycle carbon emissions that are as low as renewable energy sources. However, the extraction of this valuable energy commodity from the ground remains controversial, mainly because of environmental and health impacts. Alternatively, seawater offers an enormous uranium resource that may be tapped at minimal environmental cost. Nowadays, amidoxime polymers are the most widely utilized sorbent materials for large-scale extraction of uranium from seawater, but they are not perfectly selective for uranyl, UO2(2+). In particular, the competition between UO2(2+) and VO(2+)/VO2(+) cations poses a significant challenge to the efficient mining of UO2(2+). Thus, screening and rational design of more selective ligands must be accomplished. One of the key components in achieving this goal is the establishment of computational techniques capable of assessing ligand selectivity trends. Here, we report an approach based on quantum chemical calculations that achieves high accuracy in reproducing experimental aqueous stability constants for VO(2+)/VO2(+) complexes with ten different oxygen donor ligands. The predictive power of the developed computational protocol was demonstrated for amidoxime-type ligands, providing greater insights into new design strategies for the development of the next generation of adsorbents with high selectivity toward UO2(2+) over VO(2+)/VO2(+) ions. Furthermore, the results of calculations suggest that alkylation of amidoxime moieties present in poly(acrylamidoxime) sorbents can be a potential route to better discrimination between the uranyl and competing vanadium ions within seawater.
A study of workstation computational performance for real-time flight simulation
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Cleveland, Jeff I., II
1995-01-01
With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.
Thread selection according to power characteristics during context switching on compute nodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Charles J.; Blocksome, Michael A.; Randles, Amanda E.
Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution.
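The selection step in the claim (choosing the next thread "in dependence upon power characteristics") can be sketched in a few lines, assuming each runnable thread has an associated power estimate. The data structure and policy below are hypothetical illustrations, not the patented method itself.

```python
def select_next_thread(available, power_profile):
    # Pick the runnable thread with the lowest estimated power draw.
    # 'power_profile' maps thread id -> estimated watts; on a real compute
    # node this would come from measured or profiled per-thread power data.
    return min(available, key=lambda t: power_profile[t])
```

A scheduler could equally maximize throughput-per-watt or honor a node power cap; the point is only that the power characteristic, not just priority, drives the choice.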
A comparison of the postures assumed when using laptop computers and desktop computers.
Straker, L; Jones, K J; Miller, J
1997-08-01
This study evaluated the postural implications of using a laptop computer. Laptop computer screens and keyboards are joined, and are therefore unable to be adjusted separately in terms of screen height and distance, and keyboard height and distance. The posture required for their use is likely to be constrained, as little adjustment can be made for the anthropometric differences of users. In addition to the postural constraints, the study looked at discomfort levels and performance when using laptops as compared with desktops. Statistical analysis showed significantly greater neck flexion and head tilt with laptop use. The other body angles measured (trunk, shoulder, elbow, wrist, and scapula and neck protraction/retraction) showed no statistical differences. The average discomfort experienced after using the laptop for 20 min, although appearing greater than the discomfort experienced after using the desktop, was not significantly greater. When using the laptop, subjects tended to perform better than when using the desktop, though not significantly so. Possible reasons for the results are discussed and implications of the findings outlined.
Power consumption analysis DBD plasma ozone generator
NASA Astrophysics Data System (ADS)
Nur, M.; Restiwijaya, M.; Muchlisin, Z.; Susan, I. A.; Arianto, F.; Widyanto, S. A.
2016-11-01
Studies on the energy consumption of an ozone generator with various electrode constructions of a dielectric barrier discharge plasma (DBDP) reactor have been carried out. This research was done to find a reactor configuration capable of producing high ozone concentrations with low energy consumption. The DBDP reactors were constructed in a spiral-cylinder configuration; the plasma was generated by a high AC voltage of up to 25 kV at a maximum frequency of 23 kHz. Each reactor consists of an active electrode in the form of a spiral of diameter Dc, wound from copper wire of diameter Dw. In this research, we varied the number of coil windings N as well as Dc and Dw. Ozone concentrations were greater when the wire diameter Dw and the coil diameter were greater. We found that a greater impedance reduces the ozone concentration, whereas a greater capacitance increases it. The ozone concentration increases with increasing power. Maximum power for the spiral-cylinder DBD reactor occurs at Dc = 20 mm, Dw = 1.2 mm, and N = 10 coil windings, yielding a concentration greater than 20 ppm at a power draw of 177.60 W.
Kaye, Stephen B
2009-04-01
To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal, and oblique meridians, that is not limited to the paraxial and sag height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition for the average of a function. Average univariate power in the principal meridian (including spherical aberration) can be computed from the average of a function over the angle of incidence as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, correlating with biological treatment variables, and for developing analyses which require a scalar equivalent representation of refractive power.
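The "average of a function" definition invoked here is the standard one. As a sketch only, with illustrative symbols (the actual integrand and integration limit depend on the lens parameters described in the paper): averaging the power P over angles of incidence up to some maximum angle gives

```latex
\bar{P} \;=\; \frac{1}{\theta_{\max}} \int_{0}^{\theta_{\max}} P(\theta)\, d\theta
```

so that contributions away from the axis (including spherical aberration) enter the scalar measure rather than being discarded by a paraxial approximation.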
Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.
Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One of the contributions of the paper is to use multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance to perform algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
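The clustering stage can be illustrated with a bare-bones k-means over per-bus coordinates. This sketch assumes the n-dimensional Euclidean coordinates (e.g., produced by multidimensional scaling of the electrical-distance matrix) have already been computed; it is a generic k-means, not the paper's implementation.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Cluster buses by coordinate proximity. 'points' is a list of
    equal-length coordinate tuples (hypothetical MDS output)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers from the data

    def nearest(p):
        # index of the center closest to p (squared Euclidean distance)
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))

    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[nearest(p)].append(p)
        # recompute each center as the mean of its group (keep old if empty)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return [nearest(p) for p in points], centers
```

Each resulting cluster then defines a subsystem on which the voltage-control problem can be solved independently.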
Finding New Math Identities by Computer
NASA Technical Reports Server (NTRS)
Bailey, David H.; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Recently a number of interesting new mathematical identities have been discovered by means of numerical searches on high performance computers, using some newly discovered algorithms. These include the following: $\pi = \sum_{k=0}^{\infty} \frac{1}{16^k} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right)$ and $\frac{17\pi^4}{360} = \sum_{k=1}^{\infty} \left( 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{k} \right)^2 k^{-2}$, and $\zeta(3,1,3,1,\ldots,3,1) = \frac{2\pi^{4m}}{(4m+2)!}$ where $m$ = number of $(3,1)$ pairs, and where $\zeta(n_1, n_2, \ldots, n_r) = \sum_{k_1 > k_2 > \cdots > k_r} \frac{1}{k_1^{n_1} k_2^{n_2} \cdots k_r^{n_r}}$. The first identity is remarkable in that it permits one to compute the n-th binary or hexadecimal digit of pi directly, without computing any of the previous digits, and without using multiple precision arithmetic. Recently the ten billionth hexadecimal digit of pi was computed using this formula. The third identity has connections to quantum field theory. (The first and second of these have been formally established; the third is affirmed by numerical evidence only.) The background and results of this work will be described, including an overview of the algorithms and computer techniques used in these studies.
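The digit-extraction property of the first (BBP-type) identity can be demonstrated directly. The sketch below computes the n-th hexadecimal digit of pi without computing the preceding digits, using three-argument pow() for the large terms so no multiple-precision arithmetic is needed; the 20-term tail cutoff is an illustrative choice adequate for small positions.

```python
def bbp_series(j, n):
    # Fractional part of sum_{k>=0} 16^(n-k) / (8k+j).
    # For k <= n the numerator is reduced mod (8k+j) via three-argument
    # pow(), so only fractional contributions are accumulated.
    s = 0.0
    for k in range(n + 1):
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    for k in range(n + 1, n + 20):  # rapidly vanishing tail terms
        s += 16.0 ** (n - k) / (8 * k + j)
    return s % 1.0

def pi_hex_digit(pos):
    # Hex digit of pi at fractional position pos (1-indexed), from
    # frac(16^n * pi) = frac(4*S1 - 2*S4 - S5 - S6) with n = pos - 1.
    n = pos - 1
    x = (4 * bbp_series(1, n) - 2 * bbp_series(4, n)
         - bbp_series(5, n) - bbp_series(6, n)) % 1.0
    return "0123456789ABCDEF"[int(16 * x)]
```

Since pi = 3.243F6A88... in hexadecimal, `pi_hex_digit(1)` returns `'2'`, and each subsequent position can be obtained independently of the others.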
Wellhead power production with a rotary separator turbine (RP 1196)
NASA Astrophysics Data System (ADS)
Cerini, D. J.; Record, J.
1982-12-01
A rotary-separator turbine was built with full flow capacity for a 500 F downhole temperature and an 850,000 lbm/hr production rate. The test system and results obtained in field tests are described. The preliminary design of a 10-megawatt wellhead power plant for the Roosevelt-type resource is described. This system shows a specific power of 0.0013 kW-hr per lbm, which is 20 percent greater than an optimized wellhead single-stage flash plant and 26 percent greater than a central plant of 20 to 50 MW capacity when consideration is given to steam-gathering system pressure drop between the wells and the central plant.
Volpe, Ellen M.; Hardie, Thomas L.; Cerulli, Catherine; Sommers, Marilynn S.; Morrison-Beedy, Dianne
2013-01-01
Adolescent girls with older male main partners are at greater risk for adverse sexual health outcomes than other adolescent girls. One explanation for this finding is that low relationship power occurs with partner age difference. Using a cross-sectional, descriptive design, we investigated the effect of partner age difference between an adolescent girl and her male partner on sexual risk behavior through the mediators of sexual relationship power, and physical intimate partner violence (IPV), and psychological IPV severity. We chose Blanc’s framework to guide this study as it depicts the links among demographic, social, economic, relationship, family and community characteristics, and reproductive health outcomes with gender-based relationship power and violence. Urban adolescent girls (N = 155) completed an anonymous computer-assisted self-interview survey to examine partner and relationship factors’ effect on consistent condom use. Our sample had an average age of 16.1 years with a mean partner age of 17.8 years. Partners were predominantly African American (75%), non-Hispanic (74%), and low-income (81%); 24% of participants reported consistent condom use in the last 3 months. Descriptive, correlation, and multiple mediation analyses were conducted. Partner age difference was negatively associated with consistent condom use (−.4292, p < .01); however, the indirect effects through three proposed mediators (relationship power, physical IPV, or psychological IPV severity) were not statistically significant. Further studies are needed to explore alternative rationale explaining the relationship between partner age differences and sexual risk factors within adolescent sexual relationships. Nonetheless, for clinicians and researchers, these findings underscore the heightened risk associated with partner age differences and impact of relationship dynamics on sexual risk behavior. PMID:23345572
Self-Powered Wireless Carbohydrate/Oxygen Sensitive Biodevice Based on Radio Signal Transmission
Self-powered wireless carbohydrate/oxygen sensitive biodevice based on radio signal transmission.
Falk, Magnus; Alcalde, Miguel; Bartlett, Philip N.; De Lacey, Antonio L.; Gorton, Lo; Gutierrez-Sanchez, Cristina; Haddad, Raoudha; Kilburn, Jeremy; Leech, Dónal; Ludwig, Roland; Magner, Edmond; Mate, Diana M.; Conghaile, Peter Ó.; Ortiz, Roberto; Pita, Marcos; Pöller, Sascha; Ruzgas, Tautgirdas; Salaj-Kosla, Urszula; Schuhmann, Wolfgang; Sebelius, Fredrik; Shao, Minling; Stoica, Leonard; Sygmund, Cristoph; Tilly, Jonas; Toscano, Miguel D.; Vivekananthan, Jeevanthi; Wright, Emma; Shleev, Sergey
2014-01-01
Here for the first time, we detail self-contained (wireless and self-powered) biodevices with wireless signal transmission. Specifically, we demonstrate the operation of self-sustained carbohydrate and oxygen sensitive biodevices, consisting of a wireless electronic unit, radio transmitter and separate sensing bioelectrodes, supplied with electrical energy from a combined multi-enzyme fuel cell generating sufficient current at required voltage to power the electronics. A carbohydrate/oxygen enzymatic fuel cell was assembled by comparing the performance of a range of different bioelectrodes followed by selection of the most suitable, stable combination. Carbohydrates (viz. lactose for the demonstration) and oxygen were also chosen as bioanalytes, being important biomarkers, to demonstrate the operation of the self-contained biosensing device, employing enzyme-modified bioelectrodes to enable the actual sensing. A wireless electronic unit, consisting of a micropotentiostat, an energy harvesting module (voltage amplifier together with a capacitor), and a radio microchip, were designed to enable the biofuel cell to be used as a power supply for managing the sensing devices and for wireless data transmission. The electronic system used required current and voltages greater than 44 µA and 0.57 V, respectively to operate; which the biofuel cell was capable of providing, when placed in a carbohydrate and oxygen containing buffer. In addition, a USB based receiver and computer software were employed for proof-of concept tests of the developed biodevices. Operation of bench-top prototypes was demonstrated in buffers containing different concentrations of the analytes, showcasing that the variation in response of both carbohydrate and oxygen biosensors could be monitored wirelessly in real-time as analyte concentrations in buffers were changed, using only an enzymatic fuel cell as a power supply. PMID:25310190
NASA Astrophysics Data System (ADS)
Tsuda, Kunikazu; Tano, Shunichi; Ichino, Junko
Lowering power consumption has become a worldwide concern. It is also a growing issue in computer systems, reflected in the expanding use of software-as-a-service and cloud computing, whose market has grown since 2000; over the same period, the number of data centers that house and manage these computers has increased rapidly. Power consumption at data centers accounts for a large share of total IT power usage and is still rising quickly. This research focuses on air conditioning, which accounts for the largest portion of a data center's electric power consumption, and proposes a technique to lower that consumption by using natural cool air and snow to control temperature and humidity. We verify the effectiveness of this approach by experiment. Furthermore, we examine the extent to which energy reduction is possible when a data center is located in Hokkaido.
Further investigations of the W-test for pairwise epistasis testing.
Howey, Richard; Cordell, Heather J
2017-01-01
Background: In a recent paper, a novel W-test for pairwise epistasis testing was proposed that appeared, in computer simulations, to have higher power than competing alternatives. Application to genome-wide bipolar data detected significant epistasis between SNPs in genes of relevant biological function. Network analysis indicated that the implicated genes formed two separate interaction networks, each containing genes highly related to autism and neurodegenerative disorders. Methods: Here we investigate further the properties and performance of the W-test via theoretical evaluation, computer simulations and application to real data. Results: We demonstrate that, for common variants, the W-test is closely related to several existing tests of association allowing for interaction, including logistic regression on 8 degrees of freedom, although logistic regression can show inflated type I error for low minor allele frequencies, whereas the W-test shows good/conservative type I error control. Although in some situations the W-test can show higher power, logistic regression is not limited to tests on 8 degrees of freedom but can instead be tailored to impose greater structure on the assumed alternative hypothesis, offering a power advantage when the imposed structure matches the true structure. Conclusions: The W-test is a potentially useful method for testing for association - without necessarily implying interaction - between genetic variants and disease, particularly when one or more of the genetic variants are rare. For common variants, the advantages of the W-test are less clear, and, indeed, there are situations where existing methods perform better. In our investigations, we further uncover a number of problems with the practical implementation and application of the W-test (to bipolar disorder) previously described, apparently due to inadequate use of standard data quality-control procedures. This observation leads us to urge caution in interpreting the previously presented results, most of which we consider highly likely to be artefacts.
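The 8 degrees of freedom mentioned above arise from the 3 × 3 = 9 two-locus genotype classes. As a hedged illustration (this is a simple chi-square analogue on simulated data, not the W-test itself or the paper's logistic-regression formulation), a case-control test over the 2 × 9 genotype-combination table carries the same 8 df:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
# Simulated genotypes (0/1/2 minor-allele counts) at two SNPs
# for 500 cases and 500 controls (null data, no real association).
g1 = rng.integers(0, 3, size=1000)
g2 = rng.integers(0, 3, size=1000)
status = np.repeat([0, 1], 500)

# 2 x 9 contingency table over the two-locus genotype combinations.
combo = 3 * g1 + g2
table = np.zeros((2, 9))
for s, c in zip(status, combo):
    table[s, c] += 1

chi2, p, dof, _ = chi2_contingency(table)
assert dof == 8   # 9 genotype classes versus case/control -> (2-1)*(9-1) df
```

A logistic-regression version would instead code the 9 combinations as 8 indicator variables and compare full versus null models with a likelihood-ratio test on the same 8 df.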
Design Trade-off Between Performance and Fault-Tolerance of Space Onboard Computers
NASA Astrophysics Data System (ADS)
Gorbunov, M. S.; Antonov, A. A.
2017-01-01
It is well known that there is a trade-off between performance and power consumption in onboard computers. Fault tolerance is another important factor affecting performance, chip area and power consumption. Employing special SRAM cells and error-correcting codes is often too expensive relative to the performance required. We discuss the possibility of finding optimal solutions for a modern onboard computer for scientific apparatus, focusing on multi-level cache memory design.
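The error-correcting-code side of this trade-off can be sketched with the classic Hamming(7,4) code, which corrects any single bit flip (e.g. a single-event upset in SRAM) at the cost of three parity bits per four data bits. This is a generic textbook illustration, not the coding scheme discussed in the paper; real cache ECC is typically wider (e.g. SECDED over 64-bit words) but follows the same principle:

```python
def encode(d):
    """Hamming(7,4): append 3 parity bits to 4 data bits.
    Codeword layout (1-indexed): p1 p2 d0 p3 d1 d2 d3."""
    d0, d1, d2, d3 = d
    p1 = d0 ^ d1 ^ d3   # covers positions 1,3,5,7
    p2 = d0 ^ d2 ^ d3   # covers positions 2,3,6,7
    p3 = d1 ^ d2 ^ d3   # covers positions 4,5,6,7
    return [p1, p2, d0, p3, d1, d2, d3]

def decode(c):
    """Correct up to one flipped bit, then extract the data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3      # syndrome = 1-indexed error position
    c = c[:]
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

cw = encode([1, 0, 1, 1])
cw[4] ^= 1                          # inject a single-bit upset
assert decode(cw) == [1, 0, 1, 1]   # the data survive the flip
```

The 75% storage overhead here shows why, as the abstract notes, aggressive protection can be too expensive; wider code words amortize the parity cost.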
An efficient method for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.
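The paper's spatial composite-rigid-body formulation is beyond a short sketch, but the object it computes, the joint-space inertia matrix M(q), can be illustrated with the textbook closed form for a planar two-link arm with point masses at the link tips. All masses and lengths below are assumed values, and this closed form is a standard result, not the paper's algorithm:

```python
import numpy as np

def inertia_matrix_2link(q2, m1=1.0, m2=1.0, l1=1.0, l2=1.0):
    """Joint-space inertia matrix M(q) of a planar 2-link arm with
    point masses at the link tips (textbook closed form)."""
    c2 = np.cos(q2)
    m11 = m1 * l1**2 + m2 * (l1**2 + 2 * l1 * l2 * c2 + l2**2)
    m12 = m2 * (l1 * l2 * c2 + l2**2)
    m22 = m2 * l2**2
    return np.array([[m11, m12], [m12, m22]])

M = inertia_matrix_2link(0.3)
# Any valid inertia matrix is symmetric positive definite.
assert np.allclose(M, M.T)
assert np.all(np.linalg.eigvalsh(M) > 0)
```

Composite-rigid-body methods such as the one in the abstract compute these entries recursively for arms with many links, reusing the accumulated inertia of outboard links rather than re-deriving each entry from scratch.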
Power strain imaging based on vibro-elastography techniques
NASA Astrophysics Data System (ADS)
Wen, Xu; Salcudean, S. E.
2007-03-01
This paper describes a new ultrasound elastography technique, power strain imaging, based on vibro-elastography (VE) techniques. With this method, tissue is compressed by a vibrating actuator driven by low-pass or band-pass filtered white noise, typically in the 0-20 Hz range. Tissue displacements at different spatial locations are estimated by correlation-based approaches on the raw ultrasound radio frequency signals and recorded in time sequences. The power spectra of these time sequences are computed by Fourier spectral analysis techniques. As the average of the power spectrum is proportional to the squared amplitude of the tissue motion, the square root of the average power over the range of excitation frequencies is used as a measure of the tissue displacement. Then tissue strain is determined by the least squares estimation of the gradient of the displacement field. The computation of the power spectra of the time sequences can be implemented efficiently by using Welch's periodogram method with moving windows or with accumulative windows with a forgetting factor. Compared to the transfer function estimation originally used in VE, the computation of cross spectral densities is not needed, which saves both memory and computation time. Phantom experiments demonstrate that the proposed method produces stable and operator-independent strain images with high signal-to-noise ratio in real time. This approach has also been tested on patient data from the prostate region, and the results are encouraging.
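The core displacement measure described above (the square root of the average Welch power over the excitation band) and the least-squares strain gradient can be sketched as follows. The sampling rate, band edges, depths, and strain value are assumed for illustration, not taken from the paper:

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)

def band_displacement(x, fs, fhi=20.0):
    """Square root of the average Welch power over the 0-20 Hz
    excitation band; proportional to displacement amplitude."""
    f, pxx = welch(x, fs=fs, nperseg=512)
    return np.sqrt(np.mean(pxx[f <= fhi]))

# Two synthetic displacement sequences at 5 Hz, amplitudes 1 and 3:
r1 = band_displacement(1.0 * np.sin(2 * np.pi * 5 * t), fs)
r3 = band_displacement(3.0 * np.sin(2 * np.pi * 5 * t), fs)
assert abs(r3 / r1 - 3.0) < 1e-6    # measure scales linearly with amplitude

# Strain: least-squares slope of displacement versus depth.
depth = np.linspace(0, 0.04, 20)    # depths in metres (assumed)
disp = 2.5e-4 * depth               # uniform strain of 2.5e-4
strain = np.polyfit(depth, disp, 1)[0]
```

`welch` averages periodograms over overlapping windows, which is what makes the measure stable against the random (filtered-noise) excitation the abstract describes.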
Computers in Electrical Engineering Education at Virginia Polytechnic Institute.
ERIC Educational Resources Information Center
Bennett, A. Wayne
1982-01-01
Discusses use of computers in Electrical Engineering (EE) at Virginia Polytechnic Institute. Topics include: departmental background, level of computing power using large scale systems, mini and microcomputers, use of digital logic trainers and analog/hybrid computers, comments on integrating computers into EE curricula, and computer use in…
The semantic system is involved in mathematical problem solving.
Zhou, Xinlin; Li, Mengyi; Li, Leinian; Zhang, Yiyun; Cui, Jiaxin; Liu, Jie; Chen, Chuansheng
2018-02-01
Numerous studies have shown that the brain regions around bilateral intraparietal cortex are critical for number processing and arithmetical computation. However, the neural circuits for more advanced mathematics such as mathematical problem solving (with little routine arithmetical computation) remain unclear. Using functional magnetic resonance imaging (fMRI), this study (N = 24 undergraduate students) compared neural bases of mathematical problem solving (i.e., number series completion, mathematical word problem solving, and geometric problem solving) and arithmetical computation. Direct subject- and item-wise comparisons revealed that mathematical problem solving typically had greater activation than arithmetical computation in all 7 regions of the semantic system (which was based on a meta-analysis of 120 functional neuroimaging studies on semantic processing). Arithmetical computation typically had greater activation in the supplementary motor area and left precentral gyrus. The results suggest that the semantic system in the brain supports mathematical problem solving. Copyright © 2017 Elsevier Inc. All rights reserved.
Method and apparatus for improved high power impulse magnetron sputtering
Anders, Andre
2013-11-05
A high power impulse magnetron sputtering apparatus and method using a vacuum chamber with a magnetron target and a substrate positioned in the vacuum chamber. A field coil is positioned between the magnetron target and substrate, with a pulsed power supply and/or a coil bias power supply connected to the field coil. The pulsed power supply outputs power pulse widths greater than 100 µs.
Symbolic algebra approach to the calculation of intraocular lens power following cataract surgery
NASA Astrophysics Data System (ADS)
Hjelmstad, David P.; Sayegh, Samir I.
2013-03-01
We present a symbolic approach based on matrix methods that allows for the analysis and computation of intraocular lens power following cataract surgery. We extend the basic matrix approach corresponding to paraxial optics to include astigmatism and other aberrations. The symbolic approach allows for a refined analysis of the potential sources of errors ("refractive surprises"). We demonstrate the computation of lens powers including toric lenses that correct for both defocus (myopia, hyperopia) and astigmatism. A specific implementation in Mathematica allows an elegant and powerful method for the design and analysis of these intraocular lenses.
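The matrix approach described can be sketched with paraxial ray-transfer (ABCD) matrices: model the cornea and IOL as thin lenses separated by reduced distances, then solve for the IOL power that brings parallel rays (an object at infinity) to a focus on the retina, i.e. that zeroes the A element of the cornea-to-retina system matrix. All ocular parameters below (corneal power 43 D, axial length 24 mm, index 4/3, IOL 5 mm behind the cornea) are assumed textbook values, not figures from the paper, and numeric matrices stand in for the paper's symbolic computation:

```python
import numpy as np

n = 4 / 3           # aqueous/vitreous refractive index (assumed)
Pc = 43.0           # corneal power in diopters (assumed typical value)
d1 = 0.005          # cornea-to-IOL distance in metres (assumed)
L = 0.024           # axial length in metres (assumed)

lens = lambda P: np.array([[1.0, 0.0], [-P, 1.0]])   # thin-lens refraction
gap = lambda d: np.array([[1.0, d / n], [0.0, 1.0]]) # reduced translation

def A(P_iol):
    """Top-left element of the cornea-to-retina ray-transfer matrix;
    it is zero exactly when parallel rays focus on the retina."""
    M = gap(L - d1) @ lens(P_iol) @ gap(d1) @ lens(Pc)
    return M[0, 0]

# A is affine in P_iol, so two evaluations determine the root of A(P) = 0.
P_emmetropia = -A(0.0) / (A(1.0) - A(0.0))   # about 19 D for these values
```

Extending the matrices to carry astigmatic (toric) components, as the paper does, replaces the scalar powers with 2 × 2 power matrices but leaves this solve-for-zero-focus structure intact.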
Impact of remote sensing upon the planning, management, and development of water resources
NASA Technical Reports Server (NTRS)
Loats, H. L.; Fowler, T. R.; Frech, S. L.
1974-01-01
A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.
Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth
2017-09-13
Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.
The power to resist: The relationship between power, stigma, and negative symptoms in schizophrenia
Campellone, Timothy R.; Caponigro, Janelle M.; Kring, Ann M.
2014-01-01
Stigmatizing beliefs about mental illness can be a daily struggle for people with schizophrenia. While investigations into the impact of internalizing stigma on negative symptoms have yielded mixed results, resistance to stigmatizing beliefs has received little attention. In this study, we examined the linkage between internalized stigma, stigma resistance, negative symptoms, and social power, or perceived ability to influence others during social interactions among people with schizophrenia. Further, we sought to determine whether resistance to stigma would be bolstered by social power, with greater power in relationships with other possibly buffering against motivation/pleasure negative symptoms. Fifty-one people with schizophrenia or schizoaffective disorder completed measures of social power, internalized stigma, and stigma resistance. Negative symptoms were assessed using the Clinical Assessment Interview for Negative Symptoms (CAINS). Greater social power was associated with less internalized stigma and negative symptoms as well as more stigma resistance. Further, the relationship between social power and negative symptoms was partially mediated by stigma resistance. These findings provide evidence for the role of stigma resistance as a viable target for psychosocial interventions aimed at improving motivation and social power in people with schizophrenia. PMID:24326180
Argus, Christos K; Gill, Nicholas D; Keogh, Justin W L
2012-10-01
Levels of strength and power have been used to effectively discriminate between different levels of competition; however, there is limited literature in rugby union athletes. To assess the difference in strength and power between levels of competition, 112 rugby union players, including 43 professionals, 19 semiprofessionals, 32 academy level, and 18 high school level athletes, were assessed for bench press and box squat strength, and bench throw and jump squat power. High school athletes were not assessed for jump squat power. Raw data along with data normalized to body mass with a derived power exponent were log transformed and analyzed. With the exception of box squat and bench press strength between professional and semiprofessional athletes, higher level athletes produced greater absolute and relative strength and power outputs than did lower level athletes (4-51%; small to very large effect sizes). Lower level athletes should strive to attain greater levels of strength and power in an attempt to reach or to be physically prepared for the next level of competition. Furthermore, the ability to produce high levels of power, rather than strength, may be a better determinant of playing ability between professional and semiprofessional athletes.
Changes in Cognitive Performance Are Associated with Changes in Sleep in Older Adults With Insomnia.
Wilckens, Kristine A; Hall, Martica H; Nebes, Robert D; Monk, Timothy H; Buysse, Daniel J
2016-01-01
The present study examined sleep features associated with cognition in older adults and examined whether sleep changes following insomnia treatment were associated with cognitive improvements. Polysomnography and cognition (recall, working memory, and reasoning) were assessed before and after an insomnia intervention (Brief Behavioral Treatment of Insomnia [BBTI] or information control [IC]) in 77 older adults with insomnia. Baseline wake-after-sleep-onset (WASO) was associated with recall. Greater NREM (nonrapid eye movement) delta power and lower NREM sigma power were associated with greater working memory and reasoning. The insomnia intervention did not improve performance. However, increased absolute delta power and decreased relative sigma power were associated with improved reasoning. Findings suggest that improvements in executive function may occur with changes in NREM architecture.
Ethical Responsibility Key to Computer Security.
ERIC Educational Resources Information Center
Lynn, M. Stuart
1989-01-01
The pervasiveness of powerful computers and computer networks has raised the specter of new forms of abuse and of concomitant ethical issues. Blurred boundaries, hackers, the Computer Worm, ethical issues, and implications for academic institutions are discussed. (MLW)
Simple and Effective Algorithms: Computer-Adaptive Testing.
ERIC Educational Resources Information Center
Linacre, John Michael
Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…
Computing, Information, and Communications Technology (CICT) Program Overview
NASA Technical Reports Server (NTRS)
VanDalsem, William R.
2003-01-01
The Computing, Information and Communications Technology (CICT) Program's goal is to enable NASA's Scientific Research, Space Exploration, and Aerospace Technology Missions with greater mission assurance, for less cost, with increased science return through the development and use of advanced computing, information and communication technologies
Using Business Simulations as Authentic Assessment Tools
ERIC Educational Resources Information Center
Neely, Pat; Tucker, Jan
2012-01-01
New modalities for assessing student learning exist as a result of advances in computer technology. Conventional measurement practices have been transformed into computer based testing. Although current testing replicates assessment processes used in college classrooms, a greater opportunity exists to use computer technology to create authentic…
Characterization of real-time computers
NASA Technical Reports Server (NTRS)
Shin, K. G.; Krishna, C. M.
1984-01-01
A real-time system consists of a computer controller and controlled processes. Despite the synergistic relationship between these two components, they have been traditionally designed and analyzed independently of and separately from each other; namely, computer controllers by computer scientists/engineers and controlled processes by control scientists. As a remedy for this problem, in this report real-time computers are characterized by performance measures based on computer controller response time that are: (1) congruent to the real-time applications, (2) able to offer an objective comparison of rival computer systems, and (3) experimentally measurable/determinable. These measures, unlike others, provide the real-time computer controller with a natural link to controlled processes. In order to demonstrate their utility and power, these measures are first determined for example controlled processes on the basis of control performance functionals. They are then used for two important real-time multiprocessor design applications - the number-power tradeoff and fault-masking and synchronization.
Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi
Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...
2015-05-22
Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).
Designing contributing student pedagogies to promote students' intrinsic motivation to learn
NASA Astrophysics Data System (ADS)
Herman, Geoffrey L.
2012-12-01
In order to maximize the effectiveness of our pedagogies, we must understand how our pedagogies align with prevailing theories of cognition and motivation and design our pedagogies according to this understanding. When implementing Contributing Student Pedagogies (CSPs), students are expected to make meaningful contributions to the learning of their peers, and consequently, instructors inherently give students power and control over elements of the class. With this loss of power, instructors will become more aware that the quality of the learning environment will depend on the level of students' motivation and engagement rather than the instructor's mastery of content or techniques. Given this greater reliance on student motivation, we will discuss how motivation theories such as Self-Determination Theory (SDT) match and support the use of CSP and how CSP can be used to promote students' intrinsic motivation (IM) to learn. We conclude with examples of how we use principles of SDT to guide our design and use of CSP. We will particularly focus on how we changed the discussion sections of a large, required, sophomore-level class on digital logic and computer organization at a large, research university at relatively low-cost to the presiding class instructor.
Quantification of peripheral and central blood pressure variability using a time-frequency method.
Kouchaki, Z; Butlin, M; Qasem, A; Avolio, A P
2016-08-01
Systolic blood pressure variability (BPV) is associated with cardiovascular events. As the beat-to-beat variation of blood pressure is due to interaction of several cardiovascular control systems operating with different response times, assessment of BPV by spectral analysis using the continuous measurement of arterial pressure in the finger is used to differentiate the contribution of these systems in regulating blood pressure. However, as baroreceptors are centrally located, this study considered applying a continuous aortic pressure signal estimated noninvasively from finger pressure for assessment of systolic BPV by a time-frequency method using the Short Time Fourier Transform (STFT). The average ratio of low-frequency to high-frequency band power (LF_PB/HF_PB) was computed by time-frequency decomposition of peripheral systolic pressure (pSBP) and derived central aortic systolic blood pressure (cSBP) in 30 healthy subjects (25-62 years) as a marker of balance between the cardiovascular control systems contributing to low- and high-frequency blood pressure variability. The results showed that BPV assessed from finger pressure (pBPV) overestimated the values obtained from central aortic pressure (cBPV) for identical cardiac cycles (P < 0.001), with the overestimation being greater at higher power.
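The STFT-based LF/HF band-power ratio can be sketched as follows. The resampling rate, the band edges (0.04-0.15 Hz for LF, 0.15-0.4 Hz for HF, the conventional heart-rate-variability bands), and the synthetic pressure series are assumptions for illustration, not the authors' data or exact bands:

```python
import numpy as np
from scipy.signal import stft

fs = 4.0                            # beat series resampled at 4 Hz (assumed)
t = np.arange(0, 300, 1 / fs)
# Synthetic systolic-pressure series with a dominant 0.1 Hz (LF)
# oscillation and a weaker 0.25 Hz (HF) oscillation, in mmHg.
sbp = (120
       + 3.0 * np.sin(2 * np.pi * 0.10 * t)
       + 1.0 * np.sin(2 * np.pi * 0.25 * t))

f, seg_times, Z = stft(sbp - sbp.mean(), fs=fs, nperseg=256)
P = np.abs(Z) ** 2                  # time-frequency power
lf = (f >= 0.04) & (f < 0.15)
hf = (f >= 0.15) & (f <= 0.4)
# Band power per STFT segment, then the average (median) ratio over time.
lf_hf = np.median(P[lf].sum(axis=0) / P[hf].sum(axis=0))
assert lf_hf > 1.0                  # LF dominates, as constructed
```

Running the same computation on a derived central aortic series versus the raw finger series would reproduce the paper's pBPV/cBPV comparison.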
Skylon Aerodynamics and SABRE Plumes
NASA Technical Reports Server (NTRS)
Mehta, Unmeel; Aftosmis, Michael; Bowles, Jeffrey; Pandya, Shishir
2015-01-01
An independent partial assessment is provided of the technical viability of the Skylon aerospace plane concept, developed by Reaction Engines Limited (REL). The objectives are to verify REL's engineering estimates of airframe aerodynamics during powered flight and to assess the impact of Synergetic Air-Breathing Rocket Engine (SABRE) plumes on the aft fuselage. Pressure lift and drag coefficients derived from simulations conducted with Euler equations for unpowered flight compare very well with those REL computed with engineering methods. The REL coefficients for powered flight are increasingly less acceptable as the freestream Mach number is increased beyond 8.5, because the engineering estimates did not account for the increasing favorable (in terms of drag and lift coefficients) effect of underexpanded rocket engine plumes on the aft fuselage. At Mach numbers greater than 8.5, the thermal environment around the aft fuselage is a known unknown, a potential design and/or performance risk issue. The adverse effects of shock waves on the aft fuselage and plume-induced flow separation are other potential risks. The development of an operational reusable launcher from the Skylon concept necessitates the judicious use of a combination of engineering methods, advanced methods based on required physics or analytical fidelity, test data, and independent assessments.
Detection of an anomalous pressure on a magneto-inertial-fusion load current diagnostic
Hess, Mark Harry; Hutsel, Brian Thomas; Jennings, Christopher Ashley; ...
2017-01-30
Recent Magnetized Liner Inertial Fusion experiments at the Sandia National Laboratories Z pulsed power facility have featured a PDV (Photonic Doppler Velocimetry) diagnostic in the final power feed section for measuring load current. In this paper, we report on an anomalous pressure that is detected on this PDV diagnostic very early in time during the current ramp. Early time load currents that are greater than both B-dot upstream current measurements and existing Z machine circuit models by at least 1 MA would be necessary to describe the measured early time velocity of the PDV flyer. This leads us to infer that the pressure producing the early time PDV flyer motion cannot be attributed to the magnetic pressure of the load current but rather to an anomalous pressure. Using the MHD code ALEGRA, we are able to compute a time-dependent anomalous pressure function, which when added to the magnetic pressure of the load current, yields simulated flyer velocities that are in excellent agreement with the PDV measurement. As a result, we also provide plausible explanations for what could be the origin of the anomalous pressure.
NASA Astrophysics Data System (ADS)
Bai, He; Chen, Xiangshan; Zhao, Guangyu; Xiao, Chenglei; Li, Chen; Zhong, Cheng; Chen, Yu
2017-08-01
In order to enhance the mixing process of soil contaminated by oil and water, a double helical ribbon (DHR) impeller was developed. In this study, an unsteady simulation of the solid-liquid two-phase flow in a stirring tank with the DHR impeller was conducted using computational fluid dynamics with the multiple reference frame (MRF) method. It was found that during the 0-3.0 s stage, the velocity of the liquid was greater than that of the solid particles, while the power consumption was 5-6 times greater than during smooth operation. At t > 3.0 s, the velocities of the liquid and the solid particles were almost the same, and the required power was 32 kW. The flow of the solid particles in the tank was a typical axial circulation flow, and the solid accumulated at the bottom of the tank dispersed in the following sequence: first the bottom loop region, then the annular region near the groove wall, and finally the area near the axial center. The results show that the DHR impeller is suitable for mixing a liquid-solid two-phase system.
Multi-mounted X-ray cone-beam computed tomography
NASA Astrophysics Data System (ADS)
Fu, Jian; Wang, Jingzheng; Guo, Wei; Peng, Peng
2018-04-01
As a powerful nondestructive inspection technique, X-ray computed tomography (X-CT) has been widely applied to clinical diagnosis, industrial production and cutting-edge research. Imaging efficiency is currently one of the major obstacles for the applications of X-CT. In this paper, a multi-mounted three dimensional cone-beam X-CT (MM-CBCT) method is reported. It consists of a novel multi-mounted cone-beam scanning geometry and a corresponding three dimensional statistical iterative reconstruction algorithm. The scanning geometry is the most distinctive element of the design and differs significantly from current CBCT systems. By permitting the cone-beam scanning of multiple objects simultaneously, the proposed approach has the potential to achieve an imaging efficiency orders of magnitude greater than conventional methods. Although multiple objects can also be bundled together and scanned simultaneously by conventional CBCT methods, this leads to increased penetration thickness and signal crosstalk. In contrast, MM-CBCT substantially avoids these problems. This work comprises a numerical study of the method and its experimental verification using a dataset measured with a developed MM-CBCT prototype system. This technique will provide a possible solution for CT inspection at large scale.
NASA Technical Reports Server (NTRS)
Centrella, Joan M.
2010-01-01
The final merger of two massive black holes produces a powerful burst of gravitational radiation, emitting more energy than all the stars in the observable universe combined. The resulting gravitational waveforms will be easily detectable by the space-based LISA out to redshifts z greater than 10, revealing the masses and spins of the black holes to high precision. If the merging black holes have unequal masses, or asymmetric spins, the final black hole that forms can recoil with a velocity exceeding 1000 km/s. And, when the black holes merge in the presence of gas and magnetic fields, various types of electromagnetic signals may also be produced. For more than 30 years, scientists have tried to compute black hole mergers using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. Within the past few years, however, this situation has changed dramatically, with a series of remarkable breakthroughs. This talk will focus on new results that are revealing the dynamics and waveforms of binary black hole mergers, recoil velocities, and the possibility of accompanying electromagnetic outbursts.
NASA Astrophysics Data System (ADS)
Salberger, Olof; Korepin, Vladimir
We introduce a new model of interacting spin-1/2 particles. It describes interactions of three nearest neighbors. The Hamiltonian can be expressed in terms of Fredkin gates. The Fredkin gate (also known as the controlled-swap gate) is a computational circuit suitable for reversible computing. Our construction generalizes the model presented by Peter Shor and Ramis Movassagh to half-integer spins. Our model can be solved by means of Catalan combinatorics in the form of random walks on the upper half plane of a square lattice (Dyck walks). Each Dyck path can be mapped onto a wave function of spins. The ground state is an equally weighted superposition of Dyck walks (instead of Motzkin walks). We can also express it as a matrix product state. We further construct a model of interacting spin-3/2 and greater half-integer spins. The models with higher spins require coloring of Dyck walks. We construct an SU(k) symmetric model (where k is the number of colors). The leading term of the entanglement entropy is then proportional to the square root of the length of the lattice (as in the Shor-Movassagh model). The gap closes as a high power of the length of the lattice [5, 11].
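The Catalan combinatorics behind the ground state can be made concrete by enumerating Dyck walks directly: balanced sequences of up/down steps that never dip below the axis, each of which maps onto one spin configuration in the superposition. The helper names below are hypothetical, not code from the paper:

```python
from math import comb

def dyck_paths(n):
    """All walks of n up-steps (+1) and n down-steps (-1) that never go below zero."""
    def extend(path, height, ups, downs):
        if ups == n and downs == n:
            yield path
            return
        if ups < n:                        # may always add an up-step
            yield from extend(path + [+1], height + 1, ups + 1, downs)
        if height > 0:                     # down-step allowed only above the axis
            yield from extend(path + [-1], height - 1, ups, downs + 1)
    return list(extend([], 0, 0, 0))

def catalan(n):
    return comb(2 * n, n) // (n + 1)       # closed form for the Catalan numbers

paths = dyck_paths(4)
print(len(paths), catalan(4))              # both 14
```

The ground state is the equal-amplitude sum over exactly these `len(paths)` configurations, which is what makes the entanglement entropy analytically tractable.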
Announcing Supercomputer Summit
Wells, Jack; Bland, Buddy; Nichols, Jeff; Hack, Jim; Foertter, Fernanda; Hagen, Gaute; Maier, Thomas; Ashfaq, Moetasim; Messer, Bronson; Parete-Koon, Suzanne
2018-01-16
Summit is the next leap in leadership-class computing systems for open science. With Summit we will be able to address, with greater complexity and higher fidelity, questions concerning who we are, our place on Earth, and in our universe. Summit will deliver more than five times the computational performance of Titan's 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017. Like Titan, Summit will have a hybrid architecture, and each node will contain multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA's high-speed NVLink. Each node will have over half a terabyte of coherent memory (high bandwidth memory + DDR4) addressable by all CPUs and GPUs, plus 800 GB of non-volatile RAM that can be used as a burst buffer or as extended memory. To provide a high rate of I/O throughput, the nodes will be connected in a non-blocking fat-tree using a dual-rail Mellanox EDR InfiniBand interconnect. Upon completion, Summit will allow researchers in all fields of science unprecedented access to solving some of the world's most pressing challenges.
Piezohydraulic Pump Development
NASA Technical Reports Server (NTRS)
Lynch, Christopher S.
2005-01-01
Reciprocating piston piezohydraulic pumps were developed originally under the Smart Wing Phase II program (Lynch) and later under the CHAP program (CSA, Kinetic Ceramics). These pumps focused on 10 cm scale stack actuators operating below resonance and, more recently, at resonance. A survey of commercially available linear actuators indicates that obtaining power density and specific power greater than electromagnetic linear actuators requires driving the stacks at frequencies greater than 1 kHz at high fields. For 10 cm scale actuators, the power supply signal conditioning becomes large and heavy, and the soft PZT stack actuators generate substantial heat due to internal losses. Reciprocation frequencies can be increased and material losses significantly decreased through the use of millimeter-scale single crystal stack actuators. We are presently targeting the design of pumps that utilize stacks at the 1-10 mm length scale and run at reciprocating frequencies of 20 kHz or greater. This offers significant advantages over current approaches, including eliminating audible noise and significantly increasing the power density and specific power of the system (including electronics). The pump currently under development will comprise an LC resonant drive of a resonant crystal and head mass operating against a resonant fluid column. Each of these resonant systems is high-Q, and together they should produce a single high-Q second-order system.
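Tuning an LC resonant drive to the target reciprocation frequency follows the standard relation f = 1 / (2π√(LC)). The component values below are hypothetical, chosen only to land near the 20 kHz target mentioned above:

```python
import math

# Electrical resonance of an LC drive: f = 1 / (2*pi*sqrt(L*C)).
# L and C are assumed example values, not the pump's actual components.
L = 1e-3        # inductance in henries (assumed)
C = 63.3e-9     # capacitance in farads (assumed)

f = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(round(f))  # approximately 20 000 Hz
```

In the actual pump, this electrical resonance must also be matched to the mechanical resonance of the crystal/head mass and the fluid column for the coupled system to behave as a single second-order resonator.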
Comparative Implementation of High Performance Computing for Power System Dynamic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng
Dynamic simulation for transient stability assessment is one of the most important, but computationally intensive, tasks in power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming on a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computational accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and to maximize the utilization and benefits of HPC during the development process.
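The kind of parallelism such schemes exploit, many independent or loosely coupled simulation runs spread across processors, can be sketched with a stand-in dynamic model. The damped "swing"-like ODE, case count, and pool size below are all hypothetical; this is an analogy for the MPI/OpenMP schemes, not their actual kernels:

```python
from multiprocessing import Pool

def simulate_case(case_id, steps=10000, dt=1e-3):
    """Stand-in transient run: integrate a damped oscillator for one fault case."""
    angle, speed = 0.1 * case_id, 0.0     # per-case initial disturbance
    for _ in range(steps):
        accel = -0.5 * speed - angle      # toy damped linear "swing" dynamics
        speed += accel * dt
        angle += speed * dt
    return case_id, angle                 # disturbance decays toward zero

if __name__ == "__main__":
    # Independent cases run concurrently, mirroring how a parallel scheme
    # distributes contingency simulations across processors.
    with Pool(4) as pool:
        results = pool.starmap(simulate_case, [(i,) for i in range(8)])
    print(sorted(results)[0])
```

Real transient-stability codes are harder to parallelize than this because the network equations couple all machines at every step; that coupling is exactly what the paper's four schemes partition differently.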
Application of Blind Quantum Computation to Two-Party Quantum Computation
NASA Astrophysics Data System (ADS)
Sun, Zhiyuan; Li, Qin; Yu, Fang; Chan, Wai Hong
2018-06-01
Blind quantum computation (BQC) allows a client who has only limited quantum power to achieve quantum computation with the help of a remote quantum server and still keep the client's input, output, and algorithm private. Recently, Kashefi and Wallden extended BQC to achieve two-party quantum computation which allows two parties Alice and Bob to perform a joint unitary transform upon their inputs. However, in their protocol Alice has to prepare rotated single qubits and perform Pauli operations, and Bob needs to have a powerful quantum computer. In this work, we also utilize the idea of BQC to put forward an improved two-party quantum computation protocol in which the operations of both Alice and Bob are simplified since Alice only needs to apply Pauli operations and Bob is just required to prepare and encrypt his input qubits.
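The client-side Pauli encryption step described above can be illustrated with a single-qubit quantum one-time pad: the client masks the state with a random X^a Z^b and later undoes it. The numpy simulation below is a generic sketch of that one step, not the protocol's full machinery:

```python
import numpy as np

# Quantum one-time pad on one qubit: encrypt with E = X^a Z^b, decrypt with
# its inverse Z^b X^a (X and Z are self-inverse). Illustrative only.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

rng = np.random.default_rng(7)
a, b = rng.integers(0, 2, size=2)            # secret key bits held by the client

psi = np.array([0.6, 0.8])                   # client's input qubit amplitudes
E = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
enc = E @ psi                                # what leaves the client
dec = np.linalg.matrix_power(Z, b) @ np.linalg.matrix_power(X, a) @ enc
print(np.allclose(dec, psi))                 # True: decryption recovers psi
```

Averaged over all four key choices, the encrypted qubit is maximally mixed, which is what keeps the input private from the server.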
Evolutionary computing for the design search and optimization of space vehicle power subsystems
NASA Technical Reports Server (NTRS)
Kordon, M.; Klimeck, G.; Hanks, D.
2004-01-01
Evolutionary computing has proven to be a straightforward and robust approach for optimizing a wide range of difficult analysis and design problems. This paper discusses the application of these techniques to an existing space vehicle power subsystem resource and performance analysis simulation in a parallel processing environment.
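A minimal evolutionary loop of the kind described, with the power subsystem simulation replaced by a stand-in objective, might look like the following (population size, mutation scale, and the fitness function are all illustrative assumptions):

```python
import random

random.seed(42)

def fitness(x):
    # Stand-in for the subsystem performance simulation; peak at x = 3.0.
    return -(x - 3.0) ** 2

pop = [random.uniform(-10, 10) for _ in range(30)]   # random initial designs
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                               # selection: keep the fittest
    children = [random.choice(parents) + random.gauss(0, 0.3)  # mutation
                for _ in range(20)]
    pop = parents + children                         # elitism + offspring

best = max(pop, key=fitness)
print(best)  # near 3.0, the optimum of the stand-in objective
```

In the paper's setting, evaluating `fitness` means running the power subsystem simulation, which is why the authors distribute evaluations across a parallel processing environment.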
Computer-Aided Engineering Tools | Water Power | NREL
Computer-aided engineering tools for water power energy converters will provide a full range of simulation capabilities for single devices and arrays. Simulation of water power technologies on high-performance computers enables the study of complex systems and experimentation. Such simulation is critical to accelerating progress in energy programs within the U.S. Department of Energy.
Manual of phosphoric acid fuel cell power plant cost model and computer program
NASA Technical Reports Server (NTRS)
Lu, C. Y.; Alkasab, K. A.
1984-01-01
Cost analysis of a phosphoric acid fuel cell power plant includes two parts: a method for estimating system capital costs, and an economic analysis that determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.
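The levelized annual cost the abstract refers to is conventionally computed by annualizing capital via a capital recovery factor, CRF = i(1+i)^n / ((1+i)^n − 1). The sketch below uses hypothetical cost figures, not values from the NASA model:

```python
def capital_recovery_factor(i, n):
    """Annualization factor for a capital cost at discount rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

# All figures below are assumed for illustration, not from the cost model.
capital_cost = 2_000_000.0   # installed capital cost, dollars (assumed)
annual_om = 50_000.0         # annual operation and maintenance, dollars (assumed)
i, n = 0.08, 20              # discount rate and plant life in years (assumed)

levelized_annual_cost = capital_cost * capital_recovery_factor(i, n) + annual_om
print(round(levelized_annual_cost))  # annualized capital plus O&M, dollars/year
```

The actual model layers fuel costs, escalation, and tax treatment on top of this annualization, but the CRF is the core of any levelized-cost calculation.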