Sample records for parallel time series

  1. Series and parallel arc-fault circuit interrupter tests.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Dean; Fresquez, Armando J.; Gudgel, Bob

    2013-07-01

    While the 2011 National Electrical Code (NEC) only requires series arc-fault protection, some arc-fault circuit interrupter (AFCI) manufacturers are designing products to detect and mitigate both series and parallel arc-faults. Sandia National Laboratories (SNL) has extensively investigated the electrical differences of series and parallel arc-faults and has offered possible classification and mitigation solutions. As part of this effort, Sandia National Laboratories has collaborated with MidNite Solar to create and test a 24-string combiner box with an AFCI which detects, differentiates, and de-energizes series and parallel arc-faults. In the case of the MidNite AFCI prototype, series arc-faults are mitigated by opening the PV strings, whereas parallel arc-faults are mitigated by shorting the array. A range of different experimental series and parallel arc-fault tests with the MidNite combiner box were performed at the Distributed Energy Technologies Laboratory (DETL) at SNL in Albuquerque, NM. In all the tests, the prototype de-energized the arc-faults in the time period required by the arc-fault circuit interrupt testing standard, UL 1699B. The experimental tests confirm series and parallel arc-faults can be successfully mitigated with a combiner box-integrated solution.

  2. Application of cross recurrence plot for identification of temperature fluctuations synchronization in parallel minichannels

    NASA Astrophysics Data System (ADS)

    Grzybowski, H.; Mosdorf, R.

    2016-09-01

    The temperature fluctuations occurring in flow boiling in parallel minichannels with a diameter of 1 mm have been experimentally investigated and analysed. The wall temperature was recorded at each minichannel outlet by a thermocouple with a 0.08 mm diameter probe. The time series were recorded during dynamic two-phase flow instabilities, which are accompanied by chaotic temperature fluctuations. The time series were denoised using wavelet decomposition and analysed using cross recurrence plots (CRP), which enable the study of synchronization between two time series.

  3. Time Series Discord Detection in Medical Data using a Parallel Relational Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbridge, Diane; Rintoul, Mark Daniel; Wilson, Andrew T.

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Because high-frequency medical sensors generate enormous volumes of data, storing and processing continuous medical data is an emerging big-data problem. Detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is the subsequence that differs most from all other subsequences of the time series, meaning that it exhibits abnormal or unusual data trends. In this study, we implemented two versions of a time series discord detection algorithm on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the discord detection algorithm takes each possible subsequence and calculates its distance to the nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was used to order the time series data and improve time efficiency. The results showed efficient data loading, decoding, and discord searches over a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
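
    The brute-force discord search described above is easy to sketch outside a DBMS. The following is a minimal, illustrative Python version (the window length, toy data, and function name are assumptions, not taken from the study): every subsequence is compared with every non-overlapping subsequence, and the one whose nearest non-self match is farthest away is reported as the top discord.

```python
import numpy as np

def brute_force_discord(series, window):
    """Return (start_index, distance) of the top discord: the subsequence whose
    distance to its nearest non-self (non-overlapping) match is largest."""
    n = len(series) - window + 1
    subs = np.array([series[i:i + window] for i in range(n)])
    best_idx, best_dist = -1, -np.inf
    for i in range(n):
        nearest = np.inf
        for j in range(n):
            if abs(i - j) < window:          # skip trivial self-matches
                continue
            nearest = min(nearest, np.linalg.norm(subs[i] - subs[j]))
        if nearest > best_dist:              # farthest-from-its-nearest-match wins
            best_idx, best_dist = i, nearest
    return best_idx, best_dist

# Toy usage: a sine wave with an injected anomaly near sample 300
t = np.linspace(0, 20 * np.pi, 1000)
x = np.sin(t)
x[300:320] += 1.5
print(brute_force_discord(x, window=32))
```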

  4. Time Series Discord Detection in Medical Data using a Parallel Relational Database [PowerPoint]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbridge, Diane; Wilson, Andrew T.; Rintoul, Mark Daniel

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Because high-frequency medical sensors generate enormous volumes of data, storing and processing continuous medical data is an emerging big-data problem. Detecting anomalies in real time is especially important for detecting and preventing patient emergencies. A time series discord is the subsequence that differs most from all other subsequences of the time series, meaning that it exhibits abnormal or unusual data trends. In this study, we implemented two versions of a time series discord detection algorithm on a high-performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute-force version of the discord detection algorithm takes each possible subsequence and calculates its distance to the nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was used to order the time series data and improve time efficiency. The results showed efficient data loading, decoding, and discord searches over a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.

  5. Comparison between four dissimilar solar panel configurations

    NASA Astrophysics Data System (ADS)

    Suleiman, K.; Ali, U. A.; Yusuf, Ibrahim; Koko, A. D.; Bala, S. I.

    2017-12-01

    Several studies of photovoltaic systems have focused on how they operate and on the energy required to operate them. Little attention has been paid to their configurations, the modeling of mean time to system failure, availability, cost-benefit analysis, and the comparison of parallel and series-parallel designs. In this research work, four system configurations were studied. Configuration I consists of two sub-components arranged in parallel with 24 V each, configuration II consists of four sub-components arranged logically in parallel with 12 V each, configuration III consists of four sub-components arranged in series-parallel with 8 V each, and configuration IV has six sub-components with 6 V each arranged in series-parallel. Comparative analysis was carried out using the Chapman-Kolmogorov method. Explicit expressions for the mean time to system failure and the steady-state availability were derived, and a cost-benefit analysis was performed on the basis of the comparison. A ranking method was used to determine the optimal configuration. Analytical and numerical solutions for system availability and mean time to system failure were obtained, and configuration I was found to be the optimal configuration.
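
    As a rough illustration of why such configurations differ, the sketch below compares a simple parallel arrangement with a series-parallel one using textbook reliability formulas and a numerically integrated mean time to failure. It assumes identical components with a constant (exponential) failure rate, a simplification of the time-dependent Chapman-Kolmogorov treatment in the paper; all numerical values are hypothetical.

```python
import numpy as np

lam = 1e-4                                   # hypothetical failure rate per hour
t = np.linspace(0.0, 50_000.0, 5001)
dt = t[1] - t[0]
r = np.exp(-lam * t)                         # reliability of one component over time

R_parallel_pair = 1 - (1 - r) ** 2           # two components in parallel
R_series_parallel = R_parallel_pair ** 2     # two parallel pairs connected in series

# Mean time to failure: integrate the reliability curve (simple Riemann sum)
mttf_parallel = R_parallel_pair.sum() * dt
mttf_series_parallel = R_series_parallel.sum() * dt
print(f"MTTF, one parallel pair:   {mttf_parallel:,.0f} h")
print(f"MTTF, two pairs in series: {mttf_series_parallel:,.0f} h")
```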

  6. Light-weight Parallel Python Tools for Earth System Modeling Workflows

    NASA Astrophysics Data System (ADS)

    Mickelson, S. A.; Paul, K.; Xu, H.; Dennis, J.; Brown, D. I.

    2015-12-01

    With the growth in computing power over the last 30 years, earth system modeling codes have become increasingly data-intensive. As an example, it is expected that the data required for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR6) will increase by more than 10x to an expected 25 PB per climate model. Faced with this daunting challenge, developers of the Community Earth System Model (CESM) have chosen to change the format of their data for long-term storage from time-slice to time-series, in order to reduce the download bandwidth needed for later analysis and post-processing by climate scientists. Hence, efficient tools are required to (1) transform the data from time-slice to time-series format and (2) compute climatology statistics, needed for many diagnostic computations, on the resulting time-series data. To address the first of these challenges, we have developed a parallel Python tool for converting time-slice model output to time-series format. To address the second, we have developed a parallel Python tool to perform fast time-averaging of time-series data. These tools are designed to be light-weight, easy to install, and nearly free of dependencies, and they can be inserted into the Earth system modeling workflow with negligible disruption. In this work, we present the motivation, approach, and testing results of these two light-weight parallel Python tools, as well as our plans for future research and development.
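
    Both transformations have a simple serial core, which the sketch below illustrates with in-memory NumPy arrays; the actual tools add parallel I/O over model output files, which is omitted here. The variable names and array shapes are invented for illustration.

```python
import numpy as np

# Hypothetical model output: 12 monthly "time-slice" snapshots, each holding
# every variable on a small (lat, lon) grid at a single time.
rng = np.random.default_rng(0)
slices = [{"T": rng.normal(288, 5, (4, 8)),    # temperature-like field
           "P": rng.normal(1000, 10, (4, 8))}  # pressure-like field
          for _ in range(12)]

# (1) Time-slice -> time-series: one array per variable with a leading time axis.
timeseries = {var: np.stack([s[var] for s in slices], axis=0) for var in slices[0]}

# (2) Climatology statistics: e.g. the time mean of each time-series variable.
climatology = {var: data.mean(axis=0) for var, data in timeseries.items()}

print(timeseries["T"].shape)   # (12, 4, 8) -> time, lat, lon
print(climatology["T"].shape)  # (4, 8)
```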

  7. The incorporation of focused history in checklist for early recognition and treatment of acute illness and injury.

    PubMed

    Jayaprakash, Namita; Ali, Rashid; Kashyap, Rahul; Bennett, Courtney; Kogan, Alexander; Gajic, Ognjen

    2016-08-31

    Diagnostic error and delay are critical impediments to the safety of critically ill patients. The Checklist for early recognition and treatment of acute illness and injury (CERTAIN) has been developed as a tool that facilitates timely and error-free evaluation of critically ill patients. While the focused history is an essential part of the CERTAIN framework, it is not clear how best to choreograph this step in the process of evaluation and treatment of the acutely decompensating patient. An un-blinded crossover clinical simulation study was designed in which volunteer critical care clinicians (fellows and attendings) were randomly assigned to start with a focused history choreographed either in series with (after) or in parallel to the primary survey. A focused history was obtained using the standardized SAMPLE model that is incorporated into Advanced Trauma Life Support (ATLS) and Pediatric Advanced Life Support (PALS). Clinicians were asked to assess six acutely decompensating patients using pre-determined clinical scenarios (three in series choreography, three in parallel). Once the initial choreography was completed, the clinician would cross over to the alternative choreography. The primary outcome was the cognitive burden assessed through the NASA task load index. The secondary outcome was the time to completion of a focused history. A total of 84 simulated cases (42 in parallel, 42 in series) were tested on 14 clinicians. Both the overall cognitive load and the time to completion improved with each successive practice scenario; however, no difference was observed between the series and parallel choreographies. The median (IQR) overall NASA TLX task load index was 39 (17-58) for series and 43 (27-52) for parallel, p = 0.57. The median (IQR) time to completion of the tasks was 125 (112-158) seconds in series and 122 (108-158) seconds in parallel, p = 0.92. In this clinical simulation study assessing the incorporation of a focused history into the primary survey of a non-trauma critically ill patient, there was no difference in cognitive burden or time to task completion when using series choreography (after the exam) compared with parallel choreography (concurrent with the primary survey physical exam). However, with repetition of the task, both overall task load and time to completion improved in each of the choreographies.

  8. High voltage pulse generator

    DOEpatents

    Fasching, George E.

    1977-03-08

    An improved high-voltage pulse generator has been provided which is especially useful in ultrasonic testing of rock core samples. An N number of capacitors are charged in parallel to V volts and at the proper instant are coupled in series to produce a high-voltage pulse of N times V volts. Rapid switching of the capacitors from the paralleled charging configuration to the series discharging configuration is accomplished by using silicon-controlled rectifiers which are chain self-triggered following the initial triggering of a first one of the rectifiers connected between the first and second of the plurality of charging capacitors. A timing and triggering circuit is provided to properly synchronize triggering pulses to the first SCR at a time when the charging voltage is not being applied to the parallel-connected charging capacitors. Alternate circuits are provided for controlling the application of the charging voltage from a charging circuit to the parallel capacitors, providing a selection of at least two different intervals in which the charging voltage is turned "off" to allow the SCR's connecting the capacitors in series to turn "off" before recharging begins. The high-voltage pulse-generating circuit including the N capacitors and corresponding SCR's which connect the capacitors in series when triggered "on" further includes diodes and series-connected inductors between the parallel-connected charging capacitors which allow sufficiently fast charging of the capacitors for a high pulse repetition rate and yet allow considerable control of the decay time of the high-voltage pulses from the pulse-generating circuit.
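
    The parallel-charge/series-discharge idea reduces to simple arithmetic, sketched below with hypothetical component values (the patent does not specify these numbers): N stages charged to V volts in parallel ideally deliver an N times V pulse when switched into series.

```python
# Parallel-charge / series-discharge arithmetic with hypothetical values.
N = 8          # number of capacitor stages (assumed)
V = 2.5e3      # charging voltage per stage, volts (assumed)
C = 0.1e-6     # capacitance per stage, farads (assumed)

V_out = N * V                      # ideal output amplitude when switched into series
C_series = C / N                   # effective capacitance of the series-discharging stack
E_pulse = 0.5 * N * C * V ** 2     # energy stored during parallel charging

print(f"ideal output pulse amplitude: {V_out / 1e3:.1f} kV")
print(f"effective discharge capacitance: {C_series * 1e9:.1f} nF")
print(f"stored energy per pulse: {E_pulse:.2f} J")
```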

  9. Simulation Exploration through Immersive Parallel Planes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  10. Simulation Exploration through Immersive Parallel Planes: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  11. OceanXtremes: Scalable Anomaly Detection in Oceanographic Time-Series

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Armstrong, E. M.; Chin, T. M.; Gill, K. M.; Greguska, F. R., III; Huang, T.; Jacob, J. C.; Quach, N.

    2016-12-01

    The oceanographic community must meet the challenge to rapidly identify features and anomalies in complex and voluminous observations to further science and improve decision support. Given this data-intensive reality, we are developing an anomaly detection system, called OceanXtremes, powered by an intelligent, elastic Cloud-based analytic service backend that enables execution of domain-specific, multi-scale anomaly and feature detection algorithms across the entire archive of 15 to 30-year ocean science datasets. Our parallel analytics engine is extending the NEXUS system and exploits multiple open-source technologies: Apache Cassandra as a distributed spatial "tile" cache, Apache Spark for in-memory parallel computation, and Apache Solr for spatial search and storing pre-computed tile statistics and other metadata. OceanXtremes provides these key capabilities: Parallel generation (Spark on a compute cluster) of 15 to 30-year Ocean Climatologies (e.g. sea surface temperature or SST) in hours or overnight, using simple pixel averages or customizable Gaussian-weighted "smoothing" over latitude, longitude, and time; Parallel pre-computation, tiling, and caching of anomaly fields (daily variables minus a chosen climatology) with pre-computed tile statistics; Parallel detection (over the time-series of tiles) of anomalies or phenomena by regional area-averages exceeding a specified threshold (e.g. high SST in El Nino or SST "blob" regions), or more complex, custom data mining algorithms; Shared discovery and exploration of ocean phenomena and anomalies (facet search using Solr), along with unexpected correlations between key measured variables; Scalable execution for all capabilities on a hybrid Cloud, using our on-premise OpenStack Cloud cluster or at Amazon. The key idea is that the parallel data-mining operations will be run "near" the ocean data archives (a local "network" hop) so that we can efficiently access the thousands of files making up a three decade time-series. The presentation will cover the architecture of OceanXtremes, parallelization of the climatology computation and anomaly detection algorithms using Spark, example results for SST and other time-series, and parallel performance metrics.
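
    The core anomaly-detection step, stripped of the Spark, Cassandra, and Solr machinery, is a climatology subtraction followed by a regional threshold test. The NumPy sketch below illustrates that step on synthetic daily SST-like fields; the region, threshold, and injected event are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
days, nlat, nlon = 3650, 10, 20                        # ten years of daily fields
doy = np.arange(days) % 365
sst = (20 + 3 * np.sin(2 * np.pi * doy / 365))[:, None, None] \
      + rng.normal(0, 0.5, (days, nlat, nlon))
sst[2000:2030] += 2.0                                  # injected warm "blob" event

# Climatology: mean field for each day of year, averaged over all years
clim = np.array([sst[doy == d].mean(axis=0) for d in range(365)])

# Anomaly field and regional area-average exceedance test
anom = sst - clim[doy]
region_mean = anom[:, 2:8, 5:15].mean(axis=(1, 2))     # hypothetical region of interest
threshold = 1.0                                        # assumed anomaly threshold, deg C
event_days = np.where(region_mean > threshold)[0]
print(event_days[:10])                                 # clusters near days 2000-2029
```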

  12. Enhancing sedimentation by improving flow conditions using parallel retrofit baffles.

    PubMed

    He, Cheng; Scott, Eric; Rochfort, Quintin

    2015-09-01

    In this study, placing parallel-connected baffles in the vicinity of the inlet was proposed to improve hydraulic conditions for enhancing TSS (total suspended solids) removal. The purpose of the retrofit baffle design is to divide the large and fast inflow into smaller and slower flows to increase flow uniformity. This avoids short-circuiting and increases residence time in the sedimentation basin. The newly proposed parallel-connected baffle configuration was assessed in the laboratory by comparing its TSS removal performance and the optimal flow residence time with those from the widely used series-connected baffles. The experimental results showed that the parallel-connected baffles outperformed the series-connected baffles because they could disperse the flow faster and in less space by splitting the large inflow into many small branches instead of depending solely on internal flow friction over a longer flow path, as was the case with the series-connected baffles. Being able to dampen faster flow before it enters the sedimentation basin is critical to reducing the possibility of disturbing any settled particles, especially under high inflow conditions. Also, for a large sedimentation basin, it may be more economically feasible to deploy the proposed parallel retrofit baffles in the vicinity of the inlet than series-connected baffles throughout the entire settling basin.

  13. The study on the parallel processing based time series correlation analysis of RBC membrane flickering in quantitative phase imaging

    NASA Astrophysics Data System (ADS)

    Lee, Minsuk; Won, Youngjae; Park, Byungjun; Lee, Seungrag

    2017-02-01

    Not only static but also dynamic characteristics of the red blood cell (RBC) contain useful information for blood diagnosis. Quantitative phase imaging (QPI) can capture sample images with subnanometer-scale depth resolution and millisecond-scale temporal resolution. Various studies have used QPI for RBC diagnosis, and recently several approaches have been developed to decrease the processing time of RBC information extraction from QPI through parallel computing; however, previous studies focused on static parameters such as cell morphology or on simple dynamic parameters such as the root mean square (RMS) of the membrane fluctuations. Previously, we presented a practical blood test method using time series correlation analysis of RBC membrane flickering with QPI. However, this method showed a limit to clinical application because of the long computation time. In this study, we present an accelerated time series correlation analysis of RBC membrane flickering using a parallel computing algorithm. The method yielded fractal scaling exponents for the surrounding medium and for normal RBCs consistent with our previous research.

  14. Power-balancing instantaneous optimization energy management for a novel series-parallel hybrid electric bus

    NASA Astrophysics Data System (ADS)

    Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao

    2012-11-01

    Energy management (EM) is a core technique of the hybrid electric bus (HEB) for optimizing fuel economy, and it is unique to the corresponding configuration. Existing control strategies seldom take battery power management into account together with internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. According to the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel modes as well as the control rules of the power-balancing strategy are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel consumption to constitute the objective function, which minimizes the fuel consumption at each sampled time and coordinates the power distribution in real time between the engine and the battery. To validate that the proposed strategy is effective and reasonable, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox is used as the controller for hardware-in-the-loop testing integrated with a bench test. Both the simulation and hardware-in-the-loop results demonstrate that the proposed strategy not only sustains the battery SOC within its operational range and keeps the engine operating point in the peak-efficiency region, but also improves the fuel economy of the series-parallel hybrid electric bus (SPHEB) by up to 30.73% compared with the prototype bus; relative to a rule-based strategy, the PBIO strategy reduces fuel consumption by up to 12.38%. The proposed research ensures that the PBIO algorithm is applicable in real time, improves the efficiency of the SPHEB system, and suits the complicated configuration well.

  15. Estimating phase synchronization in dynamical systems using cellular nonlinear networks

    NASA Astrophysics Data System (ADS)

    Sowa, Robert; Chernihovskyi, Anton; Mormann, Florian; Lehnertz, Klaus

    2005-06-01

    We propose a method for estimating phase synchronization between time series using the parallel computing architecture of cellular nonlinear networks (CNNs). Applying this method to time series of coupled nonlinear model systems and to electroencephalographic time series from epilepsy patients, we show that an accurate approximation of the mean phase coherence R — a bivariate measure for phase synchronization — can be achieved with CNNs using polynomial-type templates.
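
    For reference, the quantity the CNN approximates, the mean phase coherence R, can be computed directly from two time series via the Hilbert transform, as in the short sketch below (a conventional serial computation, not the CNN implementation; the toy signals are assumptions).

```python
import numpy as np
from scipy.signal import hilbert

def mean_phase_coherence(x, y):
    """R = |<exp(i*(phi_x - phi_y))>|, phases taken from the analytic signal."""
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

# Two noisy oscillators at the same frequency (toy data)
rng = np.random.default_rng(2)
t = np.linspace(0, 100, 5000)
x = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * t + 0.5) + 0.3 * rng.standard_normal(t.size)
print(mean_phase_coherence(x, y))   # near 1 for phase locking, near 0 for none
```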

  16. Voltage and Current Clamp Transients with Membrane Dielectric Loss

    PubMed Central

    Fitzhugh, R.; Cole, K. S.

    1973-01-01

    Transient responses of a space-clamped squid axon membrane to step changes of voltage or current are often approximated by exponential functions of time, corresponding to a series resistance and a membrane capacity of 1.0 μF/cm2. Curtis and Cole (1938, J. Gen. Physiol. 21:757) found, however, that the membrane had a constant phase angle impedance z = z1(jωτ)^-α, with a mean α = 0.85. (α = 1.0 for an ideal capacitor; α < 1.0 may represent dielectric loss.) This result is supported by more recently published experimental data. For comparison with experiments, we have computed functions expressing voltage and current transients with constant phase angle capacitance, a parallel leakage conductance, and a series resistance, at nine values of α from 0.5 to 1.0. A series in powers of t^α provided a good approximation for short times; one in powers of t^-α, for long times; for intermediate times, a rational approximation matching both series for a finite number of terms was used. These computations may help in determining experimental series resistances and parallel leakage conductances from membrane voltage or current clamp data. PMID:4754194
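
    A small numeric sketch of the circuit described above follows: a constant-phase-angle element in parallel with a leakage conductance and in series with a resistance. The parameter values are illustrative only and are not the fitted axon values from the paper.

```python
import numpy as np

def clamp_circuit_impedance(omega, z1=1.0, tau=1e-3, alpha=0.85,
                            r_series=10.0, g_leak=1e-3):
    """Series resistance + (constant-phase element in parallel with leakage).

    The constant-phase element is z = z1 * (j*omega*tau)**(-alpha);
    alpha = 1.0 is an ideal capacitor, alpha < 1.0 models dielectric loss.
    All parameter values here are illustrative, not fitted axon values.
    """
    z_cpe = z1 * (1j * omega * tau) ** (-alpha)
    z_membrane = 1.0 / (1.0 / z_cpe + g_leak)
    return r_series + z_membrane

for w in np.logspace(1, 5, 5):                 # angular frequencies, rad/s
    z = clamp_circuit_impedance(w)
    print(f"omega={w:9.1f}  |Z|={abs(z):8.2f}  phase={np.degrees(np.angle(z)):6.1f} deg")
```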

  17. Single flux quantum voltage amplifiers

    NASA Astrophysics Data System (ADS)

    Golomidov, Vladimir; Kaplunenko, Vsevolod; Khabipov, Marat; Koshelets, Valery; Kaplunenko, Olga

    Novel elements of the Rapid Single Flux Quantum (RSFQ) logic family — quasi-digital voltage parallel and series amplifiers (QDVA) — have been computer simulated, designed, and experimentally investigated. The parallel QDVA consists of six stages and provides multiplication of the input voltage by a factor of five. The output resistance of the QDVA is five times larger than the input resistance, so this amplifier appears to be a good matching stage between RSFQ logic and conventional semiconductor electronics. The series QDVA provides a gain factor of four and involves two doublers connected by a transmission line. The proposed parallel QDVA can be integrated on the same chip with a SQUID sensor.

  18. Temporal Decomposition of a Distribution System Quasi-Static Time-Series Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Hunsberger, Randolph J

    This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time series simulation can be reduced roughly proportional to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems - representing a mock complex PV impact study. We are able to reduce induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
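
    The temporal-decomposition idea itself is generic: split the simulated horizon into chunks and run each chunk independently. The sketch below shows the pattern with a Python process pool and a placeholder solver standing in for the OpenDSS model (the chunk count, step count, and returned quantities are assumptions); the independent initialization of each chunk is exactly what produces the boundary errors discussed above.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def run_chunk(bounds):
    """Stand-in for a quasi-static time-series run over time steps [start, stop).

    A real study would drive a distribution-system solver (e.g. the OpenDSS model)
    here; this placeholder just returns a fake per-step voltage trace."""
    start, stop = bounds
    rng = np.random.default_rng(start)              # deterministic per chunk
    return start, 1.0 + 0.01 * rng.standard_normal(stop - start)

def parallel_qsts(n_steps=35_040, n_chunks=8):
    edges = np.linspace(0, n_steps, n_chunks + 1, dtype=int)
    bounds = list(zip(edges[:-1], edges[1:]))
    with ProcessPoolExecutor(max_workers=n_chunks) as pool:
        results = sorted(pool.map(run_chunk, bounds))   # restore chronological order
    return np.concatenate([trace for _, trace in results])

if __name__ == "__main__":
    v = parallel_qsts()
    print(v.shape)   # one value per 15-minute step of a year, stitched from 8 chunks
```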

  19. A new approach for measuring power spectra and reconstructing time series in active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Li, Yan-Rong; Wang, Jian-Min

    2018-05-01

    We provide a new approach to measure power spectra and reconstruct time series in active galactic nuclei (AGNs) based on the fact that the Fourier transform of AGN stochastic variations is a series of complex Gaussian random variables. The approach parametrizes a stochastic series in the frequency domain and transforms it back to the time domain to fit the observed data. The parameters and their uncertainties are derived in a Bayesian framework, which also allows us to compare the relative merits of different power spectral density models. The well-developed fast Fourier transform algorithm together with parallel computation enables an acceptable time complexity for the approach.
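
    The fact the approach builds on can be illustrated in the forward direction: drawing complex Gaussian Fourier coefficients whose variance follows an assumed power spectral density and inverse-transforming them yields a Gaussian stochastic light curve with that spectrum. The sketch below does this for a hypothetical power-law PSD; it is not the Bayesian fitting procedure of the paper.

```python
import numpy as np

def simulate_from_psd(n=4096, dt=1.0, slope=-2.0, seed=0):
    """Draw a Gaussian stochastic series whose power spectrum follows f**slope."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, dt)
    psd = np.zeros_like(freqs)
    psd[1:] = freqs[1:] ** slope                   # assumed power-law PSD
    # Complex Gaussian Fourier coefficients with variance set by the PSD
    coeffs = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    coeffs *= np.sqrt(psd / 2.0)
    coeffs[0] = 0.0                                # zero-mean series
    return np.fft.irfft(coeffs, n=n)

lightcurve = simulate_from_psd()
print(lightcurve.shape, lightcurve.std())
```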

  20. High voltage pulse generator. [Patent application

    DOEpatents

    Fasching, G.E.

    1975-06-12

    An improved high-voltage pulse generator is described which is especially useful in ultrasonic testing of rock core samples. An N number of capacitors are charged in parallel to V volts and at the proper instant are coupled in series to produce a high-voltage pulse of N times V volts. Rapid switching of the capacitors from the paralleled charging configuration to the series discharging configuration is accomplished by using silicon-controlled rectifiers which are chain self-triggered following the initial triggering of the first rectifier connected between the first and second capacitors. A timing and triggering circuit is provided to properly synchronize triggering pulses to the first SCR at a time when the charging voltage is not being applied to the parallel-connected charging capacitors. The output voltage can be readily increased by adding additional charging networks. The circuit allows the peak level of the output to be easily varied over a wide range by using a variable autotransformer in the charging circuit.

  1. Reliability models applicable to space telescope solar array assembly system

    NASA Technical Reports Server (NTRS)

    Patil, S. A.

    1986-01-01

    A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical reliability models applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by treating the failure rates of the components as functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) system. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. Estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
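
    A minimal sketch of the k-out-of-n idea follows, under the assumption (adopted here for concreteness) that a subsystem of n identical components survives while fewer than k of them have failed; the extreme choices of k then recover the series and parallel models mentioned above. The failure rate and mission time are hypothetical, and the exponential form is a simplification of the general time-dependent rates treated in the study.

```python
import numpy as np
from math import comb

def subsystem_reliability(r, n, k):
    """Reliability of a subsystem of n identical components (each with
    reliability r) that survives while fewer than k components have failed.

    k = 1 gives the series model r**n; k = n gives the parallel model 1-(1-r)**n."""
    return sum(comb(n, m) * (1 - r) ** m * r ** (n - m) for m in range(k))

lam, t = 2e-5, 8760.0            # hypothetical failure rate and a one-year mission, hours
r = float(np.exp(-lam * t))      # exponential component reliability
n = 20                           # e.g. 20 identical assemblies
for k in (1, 5, n):
    print(f"n={n}, k={k}: R = {subsystem_reliability(r, n, k):.4f}")
```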

  2. SiGN-SSM: open source parallel software for estimating gene networks with state space models.

    PubMed

    Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru

    2011-04-15

    SiGN-SSM is an open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), which is a statistical dynamic model suitable for analyzing short and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint that is effective in stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public License (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. Pre-compiled binaries for some architectures are available in addition to the source code. The pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information for SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.

  3. Monitoring of seismic time-series with advanced parallel computational tools and complex networks

    NASA Astrophysics Data System (ADS)

    Kechaidou, M.; Sirakoulis, G. Ch.; Scordilis, E. M.

    2012-04-01

    Earthquakes have been a focus of human and research interest for several centuries due to their catastrophic effects on everyday life; they occur almost all over the world and exhibit behaviour that is hard to model and predict. On the other hand, their monitoring with more or less technologically updated instruments has been almost continuous, and thanks to this fact several mathematical models have been presented and proposed to describe possible connections and patterns found in the resulting seismological time-series. In Greece, one of the most seismically active territories on earth, detailed instrumental seismological data are available from the beginning of the past century, providing researchers with valuable knowledge about seismicity levels all over the country. Considering available powerful parallel computational tools, such as Cellular Automata, these data can be further analysed and, most importantly, modelled to reveal possible connections between different parameters of the seismic time-series under study. More specifically, Cellular Automata have proven very effective for composing and modelling nonlinear complex systems, leading to several corresponding models as possible analogues of earthquake fault dynamics. In this work, preliminary results of modelling the seismic time-series with the help of Cellular Automata, so as to compose and develop the corresponding complex networks, are presented. The proposed methodology will be able to reveal hidden relations in the examined time-series and to distinguish their intrinsic characteristics, in an effort to transform the examined time-series into complex networks and graphically represent their evolution in time and space. Consequently, based on the presented results, the proposed model will eventually serve as an efficient and flexible computational tool providing a generic understanding of the possible triggering mechanisms, as derived from adequate monitoring and modelling of the regional earthquake phenomena.

  4. Software Design Challenges in Time Series Prediction Systems Using Parallel Implementation of Artificial Neural Networks.

    PubMed

    Manikandan, Narayanan; Subha, Srinivasan

    2016-01-01

    The software development life cycle has been characterized by destructive disconnects between activities such as planning, analysis, design, and programming. Software developed around prediction-based results is always a big challenge for designers. Time series forecasting of data such as currency exchange rates, stock prices, and weather reports is one of the areas in which extensive research has been going on for the last three decades. Initially, the problems of financial analysis and prediction were solved by statistical models and methods. For the last two decades, a large number of Artificial Neural Network based learning models have been proposed to solve the problems of financial data and obtain accurate predictions of future trends and prices. This paper addresses some architectural design issues for performance improvement by vectorising the strengths of multivariate econometric time series models and Artificial Neural Networks. It provides an adaptive, hybrid methodology for predicting exchange rates. The framework is tested to assess the accuracy and performance of the parallel algorithms used.

  5. Software Design Challenges in Time Series Prediction Systems Using Parallel Implementation of Artificial Neural Networks

    PubMed Central

    Manikandan, Narayanan; Subha, Srinivasan

    2016-01-01

    The software development life cycle has been characterized by destructive disconnects between activities such as planning, analysis, design, and programming. Software developed around prediction-based results is always a big challenge for designers. Time series forecasting of data such as currency exchange rates, stock prices, and weather reports is one of the areas in which extensive research has been going on for the last three decades. Initially, the problems of financial analysis and prediction were solved by statistical models and methods. For the last two decades, a large number of Artificial Neural Network based learning models have been proposed to solve the problems of financial data and obtain accurate predictions of future trends and prices. This paper addresses some architectural design issues for performance improvement by vectorising the strengths of multivariate econometric time series models and Artificial Neural Networks. It provides an adaptive, hybrid methodology for predicting exchange rates. The framework is tested to assess the accuracy and performance of the parallel algorithms used. PMID:26881271

  6. An Analytical Time–Domain Expression for the Net Ripple Produced by Parallel Interleaved Converters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B.; Krein, Philip T.

    We apply modular arithmetic and Fourier series to analyze the superposition of N interleaved triangular waveforms with identical amplitudes and duty ratios. Here, interleaving refers to the condition in which a collection of periodic waveforms with identical periods are each uniformly phase-shifted across one period. The main result is a time-domain expression that provides an exact representation of the summed interleaved triangular waveforms, where the peak amplitude and the parameters of the time-periodic component are all specified in closed form. The analysis is general and can be used to study various applications in multi-converter systems. The model is unique not only in that it reveals a simple and intuitive expression for the net ripple, but also in that its derivation via modular arithmetic and Fourier series is distinct from prior approaches. The analytical framework is experimentally validated with a system of three parallel converters under time-varying operating conditions.
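
    The closed-form expression itself is not reproduced here, but the setting is easy to check numerically: sum N identical triangular ripple waveforms, each shifted by 1/N of a period, and the net ripple shrinks and repeats N times per switching period. The sketch below does this for three converters with an assumed unit amplitude and period.

```python
import numpy as np

def triangle(t, period=1.0, duty=0.5, amplitude=1.0):
    """Zero-mean triangular ripple: rises for duty*T, falls for (1-duty)*T."""
    x = (t / period) % 1.0
    wave = np.where(x < duty, x / duty, (1.0 - x) / (1.0 - duty))
    return amplitude * (wave - 0.5)

N = 3                                              # number of interleaved converters
t = np.linspace(0.0, 2.0, 4000)
net = sum(triangle(t - k / N) for k in range(N))   # uniform phase shifts over one period

print("single-converter peak-to-peak ripple:", np.ptp(triangle(t)))
print(f"net ripple of {N} interleaved converters:", np.ptp(net))
# The summed ripple also repeats N times per switching period.
```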

  7. The revised solar array synthesis computer program

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The Revised Solar Array Synthesis Computer Program is described. It is a general-purpose program which computes solar array output characteristics while accounting for the effects of temperature, incidence angle, charged-particle irradiation, and other degradation effects on various solar array configurations in either circular or elliptical orbits. Array configurations may consist of up to 75 solar cell panels arranged in any series-parallel combination not exceeding three series-connected panels in a parallel string and no more than 25 parallel strings in an array. Up to 100 separate solar array current-voltage characteristics, corresponding to 100 equal-time increments during the sunlight illuminated portion of an orbit or any 100 user-specified combinations of incidence angle and temperature, can be computed and printed out during one complete computer execution. Individual panel incidence angles may be computed and printed out at the user's option.

  8. Optimal resonance configuration for ultrasonic wireless power transmission to millimeter-sized biomedical implants.

    PubMed

    Miao Meng; Kiani, Mehdi

    2016-08-01

    In order to achieve efficient wireless power transmission (WPT) to biomedical implants with millimeter (mm) dimensions, ultrasonic WPT links have recently been proposed. Operating both transmitter (Tx) and receiver (Rx) ultrasonic transducers at their resonance frequency (fr) is key in improving power transmission efficiency (PTE). In this paper, different resonance configurations for Tx and Rx transducers, including series and parallel resonance, have been studied to help the designers of ultrasonic WPT links to choose the optimal resonance configuration for Tx and Rx that maximizes PTE. The geometries for disk-shaped transducers of four different sets of links, operating at series-series, series-parallel, parallel-series, and parallel-parallel resonance configurations in Tx and Rx, have been found through finite-element method (FEM) simulation tools for operation at fr of 1.4 MHz. Our simulation results suggest that operating the Tx transducer with parallel resonance increases PTE, while the resonance configuration of the mm-sized Rx transducer highly depends on the load resistance, Rl. For applications that involve large Rl in the order of tens of kΩ, a parallel resonance for a mm-sized Rx leads to higher PTE, while series resonance is preferred for Rl in the order of several kΩ and below.

  9. Displacement and deformation measurement for large structures by camera network

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Yu, Qifeng; Yang, Zhen; Xu, Zhiqiang; Zhang, Xiaohu

    2014-03-01

    A displacement and deformation measurement method for large structures by a series-parallel connection camera network is presented. By taking the dynamic monitoring of a large-scale crane in lifting operation as an example, a series-parallel connection camera network is designed, and the displacement and deformation measurement method by using this series-parallel connection camera network is studied. The movement range of the crane body is small, and that of the crane arm is large. The displacement of the crane body, the displacement of the crane arm relative to the body and the deformation of the arm are measured. Compared with a pure series or parallel connection camera network, the designed series-parallel connection camera network can be used to measure not only the movement and displacement of a large structure but also the relative movement and deformation of some interesting parts of the large structure by a relatively simple optical measurement system.

  10. Time-series analysis to study the impact of an intersection on dispersion along a street canyon.

    PubMed

    Richmond-Bryant, Jennifer; Eisner, Alfred D; Hahn, Intaek; Fortune, Christopher R; Drake-Richman, Zora E; Brixey, Laurie A; Talih, M; Wiener, Russell W; Ellenson, William D

    2009-12-01

    This paper presents data analysis from the Brooklyn Traffic Real-Time Ambient Pollutant Penetration and Environmental Dispersion (B-TRAPPED) study to assess the transport of ultrafine particulate matter (PM) across urban intersections. Experiments were performed in a street canyon perpendicular to a highway in Brooklyn, NY, USA. Real-time ultrafine PM samplers were positioned on either side of an intersection at multiple locations along a street to collect time-series number concentration data. Meteorology equipment was positioned within the street canyon and at an upstream background site to measure wind speed and direction. Time-series analysis was performed on the PM data to compute a transport velocity along the direction of the street for the cases where background winds were parallel and perpendicular to the street. The data were analyzed for sampler pairs located (1) on opposite sides of the intersection and (2) on the same block. The time-series analysis demonstrated along-street transport, including across the intersection when background winds were parallel to the street canyon and there was minimal transport and no communication across the intersection when background winds were perpendicular to the street canyon. Low but significant values of the cross-correlation function (CCF) underscore the turbulent nature of plume transport along the street canyon. The low correlations suggest that flow switching around corners or traffic-induced turbulence at the intersection may have aided dilution of the PM plume from the highway. This observation supports similar findings in the literature. Furthermore, the time-series analysis methodology applied in this study is introduced as a technique for studying spatiotemporal variation in the urban microscale environment.
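
    The lag-based part of such an analysis can be sketched compactly: the lag that maximizes the cross-correlation between two samplers' concentration series, divided into the sampler separation, gives an along-street transport velocity. The example below uses synthetic data; the sampling interval, separation, and injected delay are hypothetical, not values from the B-TRAPPED study.

```python
import numpy as np

def transport_lag(upwind, downwind, dt):
    """Lag (in seconds) at which the downwind series best matches the upwind one."""
    u = (upwind - upwind.mean()) / upwind.std()
    d = (downwind - downwind.mean()) / downwind.std()
    ccf = np.correlate(d, u, mode="full") / len(u)
    lags = np.arange(-len(u) + 1, len(u))
    return lags[np.argmax(ccf)] * dt, ccf.max()

# Toy data: the downwind sampler sees the upwind plume delayed by 30 samples
rng = np.random.default_rng(3)
dt = 1.0                                     # sampling interval, seconds (assumed)
plume = np.convolve(rng.random(600), np.ones(20) / 20, mode="same")
upwind = plume + 0.05 * rng.standard_normal(600)
downwind = np.roll(plume, 30) + 0.05 * rng.standard_normal(600)

lag, peak = transport_lag(upwind, downwind, dt)
separation = 60.0                            # sampler spacing in metres (assumed)
print(f"lag = {lag:.0f} s, peak CCF = {peak:.2f}, velocity = {separation / lag:.2f} m/s")
```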

  11. Impedance matching for repetitive high voltage all-solid-state Marx generator and excimer DBD UV sources

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Tong, Liqing; Liu, Kefu

    2017-06-01

    The purpose of impedance matching for a Marx generator and DBD lamp is to limit the output current of the Marx generator, provide a large discharge current at ignition, and obtain fast voltage rising/falling edges and large overshoot. In this paper, different impedance matching circuits (series inductor, parallel capacitor, and series inductor combined with parallel capacitor) are analyzed. It demonstrates that a series inductor could limit the Marx current. However, the discharge current is also limited. A parallel capacitor could provide a large discharge current, but the Marx current is also enlarged. A series inductor combined with a parallel capacitor takes full advantage of the inductor and capacitor, and avoids their shortcomings. Therefore, it is a good solution. Experimental results match the theoretical analysis well and show that both the series inductor and parallel capacitor improve the performance of the system. However, the series inductor combined with the parallel capacitor has the best performance. Compared with driving the DBD lamp with a Marx generator directly, an increase of 97.3% in radiant power and an increase of 59.3% in system efficiency are achieved using this matching circuit.

  12. VCSELs for datacom applications

    NASA Astrophysics Data System (ADS)

    Wipiejewski, Torsten; Wolf, Hans-Dieter; Korte, Lutz; Huber, Wolfgang; Kristen, Guenter; Hoyler, Charlotte; Hedrich, Harald; Kleinbub, Oliver; Albrecht, Tony; Mueller, Juergen; Orth, Andreas; Spika, Zeljko; Lutgen, Stephan; Pflaeging, Hartwig; Harrasser, Joerg; Droegemueller, Karsten; Plickert, Volker; Kuhl, Detlef; Blank, Juergen; Pietsch, Doris; Stange, Herwig; Karstensen, Holger

    1999-04-01

    The use of oxide-confined VCSELs in datacom applications is demonstrated. The devices exhibit low threshold currents of approximately 3 mA and a low electrical series resistance of about 50 Ω. The emission wavelength is in the 850 nm range. Lifetimes of the devices are several million hours under normal operating conditions. VCSEL arrays are employed in a high-performance parallel optical link called PAROLI. This optical link provides 12 parallel channels with a total bandwidth exceeding 12 Gbit/s. The VCSELs optimized for the parallel optical link show excellent threshold current uniformity between channels of < 50 μA. The array lifetime drops compared to a single device, but is still larger than 1 million hours.

  13. Status of wraparound contact solar cells and arrays

    NASA Technical Reports Server (NTRS)

    Baraona, C. R.; Young, L. E.

    1978-01-01

    Solar cells with wraparound contacts provide the following advantages in array assembly: (1) they eliminate the need for discretely formed, damage-susceptible series tabs; (2) they eliminate the n-gap problem by allowing the use of uniform covers over the entire cell surface; (3) they allow a higher packing factor by reducing the additional series spacing formerly required for forming and routing the series tab; and (4) they allow the cell bonding to the interconnect system to be a single-side operation wherein series contacts can be made at the same time parallel contacts are made.

  14. A Laboratory Exercise in Physics: Determining Single Capacitances and Series and Parallel Combinations of Capacitance.

    ERIC Educational Resources Information Center

    Schlenker, Richard M.

    This document presents a series of physics experiments which allow students to determine the value of unknown electrical capacitors. The exercises include both parallel and series connected capacitors. (SL)

  15. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to advances in computer technology. However, the recent gains are largely due to the emergence of multi-core high-performance computers, so parallel computing has become key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions along with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.

  16. Measuring Multiple Resistances Using Single-Point Excitation

    NASA Technical Reports Server (NTRS)

    Hall, Dan; Davies, Frank

    2009-01-01

    In a proposed method of determining the resistances of individual DC electrical devices connected in a series or parallel string, no attempt would be made to perform direct measurements on individual devices. Instead, (1) the devices would be instrumented by connecting reactive circuit components in parallel and/or in series with the devices, as appropriate; (2) a pulse or AC voltage excitation would be applied at a single point on the string; and (3) the transient or AC steady-state current response of the string would be measured at that point only. Each reactive component(s) associated with each device would be distinct in order to associate a unique time-dependent response with that device.

  17. The effect of cell design and test criteria on the series/parallel performance of nickel cadmium cells and batteries

    NASA Technical Reports Server (NTRS)

    Halpert, G.; Webb, D. A.

    1983-01-01

    Three batteries were operated in parallel from a common bus during charge and discharge. SMM utilized NASA Standard 20 AH cells and batteries, and LANDSAT-D used NASA 50 AH cells and batteries of a similar design. Each battery consisted of 22 series-connected cells providing the nominal 28 V bus. The three batteries were charged in parallel using the voltage limit/current taper mode wherein the voltage limit was temperature compensated. Discharge occurred on the demand of the spacecraft instruments and electronics. Both flights were planned for three- to five-year missions. The series/parallel configuration of cells and batteries for the 3-5 yr mission required a well-controlled product with built-in reliability and uniformity. Examples are given of how component, cell, and battery selection methods affect the uniformity of the series/parallel operation of the batteries both in testing and in flight.

  18. Parallel photonic information processing at gigabyte per second data rates using transient states

    NASA Astrophysics Data System (ADS)

    Brunner, Daniel; Soriano, Miguel C.; Mirasso, Claudio R.; Fischer, Ingo

    2013-01-01

    The increasing demands on information processing require novel computational concepts and true parallelism. Nevertheless, hardware realizations of unconventional computing approaches never exceeded a marginal existence. While the application of optics in super-computing receives reawakened interest, new concepts, partly neuro-inspired, are being considered and developed. Here we experimentally demonstrate the potential of a simple photonic architecture to process information at unprecedented data rates, implementing a learning-based approach. A semiconductor laser subject to delayed self-feedback and optical data injection is employed to solve computationally hard tasks. We demonstrate simultaneous spoken digit and speaker recognition and chaotic time-series prediction at data rates beyond 1Gbyte/s. We identify all digits with very low classification errors and perform chaotic time-series prediction with 10% error. Our approach bridges the areas of photonic information processing, cognitive and information science.

  19. COMPARISON OF PARALLEL AND SERIES HYBRID POWERTRAINS FOR TRANSIT BUS APPLICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Zhiming; Daw, C Stuart; Smith, David E

    2016-01-01

    The fuel economy and emissions of both conventional and hybrid buses equipped with emissions aftertreatment were evaluated via computational simulation for six representative city bus drive cycles. Both series and parallel configurations for the hybrid case were studied. The simulation results indicate that series hybrid buses have the greatest overall advantage in fuel economy. The series and parallel hybrid buses were predicted to produce similar CO and HC tailpipe emissions but were also predicted to have reduced NOx tailpipe emissions compared to the conventional bus in higher speed cycles. For the New York bus cycle (NYBC), which has the lowest average speed among the cycles evaluated, the series bus tailpipe emissions were somewhat higher than they were for the conventional bus, while the parallel hybrid bus had significantly lower tailpipe emissions. All three bus powertrains were found to require periodic active DPF regeneration to maintain PM control. Plug-in operation of series hybrid buses appears to offer significant fuel economy benefits and is easily employed due to the relatively large battery capacity that is typical of the series hybrid configuration.

  20. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)

  1. A Bayesian nonparametric approach to dynamical noise reduction

    NASA Astrophysics Data System (ADS)

    Kaloudis, Konstantinos; Hatjispyros, Spyridon J.

    2018-06-01

    We propose a Bayesian nonparametric approach for the noise reduction of a given chaotic time series contaminated by dynamical noise, based on Markov Chain Monte Carlo methods. The underlying unknown noise process (possibly) exhibits heavy tailed behavior. We introduce the Dynamic Noise Reduction Replicator model with which we reconstruct the unknown dynamic equations and in parallel we replicate the dynamics under reduced noise level dynamical perturbations. The dynamic noise reduction procedure is demonstrated specifically in the case of polynomial maps. Simulations based on synthetic time series are presented.
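
    The Bayesian reconstruction itself is beyond a short snippet, but the kind of data the model targets is easy to generate: a chaotic polynomial map driven by additive dynamical noise, optionally heavy-tailed. The sketch below produces such a series (the map coefficient, noise scale, and clipping guard are assumptions for illustration).

```python
import numpy as np

def noisy_quadratic_map(n=2000, a=1.85, noise_scale=0.01, heavy_tailed=True, seed=0):
    """x_{t+1} = 1 - a*x_t**2 + e_t with Gaussian or heavy-tailed (Student-t) dynamical noise."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.1
    for t in range(n - 1):
        e = rng.standard_t(3) * noise_scale if heavy_tailed else rng.normal(0, noise_scale)
        x[t + 1] = np.clip(1.0 - a * x[t] ** 2 + e, -1.0, 1.0)   # guard against escape
    return x

series = noisy_quadratic_map()
print(series[:5])
```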

  2. Load balancing for massively-parallel soft-real-time systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hailperin, M.

    1988-09-01

    Global load balancing, if practical, would allow the effective use of massively-parallel ensemble architectures for large soft-real-time problems. The challenge is to replace quick global communication, which is impractical in a massively-parallel system, with statistical techniques. In this vein, the author proposes a novel approach to decentralized load balancing based on statistical time-series analysis. Each site estimates the system-wide average load using information about past loads of individual sites and attempts to equal that average. This estimation process is practical because the soft-real-time systems of interest naturally exhibit loads that are periodic, in a statistical sense akin to seasonality in econometrics. It is shown how this load-characterization technique can be the foundation for a load-balancing system in an architecture employing cut-through routing and an efficient multicast protocol.
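
    A toy Python sketch of the idea described (not the author's algorithm): each site projects forward the possibly stale load histories it knows about using a seasonal-naive estimate, averages them to guess the system-wide load, and compares its own load to that guess. All names and thresholds are invented:

        import numpy as np

        def seasonal_forecast(history, period):
            """Seasonal-naive projection: average the samples from past cycles
            that fall in the same phase as the upcoming time slot."""
            h = np.asarray(history, dtype=float)
            same_phase = h[-period::-period]          # one sample per past cycle
            return same_phase.mean() if same_phase.size else h.mean()

        def estimate_system_average(reports, period):
            """reports: mapping site -> (possibly stale) load history for that site."""
            return float(np.mean([seasonal_forecast(h, period) for h in reports.values()]))

        def decide(local_load, estimated_average, tolerance=0.1):
            if local_load > (1 + tolerance) * estimated_average:
                return "shed work"
            if local_load < (1 - tolerance) * estimated_average:
                return "accept work"
            return "hold"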

  3. Comparison of Parallel and Series Hybrid Power Trains for Transit Bus Applications

    DOE PAGES

    Gao, Zhiming; Daw, C. Stuart; Smith, David E.; ...

    2016-08-01

    The fuel economy and emissions of conventional and hybrid buses equipped with emissions aftertreatment were evaluated via computational simulation for six representative city bus drive cycles. Both series and parallel configurations for the hybrid case were studied. The simulation results indicated that series hybrid buses have the greatest overall advantage in fuel economy. The series and parallel hybrid buses were predicted to produce similar carbon monoxide and hydrocarbon tailpipe emissions but were also predicted to have reduced tailpipe emissions of nitrogen oxides compared with the conventional bus in higher speed cycles. For the New York bus cycle, which has the lowest average speed among the cycles evaluated, the series bus tailpipe emissions were somewhat higher than they were for the conventional bus; the parallel hybrid bus had significantly lower tailpipe emissions. All three bus power trains were found to require periodic active diesel particulate filter regeneration to maintain control of particulate matter. Finally, plug-in operation of series hybrid buses appears to offer significant fuel economy benefits and is easily employed because of the relatively large battery capacity that is typical of the series hybrid configuration.

  4. Comparison of Parallel and Series Hybrid Power Trains for Transit Bus Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Zhiming; Daw, C. Stuart; Smith, David E.

    The fuel economy and emissions of conventional and hybrid buses equipped with emissions aftertreatment were evaluated via computational simulation for six representative city bus drive cycles. Both series and parallel configurations for the hybrid case were studied. The simulation results indicated that series hybrid buses have the greatest overall advantage in fuel economy. The series and parallel hybrid buses were predicted to produce similar carbon monoxide and hydrocarbon tailpipe emissions but were also predicted to have reduced tailpipe emissions of nitrogen oxides compared with the conventional bus in higher speed cycles. For the New York bus cycle, which has the lowest average speed among the cycles evaluated, the series bus tailpipe emissions were somewhat higher than they were for the conventional bus; the parallel hybrid bus had significantly lower tailpipe emissions. All three bus power trains were found to require periodic active diesel particulate filter regeneration to maintain control of particulate matter. Finally, plug-in operation of series hybrid buses appears to offer significant fuel economy benefits and is easily employed because of the relatively large battery capacity that is typical of the series hybrid configuration.

  5. Waning of "conditioned pain modulation": a novel expression of subtle pronociception in migraine.

    PubMed

    Nahman-Averbuch, Hadas; Granovsky, Yelena; Coghill, Robert C; Yarnitsky, David; Sprecher, Elliot; Weissman-Fogel, Irit

    2013-01-01

    To assess the decay of the conditioned pain modulation (CPM) response along repeated applications as a possible expression of subtle pronociception in migraine. One of the most explored mechanisms underlying the pain modulation system is "diffuse noxious inhibitory controls," which is measured psychophysically in the lab by the CPM paradigm. There are contradictory reports on CPM response in migraine, questioning whether migraineurs express pronociceptive pain modulation. Migraineurs (n = 26) and healthy controls (n = 35), all females, underwent 3 stimulation series, consisting of repeated (1) "test-stimulus" (Ts) alone, which was given first, followed by (2) parallel CPM application (CPM-parallel), and (3) sequential CPM application (CPM-sequential), in which the Ts is delivered during or following the conditioning-stimulus, respectively. In all series, the Ts repeated 4 times (0-3). In the CPM series, repetition "0" consisted of the Ts-alone that was followed by 3 repetitions of the Ts with a conditioning-stimulus application. Although there was no difference between migraineurs and controls for the first CPM response in each series, we found waning of CPM-parallel efficiency along the series for migraineurs (P = .005 for third vs first CPM), but not for controls. Further, greater CPM waning in the CPM-sequential series was correlated with less reported extent of pain reduction by episodic medication (r = 0.493, P = .028). Migraineurs have subtle deficits in endogenous pain modulation which require a more challenging test protocol than the commonly used single CPM. Waning of CPM response seems to reveal this pronociceptive state. The clinical relevance of the CPM waning effect is highlighted by its association with clinical parameters of migraine. © 2013 American Headache Society.

  6. Multiphase soft switched DC/DC converter and active control technique for fuel cell ripple current elimination

    DOEpatents

    Lai, Jih-Sheng; Liu, Changrong; Ridenour, Amy

    2009-04-14

    A DC/DC converter has a transformer with primary coils connected to an input side and secondary coils connected to an output side. Each primary coil connects to a full-bridge circuit comprising two switches on each of two legs, the primary coil being connected between the switches on each leg; the full-bridge circuits are connected in parallel, with each leg disposed parallel to one another, and the secondary coils are connected to a rectifying circuit. An outer loop control circuit that reduces ripple in a voltage reference has a first resistor connected in series with a second resistor connected in series with a first capacitor, which together are connected in parallel with a second capacitor. An inner loop control circuit that reduces ripple in a current reference has a third resistor connected in series with a fourth resistor connected in series with a third capacitor, which together are connected in parallel with a fourth capacitor.

  7. Computationally intensive econometrics using a distributed matrix-programming language.

    PubMed

    Doornik, Jurgen A; Hendry, David F; Shephard, Neil

    2002-06-15

    This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.

  8. Evaluation of a new parallel numerical parameter optimization algorithm for a dynamical system

    NASA Astrophysics Data System (ADS)

    Duran, Ahmet; Tuncel, Mehmet

    2016-10-01

    It is important to have a scalable parallel numerical parameter optimization algorithm for a dynamical system used in financial applications where time limitation is crucial. We use Message Passing Interface parallel programming and present such a new parallel algorithm for parameter estimation. For example, we apply the algorithm to the asset flow differential equations that have been developed and analyzed since 1989 (see [3-6] and references contained therein). We achieved speed-up for some time series to run up to 512 cores (see [10]). Unlike [10], we consider more extensive financial market situations, for example, in presence of low volatility, high volatility and stock market price at a discount/premium to its net asset value with varying magnitude, in this work. Moreover, we evaluated the convergence of the model parameter vector, the nonlinear least squares error and maximum improvement factor to quantify the success of the optimization process depending on the number of initial parameter vectors.
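
    A hedged sketch of the coarse-grained strategy described, written with Python multiprocessing instead of MPI and with a stand-in model in place of the asset flow equations; the parameter names and data are invented:

        import numpy as np
        from multiprocessing import Pool
        from scipy.optimize import least_squares

        # Stand-in model: replace with the dynamical system of interest.
        def model(params, t):
            a, b = params
            return a * np.exp(-b * t)

        def residuals(params, t, observed):
            return model(params, t) - observed

        def fit_from(start, t, observed):
            """Run one local nonlinear least-squares fit from a given initial parameter vector."""
            sol = least_squares(residuals, start, args=(t, observed))
            return sol.cost, sol.x

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            t = np.linspace(0.0, 5.0, 200)
            observed = model((2.0, 0.7), t) + 0.05 * rng.standard_normal(t.size)

            starts = [rng.uniform(0.1, 5.0, size=2) for _ in range(64)]
            with Pool() as pool:                      # one fit per initial vector, run in parallel
                results = pool.starmap(fit_from, [(s, t, observed) for s in starts])
            best_cost, best_params = min(results, key=lambda r: r[0])
            print(best_cost, best_params)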

  9. Battery tester

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poljak, M.D.

    1985-08-12

    This abstract discloses an improved battery tester for determining the acceptability of a Lithium Sulfur Dioxide (LiSO2) storage battery at a given temperature and with one or more cells therein. The tester is generally made up of a first-comparison circuit having a series of series-interconnected components, namely a comparator, first and second flip-flops, and an AND gate. A first resistor is parallel connected to the first-comparison circuit. A second comparison circuit is also parallel connected to the first-comparison circuit and is generally made up of series-interconnected components, namely a second resistor, a capacitor, a buffer, and a second comparator. A first switch is connected to the first resistor and a second switch is parallel connected to the second-comparison circuit between the capacitor and the buffer. A logic control arrangement controls the operation of both switches, both comparators, and both flip-flops for testing a battery as to its start-up voltage and performance voltage characteristics, all in a relatively short time period. In another embodiment of the tester, it is provided with an analog-to-digital converter, a memory, and a sensor arrangement for enhancing the versatility and reliability of the tester in determining the acceptability of a LiSO2 battery.

  10. Six Years of Parallel Computing at NAS (1987 - 1993): What Have we Learned?

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In the fall of 1987 the age of parallelism at NAS began with the installation of a 32K processor CM-2 from Thinking Machines. In 1987 this was described as an "experiment" in parallel processing. In the six years since, NAS acquired a series of parallel machines, and conducted an active research and development effort focused on the use of highly parallel machines for applications in the computational aerosciences. In this time period parallel processing for scientific applications evolved from a fringe research topic into one of the main activities at NAS. In this presentation I will review the history of parallel computing at NAS in the context of the major progress that has been made in the field in general. I will attempt to summarize the lessons we have learned so far, and the contributions NAS has made to the state of the art. Based on these insights I will comment on the current state of parallel computing (including the HPCC effort) and try to predict some trends for the next six years.

  11. Increased performance in the short-term water demand forecasting through the use of a parallel adaptive weighting strategy

    NASA Astrophysics Data System (ADS)

    Sardinha-Lourenço, A.; Andrade-Campos, A.; Antunes, A.; Oliveira, M. S.

    2018-03-01

    Recent research on water demand short-term forecasting has shown that models using univariate time series based on historical data are useful and can be combined with other prediction methods to reduce errors. The behavior of water demands in drinking water distribution networks focuses on their repetitive nature and, under meteorological conditions and similar consumers, allows the development of a heuristic forecast model that, in turn, combined with other autoregressive models, can provide reliable forecasts. In this study, a parallel adaptive weighting strategy of water consumption forecast for the next 24-48 h, using univariate time series of potable water consumption, is proposed. Two Portuguese potable water distribution networks are used as case studies where the only input data are the consumption of water and the national calendar. For the development of the strategy, the Autoregressive Integrated Moving Average (ARIMA) method and a short-term forecast heuristic algorithm are used. Simulations with the model showed that, when using a parallel adaptive weighting strategy, the prediction error can be reduced by 15.96% and the average error by 9.20%. This reduction is important in the control and management of water supply systems. The proposed methodology can be extended to other forecast methods, especially when it comes to the availability of multiple forecast models.
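
    As a rough illustration of the combination step (not necessarily the authors' exact weighting rule), a Python sketch that weights the two parallel forecasters by their inverse recent errors; all numbers are invented:

        import numpy as np

        def adaptive_weighted_forecast(f_arima, f_heuristic, recent_errors, eps=1e-9):
            """Combine two parallel forecasts with weights inversely proportional to
            each model's recent mean absolute error.
            f_arima, f_heuristic: forecast arrays for the next horizon (e.g. 24-48 h).
            recent_errors: dict of arrays of recent absolute errors per model."""
            mae = {k: np.mean(np.abs(v)) + eps for k, v in recent_errors.items()}
            w_arima = (1.0 / mae["arima"]) / (1.0 / mae["arima"] + 1.0 / mae["heuristic"])
            w_heur = 1.0 - w_arima
            return w_arima * np.asarray(f_arima) + w_heur * np.asarray(f_heuristic)

        # Example: the heuristic model did better recently, so it receives the larger weight.
        combined = adaptive_weighted_forecast(
            f_arima=[105.0, 98.0], f_heuristic=[100.0, 95.0],
            recent_errors={"arima": [8.0, 6.0], "heuristic": [3.0, 4.0]},
        )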

  12. Analysis of series resonant converter with series-parallel connection

    NASA Astrophysics Data System (ADS)

    Lin, Bor-Ren; Huang, Chien-Lan

    2011-02-01

    In this study, a parallel inductor-inductor-capacitor (LLC) resonant converter series-connected on the primary side and parallel-connected on the secondary side is presented for server power supply systems. Based on series resonant behaviour, the power metal-oxide-semiconductor field-effect transistors are turned on at zero voltage switching and the rectifier diodes are turned off at zero current switching. Thus, the switching losses on the power semiconductors are reduced. In the proposed converter, the primary windings of the two LLC converters are connected in series. Thus, the two converters have the same primary currents to ensure that they can supply the balance load current. On the output side, two LLC converters are connected in parallel to share the load current and to reduce the current stress on the secondary windings and the rectifier diodes. In this article, the principle of operation, steady-state analysis and design considerations of the proposed converter are provided and discussed. Experiments with a laboratory prototype with a 24 V/21 A output for server power supply were performed to verify the effectiveness of the proposed converter.

  13. Parallel synthesis of a series of potentially brain penetrant aminoalkyl benzoimidazoles.

    PubMed

    Micco, Iolanda; Nencini, Arianna; Quinn, Joanna; Bothmann, Hendrick; Ghiron, Chiara; Padova, Alessandro; Papini, Silvia

    2008-03-01

    Alpha7 agonists were identified via GOLD (CCDC) docking in the putative agonist binding site of an alpha7 homology model and a series of aminoalkyl benzoimidazoles was synthesised to obtain potentially brain penetrant drugs. The array was prepared starting from the reaction of ortho-fluoronitrobenzenes with a selection of diamines, followed by reduction of the nitro group to obtain a series of monoalkylated phenylene diamines. N,N'-Carbonyldiimidazole (CDI) mediated acylation, followed by a parallel automated work-up procedure, afforded the monoacylated phenylenediamines which were cyclised under acidic conditions. Parallel work-up and purification afforded the array products in good yields and purities with a robust parallel methodology which will be useful for other libraries. Screening for alpha7 activity revealed compounds with agonist activity for the receptor.

  14. TOMS and SBUV Data: Comparison to 3D Chemical-Transport Model Results

    NASA Technical Reports Server (NTRS)

    Stolarski, Richard S.; Douglass, Anne R.; Steenrod, Steve; Frith, Stacey

    2003-01-01

    We have updated our merged ozone data (MOD) set using the TOMS data from the new version 8 algorithm. We then analyzed these data for contributions from solar cycle, volcanoes, QBO, and halogens using a standard statistical time series model. We have recently completed a hindcast run of our 3D chemical-transport model for the same years. This model uses off-line winds from the finite-volume GCM, a full stratospheric photochemistry package, and time-varying forcing due to halogens, solar uv, and volcanic aerosols. We will report on a parallel analysis of these model results using the same statistical time series technique as used for the MOD data.
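
    A minimal sketch of the kind of statistical time series model mentioned, fitting ozone anomalies to standard proxy terms by ordinary least squares; the proxy names are placeholders and this is not the exact MOD analysis:

        import numpy as np

        def fit_proxy_regression(ozone, trend, solar, qbo, aerosol):
            """Ordinary least-squares fit of monthly ozone anomalies to standard proxies:
            linear trend (or halogen loading), solar flux, QBO, and volcanic aerosol.
            Returns the coefficients and the residual series."""
            X = np.column_stack([np.ones_like(ozone, dtype=float), trend, solar, qbo, aerosol])
            coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
            residuals = ozone - X @ coef
            return coef, residuals

        # The same design matrix can be applied unchanged to model output,
        # giving the "parallel analysis" of observations and CTM results.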

  15. Electronics Book II.

    ERIC Educational Resources Information Center

    Johnson, Dennis; And Others

    This manual, the second of three curriculum guides for an electronics course, is intended for use in a program combining vocational English as a second language (VESL) with bilingual vocational education. Ten units cover the electrical team, Ohm's law, Watt's law, series resistive circuits, parallel resistive circuits, series parallel circuits,…

  16. Domestic wastewater treatment and power generation in continuous flow air-cathode stacked microbial fuel cell: Effect of series and parallel configuration.

    PubMed

    Estrada-Arriaga, Edson Baltazar; Hernández-Romano, Jesús; García-Sánchez, Liliana; Guillén Garcés, Rosa Angélica; Bahena-Bahena, Erick Obed; Guadarrama-Pérez, Oscar; Moeller Chavez, Gabriela Eleonora

    2018-05-15

    In this study, a continuous flow stack consisting of 40 individual air-cathode MFC units was used to determine the performance of a stacked MFC during domestic wastewater treatment, operated with unconnected individual MFCs and in series and parallel configurations. The voltages obtained from individual MFC units were 0.08-1.1 V at open circuit, while in series connection, the maximum power and current density were 2500 mW/m² and 500 mA/m² (4.9 V), respectively. In parallel connection, the maximum power and current density were 5.8 mW/m² and 24 mA/m², respectively. When the MFC units were not connected to each other, the main bacterial species found in the anode biofilms were Bacillus and Lysinibacillus. After switching from unconnected to series and parallel connections, the most abundant species in the stacked MFC were Pseudomonas aeruginosa, followed by different Bacilli classes. This study demonstrated that when the stacked MFC was switched from unconnected to series and parallel connections, the pollutant removal, electricity generation performance and microbial community changed significantly. Voltage drops were observed in the stacked MFC, which was mainly limited by the cathodes. These voltage losses indicated high resistances within the stacked MFC, generating a parasitic cross current. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Induction heating using induction coils in series-parallel circuits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsen, Marc Rollo; Geren, William Preston; Miller, Robert James

    A part is inductively heated by multiple, self-regulating induction coil circuits having susceptors, coupled together in parallel and in series with an AC power supply. Each of the circuits includes a tuning capacitor that tunes the circuit to resonate at the frequency of the AC power supply.

  18. Stochastic nonlinear time series forecasting using time-delay reservoir computers: performance and universality.

    PubMed

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2014-07-01

    Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performance in the processing of empirical data. We study a particular kind of reservoir computers called time-delay reservoirs that are constructed out of the sampling of the solution of a time-delay differential equation and show their good performance in the forecasting of the conditional covariances associated with multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type as well as in the prediction of factual daily market realized volatilities computed with intraday quotes, using as training input daily log-return series of moderate size. We tackle some problems associated with the lack of task-universality for individually operating reservoirs and propose a solution based on the use of parallel arrays of time-delay reservoirs. Copyright © 2014 Elsevier Ltd. All rights reserved.
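
    The parallel-array idea can be illustrated with a conventional echo-state sketch in Python (the paper's reservoirs are time-delay systems, so this is only an analogy): several independent random reservoirs are driven by the same input and a single ridge-regression readout is trained on their concatenated states. All sizes and scalings are invented:

        import numpy as np

        def run_reservoir(u, n_nodes=100, spectral_radius=0.9, leak=1.0, seed=0):
            """Drive one random echo-state reservoir with the scalar input series u
            and return the matrix of reservoir states (len(u) x n_nodes)."""
            r = np.random.default_rng(seed)
            W_in = r.uniform(-0.5, 0.5, size=n_nodes)
            W = r.uniform(-0.5, 0.5, size=(n_nodes, n_nodes))
            W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
            x = np.zeros(n_nodes)
            states = np.empty((len(u), n_nodes))
            for t, ut in enumerate(u):
                x = (1 - leak) * x + leak * np.tanh(W_in * ut + W @ x)
                states[t] = x
            return states

        def train_parallel_readout(u, target, n_reservoirs=4, ridge=1e-6):
            """Concatenate the states of several independent reservoirs and fit one
            linear readout by ridge regression; returns weights and fitted values."""
            S = np.hstack([run_reservoir(u, seed=k) for k in range(n_reservoirs)])
            W_out = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ np.asarray(target))
            return W_out, S @ W_out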

  19. Parallel implementation of an adaptive and parameter-free N-body integrator

    NASA Astrophysics Data System (ADS)

    Pruett, C. David; Ingham, William H.; Herman, Ralph D.

    2011-05-01

    Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies.

    Program summary
    Program title: PNB.f90
    Catalogue identifier: AEIK_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3052
    No. of bytes in distributed program, including test data, etc.: 68 600
    Distribution format: tar.gz
    Programming language: Fortran 90 and OpenMPI
    Computer: All shared or distributed memory parallel processors
    Operating system: Unix/Linux
    Has the code been vectorized or parallelized?: The code has been parallelized but has not been explicitly vectorized.
    RAM: Dependent upon N
    Classification: 4.3, 4.12, 6.5
    Nature of problem: High accuracy numerical evaluation of trajectories of N point masses each subject to Newtonian gravitation.
    Solution method: Parallel and adaptive extrapolation in time via power series of arbitrary degree.
    Running time: 5.1 s for the demo program supplied with the package.

  20. Parallel optimization of signal detection in active magnetospheric signal injection experiments

    NASA Astrophysics Data System (ADS)

    Gowanlock, Michael; Li, Justin D.; Rude, Cody M.; Pankratius, Victor

    2018-05-01

    Signal detection and extraction requires substantial manual parameter tuning at different stages in the processing pipeline. Time-series data depends on domain-specific signal properties, necessitating unique parameter selection for a given problem. The large potential search space makes this parameter selection process time-consuming and subject to variability. We introduce a technique to search and prune such parameter search spaces in parallel and select parameters for time series filters using breadth- and depth-first search strategies to increase the likelihood of detecting signals of interest in the field of magnetospheric physics. We focus on studying geomagnetic activity in the extremely and very low frequency ranges (ELF/VLF) using ELF/VLF transmissions from Siple Station, Antarctica, received at Québec, Canada. Our technique successfully detects amplified transmissions and achieves substantial speedup performance gains as compared to an exhaustive parameter search. We present examples where our algorithmic approach reduces the search from hundreds of seconds down to less than 1 s, with a ranked signal detection in the top 99th percentile, thus making it valuable for real-time monitoring. We also present empirical performance models quantifying the trade-off between the quality of signal recovered and the algorithm response time required for signal extraction. In the future, improved signal extraction in scenarios like the Siple experiment will enable better real-time diagnostics of conditions of the Earth's magnetosphere for monitoring space weather activity.
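
    A toy Python sketch of the broad idea (not the authors' pipeline): evaluate a grid of filter parameters in parallel, rank the results, and keep the best few as seeds for a finer search; the scoring function and parameter names are placeholders:

        from concurrent.futures import ProcessPoolExecutor
        from itertools import product

        def score(params, series):
            """Placeholder detection score for one filter configuration:
            higher means the candidate signal stands out more."""
            window, threshold = params
            smoothed = [sum(series[i:i + window]) / window
                        for i in range(len(series) - window + 1)]
            return sum(1 for v in smoothed if v > threshold)

        def parallel_search(series, windows, thresholds, keep_top=10):
            """Evaluate all (window, threshold) pairs in parallel and keep the best few;
            a breadth-first pass like this can seed a finer depth-first refinement."""
            grid = list(product(windows, thresholds))
            with ProcessPoolExecutor() as pool:
                scores = list(pool.map(score, grid, [series] * len(grid)))
            ranked = sorted(zip(scores, grid), reverse=True)
            return ranked[:keep_top]

        if __name__ == "__main__":
            data = [0.0] * 200
            data[90:110] = [1.0] * 20                  # a buried "transmission"
            best = parallel_search(data, windows=[5, 10, 20], thresholds=[0.2, 0.5, 0.8])
            print(best[0])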

  1. Mountain Plains Learning Experience Guide: Radio and T.V. Repair. Course: A.C. Circuits.

    ERIC Educational Resources Information Center

    Hoggatt, P.; And Others

    One of four individualized courses included in a radio and television repair curriculum, this course focuses on alternating current relationships and computations, transformers, power supplies, series and parallel resistive-reactive circuits, and series and parallel resonance. The course is comprised of eight units: (1) Introduction to Alternating…

  2. Using machine learning to identify structural breaks in single-group interrupted time series designs.

    PubMed

    Linden, Ariel; Yarnold, Paul R

    2016-12-01

    Single-group interrupted time series analysis (ITSA) is a popular evaluation methodology in which a single unit of observation is being studied, the outcome variable is serially ordered as a time series and the intervention is expected to 'interrupt' the level and/or trend of the time series, subsequent to its introduction. Given that the internal validity of the design rests on the premise that the interruption in the time series is associated with the introduction of the treatment, treatment effects may seem less plausible if a parallel trend already exists in the time series prior to the actual intervention. Thus, sensitivity analyses should focus on detecting structural breaks in the time series before the intervention. In this paper, we introduce a machine-learning algorithm called optimal discriminant analysis (ODA) as an approach to determine if structural breaks can be identified in years prior to the initiation of the intervention, using data from California's 1988 voter-initiated Proposition 99 to reduce smoking rates. The ODA analysis indicates that numerous structural breaks occurred prior to the actual initiation of Proposition 99 in 1989, including perfect structural breaks in 1983 and 1985, thereby casting doubt on the validity of treatment effects estimated for the actual intervention when using a single-group ITSA design. Given the widespread use of ITSA for evaluating observational data and the increasing use of machine-learning techniques in traditional research, we recommend that structural break sensitivity analysis is routinely incorporated in all research using the single-group ITSA design. © 2016 John Wiley & Sons, Ltd.
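
    For orientation only (ODA itself is not reproduced here), a Python sketch of a single-group ITSA-style segmented regression and a simple scan of candidate pre-intervention break years, ranked by how much squared error each break removes:

        import numpy as np

        def itsa_design(t, break_t):
            """Design matrix for segmented regression: intercept, trend,
            post-break level change, and post-break slope change. t is a numpy array."""
            post = (t >= break_t).astype(float)
            return np.column_stack([np.ones_like(t, dtype=float), t, post, post * (t - break_t)])

        def sse_with_break(y, t, break_t):
            X = itsa_design(t, break_t)
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return float(resid @ resid)

        def scan_pre_intervention_breaks(y, t, intervention_t):
            """Fit a break at every candidate year before the intervention and report
            how much each reduces the error relative to a no-break trend fit."""
            base = sse_with_break(y, t, t.max() + 1)          # break after the data = no break
            return {float(bt): base - sse_with_break(y, t, bt)
                    for bt in t[(t > t.min()) & (t < intervention_t)]}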

  3. GPU-accelerated algorithms for many-particle continuous-time quantum walks

    NASA Astrophysics Data System (ADS)

    Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo

    2017-06-01

    Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor series expansion of the evolution operator, and compare its performance with those of algorithms based on the exact diagonalization of the Hamiltonian or a 4th order Runge-Kutta integration. We prove that both Taylor-series expansion and Runge-Kutta algorithms are reliable and have a low computational cost, the Taylor-series expansion showing the additional advantage of a memory allocation not depending on the precision of calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. In turn, we have benchmarked 4 NVIDIA GPUs and 3 quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing, with respect to the OPENMP parallelization, lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about the execution time, and make simulations with many interacting particles on large lattices possible, with the only limit of the memory available on the device.
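
    A serial CPU sketch, in Python, of the Taylor-series propagation that the paper accelerates on GPUs: the state is advanced with a truncated expansion of exp(-iH dt). The lattice, step size, and truncation order are arbitrary choices for the example:

        import numpy as np

        def ctqw_taylor_step(psi, H, dt, order=10):
            """One step of a continuous-time quantum walk, propagating the state with
            a truncated Taylor expansion of exp(-i*H*dt)."""
            term = psi.astype(complex)
            out = term.copy()
            for k in range(1, order + 1):
                term = (-1j * dt / k) * (H @ term)   # accumulates (-i H dt)^k / k! |psi>
                out += term
            return out

        # Example: a single walker on a 5-site line graph (adjacency matrix as Hamiltonian).
        N = 5
        H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
        psi = np.zeros(N, dtype=complex)
        psi[N // 2] = 1.0
        for _ in range(100):
            psi = ctqw_taylor_step(psi, H, dt=0.05)
        print(np.abs(psi) ** 2)     # site occupation probabilities (sum stays close to 1)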

  4. Optical signal processing using photonic reservoir computing

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Dehyadegari, Louiza

    2014-10-01

    As a new approach to recognition and classification problems, photonic reservoir computing has such advantages as parallel information processing, power efficiency, and high speed. In this paper, a photonic structure has been proposed for reservoir computing, which is investigated using a simple, yet non-partial, noisy time series prediction task. This study includes the application of a suitable topology with self-feedbacks in a network of SOA's - which lends the system a strong memory - and leads to adjusting adequate parameters resulting in perfect recognition accuracy (100%) for noise-free time series, which shows a 3% improvement over previous results. For the classification of noisy time series, the rate of accuracy showed a 4% increase and amounted to 96%. Furthermore, an analytical approach was suggested to solve the rate equations, which led to a substantial decrease in the simulation time, an important parameter in the classification of large signals such as speech recognition, and better results came up compared with previous works.

  5. Data assimilation using a GPU accelerated path integral Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Quinn, John C.; Abarbanel, Henry D. I.

    2011-09-01

    The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the application of the method to an example with a transmembrane voltage time series of a simulated neuron as an input, and using a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300, compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases.

  6. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.

    PubMed

    Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.

  7. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting

    PubMed Central

    Alomar, Miquel L.; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L.

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting. PMID:26880876

  8. Tracking Connections: An Exercise about Series and Parallel Resistances

    ERIC Educational Resources Information Center

    Jankovic, Srdjan

    2010-01-01

    Unlike many other topics in basic physics, series and parallel resistances are rarely noticed in the real life of an ordinary individual, making it difficult to design a laboratory activity that can simulate something familiar. The activities described here entail minimal costs and are based on a puzzle-like game of tracking wire connections. A…

  9. Operating characteristics of superconducting fault current limiter using 24kV vacuum interrupter driven by electromagnetic repulsion switch

    NASA Astrophysics Data System (ADS)

    Endo, M.; Hori, T.; Koyama, K.; Yamaguchi, I.; Arai, K.; Kaiho, K.; Yanabu, S.

    2008-02-01

    Using a high temperature superconductor, we constructed and tested a model Superconducting Fault Current Limiter (SFCL) which has a vacuum interrupter with an electromagnetic repulsion mechanism. We set out to construct a high voltage class SFCL and produced an electromagnetic repulsion switch equipped with a 24 kV vacuum interrupter (VI). A problem is that the opening speed becomes slower, because a larger vacuum interrupter has a heavier contact. For this reason, the current which flows in the superconductor may not be interrupted within a half cycle of the current. In order to solve this problem, it is necessary to change the design of the coil connected in parallel and to strengthen the electromagnetic repulsion force at the time of opening the vacuum interrupter. The design of the coil was therefore changed, and a current limiting test was conducted to examine whether the problem is solved. We examined the current limiting test using 4 series- and 2 parallel-connected YBCO thin films. We used 12-centimeter-long YBCO thin films. A parallel resistance (0.1 Ω) is connected with each YBCO thin film. As a result, we succeeded in interrupting the current of the superconductor within a half cycle. Furthermore, the series- and parallel-connected YBCO thin films could limit the current without failure.

  10. Special purpose parallel computer architecture for real-time control and simulation in robotic applications

    NASA Technical Reports Server (NTRS)

    Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)

    1993-01-01

    This is a real-time robotic controller and simulator which is a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each SIMD processor being a SIMD parallel processor capable of exploiting fine grain parallelism and further being able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.

  11. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study.

    PubMed

    Klingner, Carsten M; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI.

  12. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study

    PubMed Central

    Klingner, Carsten M.; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W.

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI. PMID:28066197

  13. The role of group index engineering in series-connected photonic crystal microcavities for high density sensor microarrays

    PubMed Central

    Zou, Yi; Chakravarty, Swapnajit; Zhu, Liang; Chen, Ray T.

    2014-01-01

    We experimentally demonstrate an efficient and robust method for series connection of photonic crystal microcavities that are coupled to photonic crystal waveguides in the slow light transmission regime. We demonstrate that group index taper engineering provides excellent optical impedance matching between the input and output strip waveguides and the photonic crystal waveguide, a nearly flat transmission over the entire guided mode spectrum and clear multi-resonance peaks corresponding to individual microcavities that are connected in series. Series connected photonic crystal microcavities are further multiplexed in parallel using cascaded multimode interference power splitters to generate a high density silicon nanophotonic microarray comprising 64 photonic crystal microcavity sensors, all of which are interrogated simultaneously at the same instant of time. PMID:25316921

  14. Sentinel-1 data massive processing for large scale DInSAR analyses within Cloud Computing environments through the P-SBAS approach

    NASA Astrophysics Data System (ADS)

    Lanari, Riccardo; Bonano, Manuela; Buonanno, Sabatino; Casu, Francesco; De Luca, Claudio; Fusco, Adele; Manunta, Michele; Manzo, Mariarosaria; Pepe, Antonio; Zinno, Ivana

    2017-04-01

    The SENTINEL-1 (S1) mission is designed to provide operational capability for continuous mapping of the Earth thanks to its two polar-orbiting satellites (SENTINEL-1A and B) performing C-band synthetic aperture radar (SAR) imaging. It is, indeed, characterized by enhanced revisit frequency, coverage and reliability for operational services and applications requiring long SAR data time series. Moreover, SENTINEL-1 is specifically oriented to interferometry applications with stringent requirements based on attitude and orbit accuracy and it is intrinsically characterized by small spatial and temporal baselines. Consequently, SENTINEL-1 data are particularly suitable to be exploited through advanced interferometric techniques such as the well-known DInSAR algorithm referred to as Small BAseline Subset (SBAS), which allows the generation of deformation time series and displacement velocity maps. In this work we present an advanced interferometric processing chain, based on the Parallel SBAS (P-SBAS) approach, for the massive processing of S1 Interferometric Wide Swath (IWS) data aimed at generating deformation time series in efficient, automatic and systematic way. Such a DInSAR chain is designed to exploit distributed computing infrastructures, and more specifically Cloud Computing environments, to properly deal with the storage and the processing of huge S1 datasets. In particular, since S1 IWS data are acquired with the innovative Terrain Observation with Progressive Scans (TOPS) mode, we could benefit from the structure of S1 data, which are composed by bursts that can be considered as separate acquisitions. Indeed, the processing is intrinsically parallelizable with respect to such independent input data and therefore we basically exploited this coarse granularity parallelization strategy in the majority of the steps of the SBAS processing chain. Moreover, we also implemented more sophisticated parallelization approaches, exploiting both multi-node and multi-core programming techniques. Currently, Cloud Computing environments make available large collections of computing resources and storage that can be effectively exploited through the presented S1 P-SBAS processing chain to carry out interferometric analyses at a very large scale, in reduced time. This allows us to deal also with the problems connected to the use of S1 P-SBAS chain in operational contexts, related to hazard monitoring and risk prevention and mitigation, where handling large amounts of data represents a challenging task. As a significant experimental result we performed a large spatial scale SBAS analysis relevant to the Central and Southern Italy by exploiting the Amazon Web Services Cloud Computing platform. In particular, we processed in parallel 300 S1 acquisitions covering the Italian peninsula from Lazio to Sicily through the presented S1 P-SBAS processing chain, generating 710 interferograms, thus finally obtaining the displacement time series of the whole processed area. This work has been partially supported by the CNR-DPC agreement, the H2020 EPOS-IP project (GA 676564) and the ESA GEP project.

  15. Night-time lights: A global, long term look at links to socio-economic trends

    PubMed Central

    Zavala-Araiza, Daniel; Wagner, Gernot

    2017-01-01

    We use a parallelized spatial analytics platform to process the twenty-one year totality of the longest-running time series of night-time lights data—the Defense Meteorological Satellite Program (DMSP) dataset—surpassing the narrower scope of prior studies to assess changes in area lit of countries globally. Doing so allows a retrospective look at the global, long-term relationships between night-time lights and a series of socio-economic indicators. We find the strongest correlations with electricity consumption, CO2 emissions, and GDP, followed by population, CH4 emissions, N2O emissions, poverty (inverse) and F-gas emissions. Relating area lit to electricity consumption shows that while a basic linear model provides a good statistical fit, regional and temporal trends are found to have a significant impact. PMID:28346500

  16. InSAR Deformation Time Series Processed On-Demand in the Cloud

    NASA Astrophysics Data System (ADS)

    Horn, W. B.; Weeden, R.; Dimarchi, H.; Arko, S. A.; Hogenson, K.

    2017-12-01

    During this past year, ASF has developed a cloud-based on-demand processing system known as HyP3 (http://hyp3.asf.alaska.edu/), the Hybrid Pluggable Processing Pipeline, for Synthetic Aperture Radar (SAR) data. The system makes it easy for a user who doesn't have the time or inclination to install and use complex SAR processing software to leverage SAR data in their research or operations. One such processing algorithm is generation of a deformation time series product, which is a series of images representing ground displacements over time, which can be computed using a time series of interferometric SAR (InSAR) products. The set of software tools necessary to generate this useful product is difficult to install, configure, and use. Moreover, for a long time series with many images, the processing of just the interferograms can take days. Principally built by three undergraduate students at the ASF DAAC, the deformation time series processing relies on the new Amazon Batch service, which enables processing of jobs with complex interconnected dependencies in a straightforward and efficient manner. In the case of generating a deformation time series product from a stack of single-look complex SAR images, the system uses Batch to serialize the up-front processing, interferogram generation, optional tropospheric correction, and deformation time series generation. The most time consuming portion is the interferogram generation, because even for a fairly small stack of images many interferograms need to be processed. By using AWS Batch, the interferograms are all generated in parallel; the entire process completes in hours rather than days. Additionally, the individual interferograms are saved in Amazon's cloud storage, so that when new data is acquired in the stack, an updated time series product can be generated with minimal additional processing. This presentation will focus on the development techniques and enabling technologies that were used in developing the time series processing in the ASF HyP3 system. Data and process flow from job submission through to order completion will be shown, highlighting the benefits of the cloud for each step.

  17. Using the Statecharts paradigm for simulation of patient flow in surgical care.

    PubMed

    Sobolev, Boris; Harel, David; Vasilakis, Christos; Levy, Adrian

    2008-03-01

    Computer simulation of patient flow has been used extensively to assess the impacts of changes in the management of surgical care. However, little research is available on the utility of existing modeling techniques. The purpose of this paper is to examine the capacity of Statecharts, a system of graphical specification, for constructing a discrete-event simulation model of the perioperative process. The Statecharts specification paradigm was originally developed for representing reactive systems by extending the formalism of finite-state machines through notions of hierarchy, parallelism, and event broadcasting. Hierarchy permits subordination between states so that one state may contain other states. Parallelism permits more than one state to be active at any given time. Broadcasting of events allows one state to detect changes in another state. In the context of the peri-operative process, hierarchy provides the means to describe steps within activities and to cluster related activities, parallelism provides the means to specify concurrent activities, and event broadcasting provides the means to trigger a series of actions in one activity according to transitions that occur in another activity. Combined with hierarchy and parallelism, event broadcasting offers a convenient way to describe the interaction of concurrent activities. We applied the Statecharts formalism to describe the progress of individual patients through surgical care as a series of asynchronous updates in patient records generated in reaction to events produced by parallel finite-state machines representing concurrent clinical and managerial activities. We conclude that Statecharts capture successfully the behavioral aspects of surgical care delivery by specifying permissible chronology of events, conditions, and actions.

  18. Electricity generation and microbial community in response to short-term changes in stack connection of self-stacked submersible microbial fuel cell powered by glycerol.

    PubMed

    Zhao, Nannan; Angelidaki, Irini; Zhang, Yifeng

    2017-02-01

    Stack connection (i.e., in series or parallel) of microbial fuel cells (MFCs) is an efficient way to boost the power output for practical application. However, there is little information available on short-term changes in stack connection and their effect on electricity generation and the microbial community. In this study, a self-stacked submersible microbial fuel cell (SSMFC) powered by glycerol was tested to elucidate this important issue. In series connection, the maximum voltage output reached 1.15 V, while the maximum current density was 5.73 mA in parallel. In both connections, the maximum power density increased with the initial glycerol concentration. However, the glycerol degradation was even faster in parallel connection. When the SSMFC was shifted from series to parallel connection, the reactor reached a stable power output without any lag phase. Meanwhile, the anodic microbial community compositions were nearly stable. Comparatively, after changing parallel to series connection, there was a lag period for the system to get stable again and the microbial community compositions became greatly different. This study is the first attempt to elucidate the influence of short-term changes in connection on the performance of an MFC stack, and could provide insight into the practical utilization of MFCs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Combining Different Conceptual Change Methods within Four-Step Constructivist Teaching Model: A Sample Teaching of Series and Parallel Circuits

    ERIC Educational Resources Information Center

    Ipek, Hava; Calik, Muammer

    2008-01-01

    Based on students' alternative conceptions of the topics "electric circuits", "electric charge flows within an electric circuit", "how the brightness of bulbs and the resistance changes in series and parallel circuits", the current study aims to present a combination of different conceptual change methods within a four-step constructivist teaching…

  20. Including trait-based early warning signals helps predict population collapse

    PubMed Central

    Clements, Christopher F.; Ozgul, Arpat

    2016-01-01

    Foreseeing population collapse is an on-going target in ecology, and this has led to the development of early warning signals based on expected changes in leading indicators before a bifurcation. Such signals have been sought for in abundance time-series data on a population of interest, with varying degrees of success. Here we move beyond these established methods by including parallel time-series data of abundance and fitness-related trait dynamics. Using data from a microcosm experiment, we show that including information on the dynamics of phenotypic traits such as body size into composite early warning indices can produce more accurate inferences of whether a population is approaching a critical transition than using abundance time-series alone. By including fitness-related trait information alongside traditional abundance-based early warning signals in a single metric of risk, our generalizable approach provides a powerful new way to assess what populations may be on the verge of collapse. PMID:27009968
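
    A generic Python illustration of the composite-indicator idea (not the authors' exact metric): rolling autocorrelation and variance of abundance and a rolling mean of body size are z-scored and combined so that rising abundance indicators and shrinking body size both push the index upward; the window length is arbitrary:

        import numpy as np

        def rolling_stat(x, window, fn):
            return np.array([fn(x[i - window:i]) for i in range(window, len(x) + 1)])

        def lag1_autocorr(w):
            return np.corrcoef(w[:-1], w[1:])[0, 1]

        def zscore(x):
            return (x - np.mean(x)) / (np.std(x) + 1e-12)

        def composite_ews(abundance, body_size, window=20):
            """Composite early-warning index from parallel abundance and trait series:
            rising autocorrelation and variance of abundance, plus declining mean body
            size, all increase the index."""
            ac = rolling_stat(abundance, window, lag1_autocorr)
            var = rolling_stat(abundance, window, np.var)
            size = rolling_stat(body_size, window, np.mean)
            return zscore(ac) + zscore(var) - zscore(size)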

  1. SciSpark: In-Memory Map-Reduce for Earth Science Algorithms

    NASA Astrophysics Data System (ADS)

    Ramirez, P.; Wilson, B. D.; Whitehall, K. D.; Palamuttam, R. S.; Mattmann, C. A.; Shah, S.; Goodman, A.; Burke, W.

    2016-12-01

    We are developing a lightning fast Big Data technology called SciSpark based on ApacheTM Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk. SciSpark extends Spark to support Earth Science use in three ways: Efficient ingest of N-dimensional geo-located arrays (physical variables) from netCDF3/4, HDF4/5, and/or OPeNDAP URLS; Array operations for dense arrays in scala and Java using the ND4S/ND4J or Breeze libraries; Operations to "split" datasets across a Spark cluster by time or space or both. For example, a decade-long time-series of geo-variables can be split across time to enable parallel "speedups" of analysis by day, month, or season. Similarly, very high-resolution climate grids can be partitioned into spatial tiles for parallel operations across rows, columns, or blocks. In addition, using Spark's gateway into python, PySpark, one can utilize the entire ecosystem of numpy, scipy, etc. Finally, SciSpark Notebooks provide a modern eNotebook technology in which scala, python, or spark-sql codes are entered into cells in the Notebook and executed on the cluster, with results, plots, or graph visualizations displayed in "live widgets". We have exercised SciSpark by implementing three complex Use Cases: discovery and evolution of Mesoscale Convective Complexes (MCCs) in storms, yielding a graph of connected components; PDF Clustering of atmospheric state using parallel K-Means; and statistical "rollups" of geo-variables or model-to-obs. differences (i.e. mean, stddev, skewness, & kurtosis) by day, month, season, year, and multi-year. Geo-variables are ingested and split across the cluster using methods on the sciSparkContext object including netCDFVariables() for spatial decomposition and wholeNetCDFVariables() for time-series. The presentation will cover the architecture of SciSpark, the design of the scientific RDD (sRDD) data structures for N-dim. arrays, results from the three science Use Cases, example Notebooks, lessons learned from the algorithm implementations, and parallel performance metrics.

  2. Octree-based, GPU implementation of a continuous cellular automaton for the simulation of complex, evolving surfaces

    NASA Astrophysics Data System (ADS)

    Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.

    2011-03-01

    Presently, dynamic surface-based models are required to contain increasingly larger numbers of points and to propagate them over longer time periods. For large numbers of surface points, the octree data structure can be used as a balance between low memory occupation and relatively rapid access to the stored data. For evolution rules that depend on neighborhood states, extended simulation periods can be obtained by using simplified atomistic propagation models, such as the Cellular Automata (CA). This method, however, has an intrinsic parallel updating nature and the corresponding simulations are highly inefficient when performed on classical Central Processing Units (CPUs), which are designed for the sequential execution of tasks. In this paper, a series of guidelines is presented for the efficient adaptation of octree-based, CA simulations of complex, evolving surfaces into massively parallel computing hardware. A Graphics Processing Unit (GPU) is used as a cost-efficient example of the parallel architectures. For the actual simulations, we consider the surface propagation during anisotropic wet chemical etching of silicon as a computationally challenging process with a wide-spread use in microengineering applications. A continuous CA model that is intrinsically parallel in nature is used for the time evolution. Our study strongly indicates that parallel computations of dynamically evolving surfaces simulated using CA methods are significantly benefited by the incorporation of octrees as support data structures, substantially decreasing the overall computational time and memory usage.

  3. Detecting Forest Disturbance Events from MODIS and Landsat Time Series for the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Ganguly, S.; Saatchi, S. S.; Hagen, S. C.; Harris, N.; Yu, Y.; Nemani, R. R.

    2013-12-01

    Spatial and temporal patterns of forest disturbance and regrowth processes are key for understanding aboveground terrestrial vegetation biomass and carbon stocks at regional-to-continental scales. The NASA Carbon Monitoring System (CMS) program seeks key input datasets, especially information related to impacts due to natural/man-made disturbances in forested landscapes of the Conterminous U.S. (CONUS), that would reduce uncertainties in current carbon stock estimation and emission models. This study provides an end-to-end forest disturbance detection framework based on pixel time series analysis from MODIS (Moderate Resolution Imaging Spectroradiometer) and Landsat surface spectral reflectance data. We applied the BFAST (Breaks for Additive Seasonal and Trend) algorithm to the Normalized Difference Vegetation Index (NDVI) data for the time period from 2000 to 2011. A harmonic seasonal model was implemented in BFAST to decompose the time series into seasonal and interannual trend components in order to detect abrupt changes in the magnitude and direction of these components. To apply BFAST over the whole CONUS, we built a parallel computing setup for processing massive time-series data using the high performance computing facility of the NASA Earth Exchange (NEX). In the implementation process, we extracted the dominant deforestation events from the magnitude of abrupt changes in both seasonal and interannual components, and estimated dates for the corresponding deforestation events. We estimated the recovery rate for deforested regions through regression models developed between NDVI values and time since disturbance for all pixels. A similar implementation of the BFAST algorithm was performed over selected Landsat scenes (all cloud-free Landsat data were used to generate NDVI from atmospherically corrected spectral reflectances) to demonstrate the spatial coherence in retrieval layers between MODIS and Landsat. In the future, the application of this largely parallel disturbance detection setup will facilitate large-scale processing and wall-to-wall mapping of forest disturbance and regrowth from Landsat data for the whole of CONUS. This exercise will aid in improving the present capabilities of the NASA CMS effort in reducing uncertainties in national-level estimates of biomass and carbon stocks.
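    The per-pixel, embarrassingly parallel structure of the processing above can be sketched as follows; this is a crude stand-in for BFAST (it only scans each pixel's NDVI series for its largest abrupt drop), and the array names and window length are assumptions:

```python
# Hedged stand-in for per-pixel disturbance screening (not BFAST itself):
# flag the largest abrupt NDVI drop per pixel and parallelize across pixels.
import numpy as np
from multiprocessing import Pool

def largest_drop(series, window=3):
    """Return (index, magnitude) of the biggest drop between consecutive window means."""
    best_i, best_mag = -1, 0.0
    for i in range(window, len(series) - window):
        drop = series[i - window:i].mean() - series[i:i + window].mean()
        if drop > best_mag:
            best_i, best_mag = i, drop
    return best_i, best_mag

def scan_pixels(ndvi_stack, processes=8):
    """ndvi_stack: NDVI array of shape (time, rows, cols)."""
    t, r, c = ndvi_stack.shape
    pixel_series = ndvi_stack.reshape(t, r * c).T     # one time series per pixel
    with Pool(processes) as pool:
        results = pool.map(largest_drop, list(pixel_series))
    return np.array(results).reshape(r, c, 2)          # per-pixel (index, magnitude)
```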

  4. Analysis of DE-1 PWI electric field data

    NASA Technical Reports Server (NTRS)

    Weimer, Daniel

    1994-01-01

    The measurement of low frequency electric field oscillations may be accomplished with the Plasma Wave Instrument (PWI) on DE 1. Oscillations at a frequency around 1 Hz are below the range of the conventional plasma wave receivers, but they can be detected by using a special processing of the quasi-static electric field data. With this processing it is also possible to determine if the electric field oscillations are predominantly parallel or perpendicular to the ambient magnetic field. The quasi-static electric field in the DE 1 spin/orbit plane is measured with a long-wire 'double probe'. This antenna is perpendicular to the satellite spin axis, which in turn is approximately perpendicular to the geomagnetic field in the polar magnetosphere. The electric field data are digitally sampled at a frequency of 16 Hz. The measured electric field signal, which has had phase reversals introduced by the rotating antenna, is multiplied by the sine of the rotation angle between the antenna and the magnetic field. This is called the 'perpendicular' signal. The measured time series is also multiplied with the cosine of the angle to produce a separate 'parallel' signal. These two separate time series are then processed to determine the frequency power spectrum.
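    A short numpy sketch of the demodulation described above (array names and the spectral estimator are illustrative; the 16 Hz sample rate is taken from the text):

```python
# Sketch: split the spin-modulated field into 'perpendicular' and 'parallel'
# series using the antenna rotation angle, then estimate their power spectra.
import numpy as np

FS = 16.0  # sample rate in Hz, as stated in the abstract

def decompose(e_measured, rotation_angle):
    """e_measured, rotation_angle: 1-D arrays sampled at FS."""
    e_perp = e_measured * np.sin(rotation_angle)   # 'perpendicular' signal
    e_par = e_measured * np.cos(rotation_angle)    # 'parallel' signal
    return e_perp, e_par

def power_spectrum(x):
    """Simple periodogram of a demodulated series."""
    spec = np.abs(np.fft.rfft(x - x.mean()))**2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    return freqs, spec
```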

  5. COMPACT CASCADE IMPACTOR

    DOEpatents

    Lippmann, M.

    1964-04-01

    A cascade particle impactor capable of collecting particles and distributing them according to size is described. In addition, the device is capable of collecting a series of different samples on a pair of slides, so that less time is required for changing slides. Other features of the device are its compactness and ruggedness, making it useful under field conditions. Essentially the unit consists of a main body with a series of transverse jets discharging on a pair of parallel, spaced glass plates. The plates are capable of being moved incrementally in steps to obtain the multiple samples. (AEC)

  6. Adding Resistances and Capacitances in Introductory Electricity

    NASA Astrophysics Data System (ADS)

    Efthimiou, C. J.; Llewellyn, R. A.

    2005-09-01

    All introductory physics textbooks, with or without calculus, cover the addition of both resistances and capacitances in series and in parallel as discrete summations. However, none includes problems that involve continuous versions of resistors in parallel or capacitors in series. This paper introduces a method for solving the continuous problems that is logical, straightforward, and within the mathematical preparation of students at the introductory level.
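    As a hedged illustration of the continuum limits this kind of problem involves (our example, not reproduced from the article): slicing a bar of resistivity ρ and varying cross-section A(x) gives infinitesimal resistors in series, stacked thin dielectric layers give infinitesimal capacitors in series, and elementary conductances dG across a film add in parallel:

```latex
% Illustrative continuum limits (not reproduced from the article):
R = \int_0^{L} \frac{\rho\,\mathrm{d}x}{A(x)}, \qquad
\frac{1}{C} = \int_0^{d} \frac{\mathrm{d}x}{\varepsilon(x)\,A}, \qquad
\frac{1}{R_{\parallel}} = \int \mathrm{d}G .
```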

  7. Future projects in asteroseismology: the unique role of Antarctica

    NASA Astrophysics Data System (ADS)

    Mosser, B.; Siamois Team

    Asteroseismology requires observables registered in stringent conditions: very high sensitivity, uninterrupted time series, long duration. These specifications make it possible to study the details of stellar interior structure. Space-borne and ground-based asteroseismic projects are presented and compared. With CoRoT as a precursor, then Kepler and perhaps Plato, the roadmap in space appears to be precisely designed. In parallel, ground-based projects are necessary to provide different and unique information on bright stars with Doppler measurements. Dome C appears to be the ideal place for ground-based asteroseismic observations. The unequalled weather conditions yield a duty cycle comparable to space. Long time series (up to 3 months) will be possible, thanks to the long duration of the polar night.

  8. Development and optimization of water treatment reactors using TiO2-modified polymer beads with a refractive index identical to that of water

    NASA Astrophysics Data System (ADS)

    Myoga, Arata; Iwashita, Ryutaro; Unno, Noriyuki; Satake, Shin-ichi; Taniguchi, Jun; Yuki, Kazuhisa; Seki, Yohji

    2018-03-01

    Various water purification reactors were constructed using beads of TiO2-coated MEXFLON, which is a fluoropolymer exhibiting a refractive index identical to that of water. The performance of these reactors was evaluated in a recirculation experiment utilizing an aqueous solution of methylene blue. Reactor pipes (length = 150 mm, internal diameter = 10 mm) were made of a fluorinated ethylene polymer with a refractive index of 1.338 and contained 206-bead clusters. A UV lamp was used to irradiate eight reactor pipes surrounding it. The above-mentioned eight bead-packed pipes were connected both in series and in parallel, and the performances of these two reactor types were compared. A pseudo-first-order rate constant of 0.70 h⁻¹ was obtained for the series connection, whereas the corresponding value for the parallel connection was 1.5 times smaller, confirming the effectiveness of increasing the reaction surface by employing a larger number of beads.

  9. Development and optimization of water treatment reactors using TiO2-modified polymer beads with a refractive index identical to that of water

    NASA Astrophysics Data System (ADS)

    Myoga, Arata; Iwashita, Ryutaro; Unno, Noriyuki; Satake, Shin-ichi; Taniguchi, Jun; Yuki, Kazuhisa; Seki, Yohji

    2018-06-01

    Various water purification reactors were constructed using beads of TiO2-coated MEXFLON, which is a fluoropolymer exhibiting a refractive index identical to that of water. The performance of these reactors was evaluated in a recirculation experiment utilizing an aqueous solution of methylene blue. Reactor pipes (length = 150 mm, internal diameter = 10 mm) were made of a fluorinated ethylene polymer with a refractive index of 1.338 and contained 206-bead clusters. A UV lamp was used to irradiate eight reactor pipes surrounding it. The above-mentioned eight bead-packed pipes were connected both in series and in parallel, and the performances of these two reactor types were compared. A pseudo-first-order rate constant of 0.70 h⁻¹ was obtained for the series connection, whereas the corresponding value for the parallel connection was 1.5 times smaller, confirming the effectiveness of increasing the reaction surface by employing a larger number of beads.

  10. Performance Comparison of Big Data Analytics With NEXUS and Giovanni

    NASA Astrophysics Data System (ADS)

    Jacob, J. C.; Huang, T.; Lynnes, C.

    2016-12-01

    NEXUS is an emerging data-intensive analysis framework developed with a new approach for handling science data that enables large-scale data analysis. It is available as open source. We compare the performance of NEXUS and Giovanni for 3 statistics algorithms applied to NASA datasets. Giovanni is a statistics web service at NASA Distributed Active Archive Centers (DAACs). NEXUS is a cloud-computing environment developed at JPL and built on Apache Solr, Cassandra, and Spark. We compute a global time-averaged map, a correlation map, and an area-averaged time series. The first two algorithms average over time to produce a value for each pixel in a 2-D map. The third algorithm averages spatially to produce a single value for each time step. This talk reports benchmark comparison findings that indicate a 15x speedup with NEXUS over Giovanni to compute the area-averaged time series of daily precipitation rate for the Tropical Rainfall Measuring Mission (TRMM, with 0.25 degree spatial resolution) for the Continental United States over 14 years (2000-2014) with 64-way parallelism and 545 tiles per granule. 16-way parallelism with 16 tiles per granule worked best with NEXUS for computing an 18-year (1998-2015) TRMM daily precipitation global time-averaged map (2.5x speedup) and an 18-year global map of correlation between TRMM daily precipitation and TRMM real-time daily precipitation (7x speedup). These and other benchmark results will be presented along with key lessons learned in applying the NEXUS tiling approach to big data analytics in the cloud.

  11. Processing large remote sensing image data sets on Beowulf clusters

    USGS Publications Warehouse

    Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Schmidt, Gail

    2003-01-01

    High-performance computing is often concerned with the speed at which floating-point calculations can be performed. The architectures of many parallel computers and/or their network topologies are based on these investigations. Often, benchmarks resulting from these investigations are compiled with little regard to how a large dataset would move about in these systems. This part of the Beowulf study addresses that concern by looking at specific applications software and system-level modifications. Applications include an implementation of a smoothing filter for time-series data, a parallel implementation of the decision tree algorithm used in the Landcover Characterization project, a parallel Kriging algorithm used to fit point data collected in the field on invasive species to a regular grid, and modifications to the Beowulf project's resampling algorithm to handle larger, higher resolution datasets at a national scale. Systems-level investigations include a feasibility study on Flat Neighborhood Networks and modifications of that concept with Parallel File Systems.

  12. Broadband piezoelectric energy harvesting devices using multiple bimorphs with different operating frequencies.

    PubMed

    Xue, Huan; Hu, Yuantai; Wang, Qing-Ming

    2008-09-01

    This paper presents a novel approach for designing broadband piezoelectric harvesters by integrating multiple piezoelectric bimorphs (PBs) with different aspect ratios into a system. The effect of two connection patterns among PBs, in series and in parallel, on improving energy harvesting performance is discussed. For ambient vibrations with multifrequency spectra, it is found that: 1) the operating frequency band (OFB) of a harvesting structure can be widened by connecting multiple PBs with different aspect ratios in series; 2) the OFB of a harvesting structure can be shifted to the dominant frequency domain of the ambient vibrations by increasing or decreasing the number of PBs in parallel. Numerical results show that the OFB of the piezoelectric energy harvesting devices can be tailored by the connection patterns (i.e., in series and in parallel) among PBs.

  13. Parsing Flowcharts and Series-Parallel Graphs

    DTIC Science & Technology

    1978-11-01

    descriptions of the graph. This possible multiplicity is undesirable in most practical applications, a fact that makes particularly useful reduction...to parse TT networks, some of the features that make this parsing method useful in other cases are more naturally introduced in the context of this...as Figure 4.5 shows. This multiplicity is due to the associativity of consecutive Two Terminal Series and Two Terminal Parallel compositions. In spite

  14. Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján

    2017-06-01

    It is widely acknowledged that in the hydrological and meteorological communities, there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines the advantages offered by modelling the system dynamics with a deterministic model and the deterministic forecasting error series with a data-driven model in parallel. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting and simulation performance of such models have to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting applicable to daily river discharge forecast error data from the GARCH family of time series models. We concentrated on verifying whether the use of a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; then we fitted the GARCH-type models to the data and compared their fit with that of an ARMA-type model. We produced one-step-ahead forecasts from the fitted models and again provided comparisons of the models' performance.
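    A hedged sketch of the model-error step using the third-party Python `arch` package (an assumption; it is not the software used in the study): fit a GARCH(1,1) model with an AR(1) mean to the deterministic model's error series and produce a one-step-ahead forecast of the error and its conditional variance:

```python
# Hedged sketch: GARCH(1,1) with AR(1) mean fitted to a forecast-error series,
# using the third-party `arch` package (not necessarily the study's software).
from arch import arch_model

def fit_error_model(errors):
    """errors: 1-D array of daily discharge forecast errors."""
    model = arch_model(errors, mean="AR", lags=1, vol="GARCH", p=1, q=1)
    return model.fit(disp="off")

def one_step_ahead(result):
    """Return the forecast error mean and its conditional (heteroscedastic) variance."""
    fc = result.forecast(horizon=1)
    return fc.mean.iloc[-1, 0], fc.variance.iloc[-1, 0]
```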

  15. Parallel, staged opening switch power conditioning techniques for flux compression generator applications

    NASA Astrophysics Data System (ADS)

    Reinovsky, R. E.; Levi, P. S.; Bueck, J. C.; Goforth, J. H.

    The Air Force Weapons Laboratory, working jointly with Los Alamos National Laboratory, has conducted a series of experiments directed at exploring composite, or staged, switching techniques for use in opening switches in applications which require the conduction of very high currents (or current densities) with very low losses for relatively long times (several tens of microseconds), and the interruption of these currents in much shorter times (ultimately a few hundred nanoseconds). The results of those experiments are reported.

  16. [DORSALIS PEDIS FLAP SERIES-PARALLEL BIG TOE NAIL COMPOSITE TISSUE FLAP TO REPAIR HAND SKIN OF DEGLOVING INJURY WITH THUMB DEFECT].

    PubMed

    Shi, Pengju; Zhang, Wenlong; Zhao, Gang; Li, Zhigang; Zhao, Shaoping; Zhang, Tieshan

    2015-07-01

    To investigate the effectiveness of the dorsalis pedis flap series-parallel big toe nail composite tissue flap in the repair of hand skin degloving injury with thumb defect. Between March 2009 and June 2013, 8 cases of hand degloving injury with thumb defect caused by machine twisting were treated. There were 7 males and 1 female with the mean age of 36 years (range, 26-48 years). Injury located at the left hand in 3 cases and at the right hand in 5 cases. The time from injury to hospitalization was 1.5-4.0 hours (mean, 2.5 hours). The defect area was 8 cm x 6 cm to 15 cm x 1 cm. The thumb defect was rated as degree I in 5 cases and as degree II in 3 cases. The contralateral dorsal skin flap (9 cm x 7 cm to 10 cm x 8 cm) combined with ipsilateral big toe nail composite tissue flap (2.5 cm x 1.8 cm to 3.0 cm x 2.0 cm) was used, including 3 parallel anastomosis flaps and 5 series anastomosis flaps. The donor site of the dorsal flap was repaired with thick skin grafts, and the stump wound was covered with a tongue flap at the shank side of the big toe. Vascular crisis occurred in 1 big toe nail composite tissue flap and margin necrosis occurred in 2 dorsalis pedis flaps; the other flaps survived, and primary healing of the wounds was obtained. The grafted skin at the dorsal donor site all survived, and the skin of the hallux stump had no necrosis. Eight cases were followed up 4-20 months (mean, 15.5 months). All flaps had soft texture and satisfactory appearance; the cutaneous sensory recovery time was 4-7 months (mean, 5 months). At 4 months after operation, the two-point discrimination of the thumb pulp was 8-10 mm (mean, 9 mm), and the two-point discrimination of the dorsal skin flap was 7-9 mm (mean, 8.5 mm). According to the Society of Hand Surgery standard for the evaluation of upper limb function, the results were excellent in 4 cases, good in 3 cases, and fair in 1 case. The donor foot had normal function. The dorsalis pedis flap series-parallel big toe nail composite tissue flap is an ideal way to repair hand skin defects and reconstruct the thumb, with many advantages, including a simple surgical procedure, no limitation of the recipient site, soft texture, satisfactory appearance and function of the reconstructed thumb, and small donor foot loss.

  17. Optimal processor assignment for pipeline computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

    The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput is found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.

  18. A study on simulation of DC high voltage power of LCC series parallel resonant in projectile velocity measurement system

    NASA Astrophysics Data System (ADS)

    Lu, Dong-dong; Gu, Jin-liang; Luo, Hong-e.; Xia, Yan

    2017-10-01

    According to the specific requirements of the X-ray machine system for measuring the velocity of an outfield projectile, a DC high voltage power supply system is designed for high voltage and small current. The system comprises: a series resonant circuit is selected as the full-bridge inverter circuit; high-frequency zero-current soft switching of the high-voltage power supply is realized by PWM output from an STM32; a nanocrystalline alloy transformer is chosen as the high-frequency booster transformer; and the related parameters of the LCC series-parallel resonant circuit are determined according to the preset parameters of the transformer. The concrete method includes: the LCC series-parallel resonant circuit and a voltage doubling circuit are simulated using MULTISIM and MATLAB; an optimal solution and optimal parameters of all parts are selected after simulation analysis; and finally the correctness of the parameters is verified by simulation of the whole system. Through simulation analysis, the output voltage of the series-parallel resonant circuit reaches 10 kV in 28 s; then, passing through the voltage doubling circuit, the output voltage reaches 120 kV in one hour. According to the system, the fluctuation range of the output voltage is small enough to provide a stable X-ray supply for the X-ray machine for measuring the velocity of an outfield projectile. It is fast in charging and high in efficiency.

  19. Secondary School Mathematics, Chapter 13, Perpendiculars and Parallels (I), Chapter 14, Similarity. Student's Text.

    ERIC Educational Resources Information Center

    Stanford Univ., CA. School Mathematics Study Group.

    The first chapter of the seventh unit in this SMSG series discusses perpendiculars and parallels; topics covered include the relationship between parallelism and perpendicularity, rectangles, transversals, parallelograms, general triangles, and measurement of the circumference of the earth. The second chapter, on similarity, discusses scale…

  20. Efficient parallel algorithms for string editing and related problems

    NASA Technical Reports Server (NTRS)

    Apostolico, Alberto; Atallah, Mikhail J.; Larmore, Lawrence; Mcfaddin, H. S.

    1988-01-01

    The string editing problem for input strings x and y consists of transforming x into y by performing a series of weighted edit operations on x of overall minimum cost. An edit operation on x can be the deletion of a symbol from x, the insertion of a symbol in x, or the substitution of a symbol of x with another symbol. This problem has a well known O(|x| |y|) time sequential solution (25). Efficient PRAM (parallel random-access machine) algorithms for the string editing problem are given. If m = min(|x|, |y|) and n = max(|x|, |y|), then the CREW bound is O(log m log n) time with O(mn/log m) processors. In all algorithms, space is O(mn).
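    The sequential dynamic program cited above is the standard weighted edit-distance recurrence; a plain (non-PRAM) sketch follows, with illustrative unit costs:

```python
# Standard O(|x|*|y|) sequential dynamic program for weighted string editing.
# The paper's contribution is the PRAM parallelization, which is not shown here.
def edit_cost(x, y, w_del=1.0, w_ins=1.0, w_sub=1.0):
    m, n = len(x), len(y)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + w_del
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if x[i - 1] == y[j - 1] else w_sub
            D[i][j] = min(D[i - 1][j] + w_del,       # delete x[i-1]
                          D[i][j - 1] + w_ins,       # insert y[j-1]
                          D[i - 1][j - 1] + sub)     # substitute or match
    return D[m][n]
```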

  1. Reliable Early Classification on Multivariate Time Series with Numerical and Categorical Attributes

    DTIC Science & Technology

    2015-05-22

    design a procedure of feature extraction in REACT named MEG (Mining Equivalence classes with shapelet Generators) based on the concept of...Equivalence Classes Mining [12, 15]. MEG can efficiently and effectively generate the discriminative features. In addition, several strategies are proposed...technique of parallel computing [4] to propose a process of parallel MEG for substantially reducing the computational overhead of discovering shapelet

  2. The Validation of Parallel Test Forms: "Mountain" and "Beach" Picture Series for Assessment of Language Skills

    ERIC Educational Resources Information Center

    Bae, Jungok; Lee, Yae-Sheik

    2011-01-01

    Pictures are widely used to elicit expressive language skills, and pictures must be established as parallel before changes in ability can be demonstrated by assessment using picture prompts. Why parallel prompts are required, and what must be done to ensure that prompts are in fact parallel, is not widely known. To date, evidence of…

  3. Massive parallelization of serial inference algorithms for a complex generalized linear model

    PubMed Central

    Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David

    2014-01-01

    Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units, relatively inexpensive highly parallel computing devices, can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety. PMID:25328363
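    A minimal serial sketch of cyclic coordinate descent for L2-regularized logistic regression, the style of algorithm the authors massively parallelize on GPUs; this is illustrative only and is not the authors' implementation or their conditioned likelihood:

```python
# Hedged sketch: serial cyclic coordinate descent for L2-regularized logistic
# regression (illustrative; not the authors' GPU implementation).
import numpy as np

def ccd_logistic(X, y, lam=1.0, sweeps=50):
    """X: (n, p) design matrix; y: 0/1 labels."""
    n, p = X.shape
    beta = np.zeros(p)
    eta = X @ beta                                     # linear predictor
    for _ in range(sweeps):
        for j in range(p):
            mu = 1.0 / (1.0 + np.exp(-eta))            # current probabilities
            grad = X[:, j] @ (mu - y) + lam * beta[j]  # d loss / d beta_j
            hess = X[:, j]**2 @ (mu * (1.0 - mu)) + lam
            step = grad / hess                         # one Newton step on coordinate j
            beta[j] -= step
            eta -= step * X[:, j]                      # keep eta consistent with beta
    return beta
```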

  4. The MOLICEL(R) rechargeable lithium system: Multicell battery aspects

    NASA Technical Reports Server (NTRS)

    Fouchard, D.; Taylor, J. B.

    1987-01-01

    MOLICEL rechargeable lithium cells were cycled in batteries using series, parallel, and series/parallel connections. The individual cell voltages and branch currents were measured to understand the cell interactions. The observations were interpreted in terms of the inherent characteristics of the Li/MoS2 system and in terms of a singular cell failure mode. The results confirm that correctly configured multicell batteries using MOLICELs have performance characteristics comparable to those of single cells.

  5. Physics Mining of Multi-Source Data Sets

    NASA Technical Reports Server (NTRS)

    Helly, John; Karimabadi, Homa; Sipes, Tamara

    2012-01-01

    Powerful new parallel data mining algorithms can produce diagnostic and prognostic numerical models and analyses from observational data. These techniques yield higher-resolution measures than ever before of environmental parameters by fusing synoptic imagery and time-series measurements. These techniques are general and relevant to observational data, including raster, vector, and scalar, and can be applied in all Earth- and environmental science domains. Because they can be highly automated and are parallel, they scale to large spatial domains and are well suited to change and gap detection. This makes it possible to analyze spatial and temporal gaps in information, and facilitates within-mission replanning to optimize the allocation of observational resources. The basis of the innovation is the extension of a recently developed set of algorithms packaged into MineTool to multi-variate time-series data. MineTool is unique in that it automates the various steps of the data mining process, thus making it amenable to autonomous analysis of large data sets. Unlike techniques such as Artificial Neural Nets, which yield a blackbox solution, MineTool's outcome is always an analytical model in parametric form that expresses the output in terms of the input variables. This has the advantage that the derived equation can then be used to gain insight into the physical relevance and relative importance of the parameters and coefficients in the model. This is referred to as physics-mining of data. The capabilities of MineTool are extended to include both supervised and unsupervised algorithms, handle multi-type data sets, and parallelize it.

  6. Cloud masking and removal in remote sensing image time series

    NASA Astrophysics Data System (ADS)

    Gómez-Chova, Luis; Amorós-López, Julia; Mateo-García, Gonzalo; Muñoz-Marí, Jordi; Camps-Valls, Gustau

    2017-01-01

    Automatic cloud masking of Earth observation images is one of the first required steps in optical remote sensing data processing since the operational use and product generation from satellite image time series might be hampered by undetected clouds. The high temporal revisit of current and forthcoming missions and the scarcity of labeled data force us to cast cloud screening as an unsupervised change detection problem in the temporal domain. We introduce a cloud screening method based on detecting abrupt changes along the time dimension. The main assumption is that image time series follow smooth variations over land (background) and abrupt changes will be mainly due to the presence of clouds. The method estimates the background surface changes using the information in the time series. In particular, we propose linear and nonlinear least squares regression algorithms that minimize both the prediction and the estimation error simultaneously. Then, significant differences in the image of interest with respect to the estimated background are identified as clouds. The use of kernel methods allows the generalization of the algorithm to account for higher-order (nonlinear) feature relations. After the proposed cloud masking and cloud removal, cloud-free time series at high spatial resolution can be used to obtain a better monitoring of land cover dynamics and to generate more elaborated products. The method is tested in a dataset with 5-day revisit time series from SPOT-4 at high resolution and with Landsat-8 time series. Experimental results show that the proposed method yields more accurate cloud masks when confronted with state-of-the-art approaches typically used in operational settings. In addition, the algorithm has been implemented in the Google Earth Engine platform, which allows us to access the full Landsat-8 catalog and work in a parallel distributed platform to extend its applicability to a global planetary scale.
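    A minimal per-pixel sketch of the background-estimation idea described above: fit a smooth temporal model (trend plus annual harmonic) by least squares and flag large positive residuals as cloud; the threshold, the harmonic design, and the function names are assumptions, not the paper's operational settings:

```python
# Hedged per-pixel sketch: estimate the smooth background reflectance over time
# and flag large positive departures as clouds. Settings are illustrative.
import numpy as np

def flag_clouds(series, times, n_sigma=3.0):
    """series: reflectance time series for one pixel; times: days since start."""
    t = np.asarray(times, dtype=float)
    A = np.column_stack([np.ones_like(t), t,                 # constant + trend
                         np.sin(2 * np.pi * t / 365.25),     # annual harmonic
                         np.cos(2 * np.pi * t / 365.25)])
    coef, *_ = np.linalg.lstsq(A, series, rcond=None)
    residual = series - A @ coef
    # Clouds brighten optical reflectance, so flag large positive residuals.
    return residual > n_sigma * residual.std()
```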

  7. High-Density Liquid-State Machine Circuitry for Time-Series Forecasting.

    PubMed

    Rosselló, Josep L; Alomar, Miquel L; Morro, Antoni; Oliver, Antoni; Canals, Vincent

    2016-08-01

    Spiking neural networks (SNNs) are the latest generation of neural networks, which try to mimic the real behavior of biological neurons. Although most research in this area is done through software applications, it is in hardware implementations that the intrinsic parallelism of these computing systems is more efficiently exploited. Liquid state machines (LSMs) have arisen as a strategic technique to implement recurrent designs of SNNs with a simple learning methodology. In this work, we show a new low-cost methodology to implement high-density LSMs by using Boolean gates. The proposed method is based on the use of probabilistic computing concepts to reduce hardware requirements, thus considerably increasing the neuron count per chip. The result is a highly functional system that is applied to high-speed time series forecasting.

  8. Millennial-scale climate variations recorded in Early Pliocene colour reflectance time series from the lacustrine Ptolemais Basin (NW Greece)

    NASA Astrophysics Data System (ADS)

    Steenbrink, J.; Kloosterboer-van Hoeve, M. L.; Hilgen, F. J.

    2003-03-01

    Quaternary climate proxy records show compelling evidence for climate variability on time scales of a few thousand years. The causes for these millennial-scale or sub-Milankovitch cycles are still poorly understood, not least due to the complex feedback mechanisms of large ice sheets during the Quaternary. We present evidence of millennial-scale climate variability in Early Pliocene lacustrine sediments from the intramontane Ptolemais Basin in northwestern Greece. The sediments are well exposed in a series of open-pit lignite mines and exhibit a distinct millennial-scale sedimentary cyclicity of alternating lignites and lacustrine marl beds that resulted from precession-induced variations in climate. The higher-frequency, millennial-scale cyclicity is particularly prominent within the grey-coloured marl segment of individual cycles. A stratigraphic interval of ˜115 ka, covering five precession-induced sedimentary cycles, was studied in nine parallel sections from two open-pit lignite mines located several km apart. High-resolution colour reflectance records were used to quantify the within-cycle variability and to determine its lateral continuity. Much of the within-cycle variability could be correlated between the parallel sections, even in fine detail, which suggests that these changes reflect basin-wide variations in environmental conditions related to (regional) climate fluctuations. Interbedded volcanic ash beds demonstrate the synchronicity of these fluctuations and spectral analysis of the reflectance time series shows a significant concentration of within-cycle variability at periods of ˜11, ˜5.5 and ˜2 ka. The occurrence of variability at such time scales at times before the intensification of the Northern Hemisphere glaciation suggests that they cannot solely have resulted from internal ice-sheet dynamics. Possible candidates include harmonics or combination tones of the main orbital cycles, variations in solar output or periodic motions of the Earth and Moon.

  9. Webinar Presentation: Assessing Neurodevelopment in Parallel Animal and Human Studies

    EPA Pesticide Factsheets

    This presentation, Assessing Neurodevelopment in Parallel Animal and Human Studies, was given at the NIEHS/EPA Children's Centers 2015 Webinar Series: Interdisciplinary Approaches to Neurodevelopment held on Sept. 9, 2015.

  10. AMS 14C dating of lime mortar

    NASA Astrophysics Data System (ADS)

    Heinemeier, Jan; Jungner, Högne; Lindroos, Alf; Ringbom, Åsa; von Konow, Thorborg; Rud, Niels

    1997-03-01

    A method for refining lime mortar samples for 14C dating has been developed. It includes mechanical and chemical separation of mortar carbonate with optical control of the purity of the samples. The method has been applied to a large series of AMS datings on lime mortar from three medieval churches on the Åland Islands, Finland. The datings show convincing internal consistency and confine the construction time of the churches to AD 1280-1380 with a most probable date just before AD 1300. We have also applied the method to the controversial Newport Tower, Rhode Island, USA. Our mortar datings confine the building to colonial time in the 17th century and thus refute claims of Viking origin of the tower. For the churches, a parallel series of datings of organic (charcoal) inclusions in the mortar show less reliable results than the mortar samples, which is ascribed to poor association with the construction time.

  11. Mechanical Behavior of Collagen-Fibrin Co-Gels Reflects Transition From Series to Parallel Interactions With Increasing Collagen Content

    PubMed Central

    Lai, Victor K.; Lake, Spencer P.; Frey, Christina R.; Tranquillo, Robert T.; Barocas, Victor H.

    2012-01-01

    Fibrin and collagen, biopolymers occurring naturally in the body, are biomaterials commonly used as scaffolds for tissue engineering. How collagen and fibrin interact to confer macroscopic mechanical properties in collagen-fibrin composite systems remains poorly understood. In this study, we formulated collagen-fibrin co-gels at different collagen-to-fibrin ratios to observe changes in the overall mechanical behavior and microstructure. A modeling framework of a two-network system was developed by modifying our micro-scale model, considering two forms of interaction between the networks: (a) two interpenetrating but noninteracting networks (“parallel”), and (b) a single network consisting of randomly alternating collagen and fibrin fibrils (“series”). Mechanical testing of our gels shows that collagen-fibrin co-gels exhibit intermediate properties (UTS, strain at failure, tangent modulus) compared to those of pure collagen and fibrin. The comparison with model predictions shows that the parallel and series model cases provide upper and lower bounds, respectively, for the experimental data, suggesting that a combination of such interactions exists between the collagen and fibrin in co-gels. A transition from the series model to the parallel model occurs with increasing collagen content, with the series model best describing predominantly fibrin co-gels, and the parallel model best describing predominantly collagen co-gels. PMID:22482659
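    The bounding behaviour reported above mirrors the classical iso-strain ("parallel") and iso-stress ("series") mixing rules for a two-phase material, written here only as a hedged illustration (not the authors' micro-scale network model), with φ the volume fractions and E the phase moduli:

```latex
% Classical parallel (iso-strain) and series (iso-stress) mixing rules,
% shown only to illustrate the upper/lower bounds discussed above.
E_{\mathrm{parallel}} = \phi_{c} E_{c} + \phi_{f} E_{f}, \qquad
\frac{1}{E_{\mathrm{series}}} = \frac{\phi_{c}}{E_{c}} + \frac{\phi_{f}}{E_{f}} .
```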

  12. The Sponge Resistor Model--A Hydrodynamic Analog to Illustrate Ohm's Law, the Resistor Equation R=?l/A, and Resistors in Series and Parallel

    ERIC Educational Resources Information Center

    Pfister, Hans

    2014-01-01

    Physics students encountering electric circuits for the first time often ask why adding more resistors to a circuit sometimes increases and sometimes decreases the resulting total resistance. It appears that these students have an inadequate understanding of current flow and resistance. Students who do not adopt a model of current, voltage, and…

  13. Biphoton Generation Driven by Spatial Light Modulation: Parallel-to-Series Conversion

    NASA Astrophysics Data System (ADS)

    Zhao, Luwei; Guo, Xianxin; Sun, Yuan; Su, Yumian; Loy, M. M. T.; Du, Shengwang

    2016-05-01

    We demonstrate the generation of narrowband biphotons with controllable temporal waveform by spontaneous four-wave mixing in cold atoms. In the group-delay regime, we study the dependence of the biphoton temporal waveform on the spatial profile of the pump laser beam. By using a spatial light modulator, we manipulate the spatial profile of the pump laser and map it onto the two-photon entangled temporal wave function. This parallel-to-series conversion (or spatial-to-temporal mapping) enables coding the parallel classical information of the pump spatial profile to the sequential temporal waveform of the biphoton quantum state. The work was supported by the Hong Kong RGC (Project No. 601113).

  14. Experimental characterization of a binary actuated parallel manipulator

    NASA Astrophysics Data System (ADS)

    Giuseppe, Carbone

    2016-05-01

    This paper describes the BAPAMAN (Binary Actuated Parallel MANipulator) series of parallel manipulators that has been conceived at the Laboratory of Robotics and Mechatronics (LARM). Basic common characteristics of the BAPAMAN series are described. In particular, the use of a reduced number of active degrees of freedom and of design solutions with flexural joints and Shape Memory Alloy (SMA) actuators is outlined, for achieving miniaturization, cost reduction and easy operation. Given the peculiarities of the BAPAMAN architecture, specific experimental tests have been proposed and carried out with the aim of validating the proposed design and evaluating the practical operation performance and characteristics of a built prototype, in particular in terms of operation and workspace characteristics.

  15. Forward Period Analysis Method of the Periodic Hamiltonian System.

    PubMed

    Wang, Pengfei

    2016-01-01

    Using the forward period analysis (FPA), we obtain the period of a Morse oscillator and of a mathematical pendulum system to an accuracy of 100 significant digits. From these results, the long-term [0, 10^60] (time unit) solutions, ranging from the Planck time to the age of the universe, are computed reliably and quickly with a parallel multiple-precision Taylor series (PMT) scheme. The application of FPA to periodic systems can greatly reduce the computation time of long-term reliable simulations. This scheme provides an efficient way to generate reference solutions, against which long-term simulations using other schemes can be tested.

  16. DInSAR time series generation within a cloud computing environment: from ERS to Sentinel-1 scenario

    NASA Astrophysics Data System (ADS)

    Casu, Francesco; Elefante, Stefano; Imperatore, Pasquale; Lanari, Riccardo; Manunta, Michele; Zinno, Ivana; Mathot, Emmanuel; Brito, Fabrice; Farres, Jordi; Lengert, Wolfgang

    2013-04-01

    One of the techniques that will strongly benefit from the advent of the Sentinel-1 system is Differential SAR Interferometry (DInSAR), which has been successfully demonstrated to be an effective tool to detect and monitor ground displacements with centimetre accuracy. The geoscience communities (volcanology, seismicity, …), as well as those related to hazard monitoring and risk mitigation, make extensive use of the DInSAR technique and will take advantage of the huge amount of SAR data acquired by Sentinel-1. Indeed, such information will permit the generation of Earth's surface displacement maps and time series over both large areas and long time spans. However, the issue of managing, processing and analysing the large Sentinel data stream is envisaged by the scientific community to be a major bottleneck, particularly during crisis phases. The emerging need to create a common ecosystem in which data, results and processing tools are shared is envisaged to be a successful way to address this problem and to contribute to the spreading of information and knowledge. The Supersites initiative, as well as the ESA SuperSites Exploitation Platform (SSEP) and the ESA Cloud Computing Operational Pilot (CIOP) projects, provide effective answers to this need and are pushing towards the development of such an ecosystem. It is clear that all current and existing tools for querying, processing and analysing SAR data need not only to be updated to manage the large data stream of the Sentinel-1 satellite, but also to be reorganized to reply quickly to simultaneous and highly demanding user requests, mainly during emergency situations. This translates into the automatic and unsupervised processing of large amounts of data as well as the availability of scalable, widely accessible and high performance computing capabilities. The cloud computing environment makes it possible to achieve all of these objectives, particularly in case of spike and peak requests for processing resources linked to disaster events. This work aims at presenting a parallel computational model for the widely used DInSAR algorithm known as the Small BAseline Subset (SBAS), which has been implemented within the cloud computing environment provided by the ESA-CIOP platform. This activity has resulted in a scalable, unsupervised, portable, and widely accessible (through a web portal) parallel DInSAR computational tool. The SBAS application algorithm has been rewritten and developed within a parallel system environment, i.e., in a form that allows us to benefit from multiple processing units. This required devising a parallel version of the SBAS algorithm and its subsequent implementation, implying additional complexity in algorithm design and efficient multiprocessor programming, with the final aim of parallel performance optimization. Although the presented algorithm has been designed to work with Sentinel-1 data, it can also process other satellite SAR data (ERS, ENVISAT, CSK, TSX, ALOS). The performance of the implemented parallel SBAS version has been tested on the full ASAR archive (64 acquisitions) acquired over the Napoli Bay, a volcanic and densely urbanized area in Southern Italy. The full processing - from the raw data download to the generation of DInSAR time series - was carried out by engaging 4 nodes, each with 2 cores and 16 GB of RAM, and took about 36 hours, compared to about 135 hours for the sequential version. Extensive analysis of other test areas that are significant from the DInSAR and geophysical viewpoints will be presented. Finally, a preliminary performance evaluation of the presented approach within the Sentinel-1 scenario will be provided.

  17. Enhancement of Giant Magneto-Impedance in Series Co-Rich Microwires for Low-Field Sensing Applications

    NASA Astrophysics Data System (ADS)

    Jiang, S. D.; Eggers, T.; Thiabgoh, O.; Xing, D. W.; Fang, W. B.; Sun, J. F.; Srikanth, H.; Phan, M. H.

    2018-02-01

    Two soft ferromagnetic Co68.25Fe4.25Si12.25B15.25 microwires with the same diameter of 50 ± 1 μm but different fabrication processes were placed in series and in parallel circuit configurations to investigate their giant magneto-impedance (GMI) responses in a frequency range of 1-100 MHz for low-field sensing applications. We show that, while the low-field GMI response is significantly reduced in the parallel configuration, it is greatly enhanced in the series connection. These results suggest that a highly sensitive GMI sensor can be designed by arranging multi-wires in a saw-shaped fashion to optimize the sensing area, and soldered together in series connection to maintain the excellent magnetic field sensitivity.

  18. Random-subset fitting of digital holograms for fast three-dimensional particle tracking [invited].

    PubMed

    Dimiduk, Thomas G; Perry, Rebecca W; Fung, Jerome; Manoharan, Vinothan N

    2014-09-20

    Fitting scattering solutions to time series of digital holograms is a precise way to measure three-dimensional dynamics of microscale objects such as colloidal particles. However, this inverse-problem approach is computationally expensive. We show that the computational time can be reduced by an order of magnitude or more by fitting to a random subset of the pixels in a hologram. We demonstrate our algorithm on experimentally measured holograms of micrometer-scale colloidal particles, and we show that 20-fold increases in speed, relative to fitting full frames, can be attained while introducing errors in the particle positions of 10 nm or less. The method is straightforward to implement and works for any scattering model. It also enables a parallelization strategy wherein random-subset fitting is used to quickly determine initial guesses that are subsequently used to fit full frames in parallel. This approach may prove particularly useful for studying rare events, such as nucleation, that can only be captured with high frame rates over long times.
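    A generic sketch of the random-subset strategy (the forward scattering model is left as a placeholder, and the subset fraction is an assumption rather than the paper's value):

```python
# Hedged sketch: fit a forward model to a random subset of hologram pixels.
# `forward_model(params, idx)` is a placeholder for the scattering calculation.
import numpy as np
from scipy.optimize import least_squares

def fit_random_subset(hologram, forward_model, p0, fraction=0.1, seed=0):
    """hologram: 2-D array of measured intensities; p0: initial parameter guess."""
    rng = np.random.default_rng(seed)
    flat = hologram.ravel()
    idx = rng.choice(flat.size, size=int(fraction * flat.size), replace=False)

    def residuals(params):
        return forward_model(params, idx) - flat[idx]

    # Fit only the sampled pixels; the result can seed a full-frame fit.
    return least_squares(residuals, p0)
```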

  19. A review on battery thermal management in electric vehicle application

    NASA Astrophysics Data System (ADS)

    Xia, Guodong; Cao, Lei; Bi, Guanglong

    2017-11-01

    The global issues of energy crisis and air pollution have offered a great opportunity to develop electric vehicles. However, so far, the cycle life of power batteries, environmental adaptability, driving range and charging time still fall far short of the level of traditional vehicles with internal combustion engines. Effective battery thermal management (BTM) is absolutely essential to relieve this situation. This paper reviews the existing literature at two levels, the cell level and the battery module level. For a single battery, specific attention is paid to three important processes: heat generation, heat transport, and heat dissipation. For large format cells, multi-scale multi-dimensional coupled models have been developed. This will facilitate the investigation of factors, such as local irreversible heat generation, thermal resistance, current distribution, etc., that account for the intrinsic temperature gradients existing in a cell. For battery modules based on air and liquid cooling, series, series-parallel and parallel cooling configurations are discussed. Liquid cooling strategies, especially direct liquid cooling strategies, are reviewed, and they may advance the battery thermal management system to a new generation.

  20. Multi-resonant electromagnetic shunt in base isolation for vibration damping and energy harvesting

    NASA Astrophysics Data System (ADS)

    Pei, Yalu; Liu, Yilun; Zuo, Lei

    2018-06-01

    This paper investigates multi-resonant electromagnetic shunts applied to base isolation for dual-function vibration damping and energy harvesting. Two multi-mode shunt circuit configurations, namely parallel and series, are proposed and optimized based on the H2 criteria. The root-mean-square (RMS) value of the relative displacement between the base and the primary structure is minimized. Practically, this will improve the safety of base-isolated buildings subjected to broadband ground acceleration. Case studies of a base-isolated building are conducted in both the frequency and time domains to investigate the effectiveness of multi-resonant electromagnetic shunts under recorded earthquake signals. It shows that both multi-mode shunt circuits outperform traditional single-mode shunt circuits by suppressing the first and the second vibration modes simultaneously. Moreover, for the same stiffness ratio, the parallel shunt circuit is more effective at harvesting energy and suppressing vibration, and can more robustly handle parameter mistuning than the series shunt circuit. Furthermore, this paper discusses experimental validation of the effectiveness of multi-resonant electromagnetic shunts for vibration damping and energy harvesting on a scaled-down base isolation system.

  1. Recursive Algorithms for Real-Time Digital CR-(RC)^n Pulse Shaping

    NASA Astrophysics Data System (ADS)

    Nakhostin, M.

    2011-10-01

    This paper reports on recursive algorithms for real-time implementation of CR-(RC)^n filters in digital nuclear spectroscopy systems. The algorithms are derived by calculating the Z-transfer function of the filters for filter orders up to n=4. The performance of the filters is compared with that of the conventional digital trapezoidal filter using a noise generator which separately generates pure series, 1/f and parallel noise. The results of our study enable one to select the optimum digital filter for different noise and rate conditions.
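    A hedged real-time sketch of such a shaper built from first-order recursions, one CR high-pass followed by n RC low-passes; the coefficients come from the usual backward-difference discretization and are not necessarily the exact Z-transform coefficients derived in the paper:

```python
# Hedged sketch of a recursive CR-(RC)^n shaper: one CR (high-pass) stage
# followed by n RC (low-pass) stages, using backward-difference coefficients.
import numpy as np

def cr_rc_n(x, tau, dt, n=4):
    """x: sampled pulse; tau: shaping time constant; dt: sampling interval."""
    a = tau / (tau + dt)
    # CR differentiator stage.
    y = np.zeros_like(x, dtype=float)
    for k in range(1, len(x)):
        y[k] = a * (y[k - 1] + x[k] - x[k - 1])
    # n RC integrator stages.
    for _ in range(n):
        z = np.zeros_like(y)
        for k in range(1, len(y)):
            z[k] = a * z[k - 1] + (1.0 - a) * y[k]
        y = z
    return y
```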

  2. Multicore: Fallout From a Computing Evolution (LBNL Summer Lecture Series)

    ScienceCinema

    Yelick, Kathy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)

    2018-05-07

    Summer Lecture Series 2008: Parallel computing used to be reserved for big science and engineering projects, but in two years that's all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn't kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community's efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.

  3. Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters

    NASA Technical Reports Server (NTRS)

    Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.

    1989-01-01

    The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some sample results are compared to data obtained from testing hardware inverters.

  4. Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters

    NASA Technical Reports Server (NTRS)

    Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.

    1989-01-01

    The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some examples are compared to data obtained from testing hardware inverters.

  5. Tunable color parallel tandem organic light emitting devices with carbon nanotube and metallic sheet interlayers

    NASA Astrophysics Data System (ADS)

    Oliva, Jorge; Papadimitratos, Alexios; Desirena, Haggeo; De la Rosa, Elder; Zakhidov, Anvar A.

    2015-11-01

    Parallel tandem organic light emitting devices (OLEDs) were fabricated with transparent multiwall carbon nanotube sheets (MWCNT) and thin metal films (Al, Ag) as interlayers. In the parallel monolithic tandem architecture, the MWCNT (or metallic film) interlayers form an active electrode which injects like charges into both subunits. In the case of the parallel tandems with a common anode (C.A.) of this study, holes are injected into the top and bottom subunits from the common interlayer electrode, whereas in the common cathode (C.C.) configuration, electrons are injected into the top and bottom subunits. Both subunits of the tandem can thus be monolithically and functionally connected in an active structure in which each subunit can be electrically addressed separately. Our tandem OLEDs have a polymer as the emitter in the bottom subunit and a small-molecule emitter in the top subunit. We also compared the performance of the parallel tandem with that of the series tandem; the additional advantages of the parallel architecture over the series one were tunable chromaticity, lower-voltage operation, and higher brightness. Finally, we demonstrate that processing the MWCNT sheets as a common anode in parallel tandems is an easy and low-cost process, since their integration as electrodes in OLEDs is achieved by a simple dry lamination process.

  6. Effect of Non-Uniform Heat Generation on Thermionic Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schock, Alfred

    The penalty resulting from non-uniform heat generation in a thermionic reactor is examined. Operation at sub-optimum cesium pressure is shown to reduce this penalty, but at the risk of a condition analogous to burnout. For high pressure diodes, a simple empirical correlation between current, voltage and heat flux is developed and used to analyze the performance penalty associated with two different heat flux profiles, for series- and parallel-connected converters. The results demonstrate that series-connected converters require much finer power flattening than parallel converters. For example, a ±10% variation in heat generation across a series array can result in a 25 to 50% power penalty.

  7. Resonance-induced sensitivity enhancement method for conductivity sensors

    NASA Technical Reports Server (NTRS)

    Tai, Yu-Chong (Inventor); Shih, Chi-yuan (Inventor); Li, Wei (Inventor); Zheng, Siyang (Inventor)

    2009-01-01

    Methods and systems for improving the sensitivity of a variety of conductivity sensing devices, in particular capacitively-coupled contactless conductivity detectors. A parallel inductor is added to the conductivity sensor. The sensor with the parallel inductor is operated at a resonant frequency of the equivalent circuit model. At the resonant frequency, parasitic capacitances that are either in series or in parallel with the conductance (and possibly a series resistance) are substantially removed from the equivalent circuit, leaving a purely resistive impedance. An appreciably higher sensor sensitivity results. Experimental verification shows that sensitivity improvements of the order of 10,000-fold are possible. Examples of detecting particulates with high precision by application of the apparatus and methods of operation are described.
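    The cancellation can be illustrated with the textbook parallel-resonance condition (generic symbols, not the patent's notation): at the resonant frequency the inductive and parasitic capacitive susceptances cancel, leaving only the resistive conductance G of the solution:

```latex
% Generic parallel-LC resonance (not the patent's notation): at \omega_0 the
% reactive susceptances cancel and the admittance is purely resistive.
\omega_0 = \frac{1}{\sqrt{L\,C_{\mathrm{par}}}}, \qquad
Y(\omega_0) = j\omega_0 C_{\mathrm{par}} + \frac{1}{j\omega_0 L} + G = G .
```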

  8. Bypass apparatus and method for series connected energy storage devices

    DOEpatents

    Rouillard, Jean; Comte, Christophe; Daigle, Dominik

    2000-01-01

    A bypass apparatus and method for series connected energy storage devices. Each of the energy storage devices coupled to a common series connection has an associated bypass unit connected thereto in parallel. A current bypass unit includes a sensor which is coupled in parallel with an associated energy storage device or cell and senses an energy parameter indicative of an energy state of the cell, such as cell voltage. A bypass switch is coupled in parallel with the energy storage cell and operable between a non-activated state and an activated state. The bypass switch, when in the non-activated state, is substantially non-conductive with respect to current passing through the energy storage cell and, when in the activated state, provides a bypass current path for passing current to the series connection so as to bypass the associated cell. A controller controls activation of the bypass switch in response to the voltage of the cell deviating from a pre-established voltage setpoint. The controller may be included within the bypass unit or be disposed on a control platform external to the bypass unit. The bypass switch may, when activated, establish a permanent or a temporary bypass current path.

  9. dc properties of series-parallel arrays of Josephson junctions in an external magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewandowski, S.J.

    1991-04-01

    A detailed dc theory of superconducting multijunction interferometers has previously been developed by several authors for the case of parallel junction arrays. The theory is now extended to cover the case of a loop containing several junctions connected in series. The problem is closely associated with high-T_c superconductors and their clusters of intrinsic Josephson junctions. These materials exhibit spontaneous interferometric effects, and there is no reason to assume that the intrinsic junctions form only parallel arrays. A simple formalism of phase states is developed in order to express the superconducting phase differences across the junctions forming a series array as functions of the phase difference across the weakest junction of the system, and to relate the differences in critical currents of the junctions to gaps in the allowed ranges of their phase functions. This formalism is used to investigate the energy states of the array, which in the case of different junctions are split and separated by energy barriers of height depending on the phase gaps. Modifications of the washboard model of a single junction are shown. Next a superconducting inductive loop containing a series array of two junctions is considered, and this model is used to demonstrate the transitions between phase states and the associated instabilities. Finally, the critical current of a parallel connection of two series arrays is analyzed and shown to be a multivalued function of the externally applied magnetic flux. The instabilities caused by the presence of intrinsic serial junctions in granular high-T_c materials are pointed out as a potential source of additional noise.
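
    For reference, the textbook flux modulation of the critical current of a simple two-junction parallel loop (identical junctions, negligible loop inductance) is given below; the article generalizes this kind of result to parallel connections of series junction arrays, where the critical current becomes multivalued in the applied flux.

```latex
% Baseline two-junction dc-SQUID result (identical junctions, negligible loop
% inductance); not the paper's series-array expression.
I_c(\Phi_{\mathrm{ext}}) \;=\; 2 I_0 \left|\cos\!\left(\frac{\pi\,\Phi_{\mathrm{ext}}}{\Phi_0}\right)\right|,
\qquad \Phi_0 = \frac{h}{2e}.
```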

  10. An application of HOMER and ACMANT for homogenising monthly precipitation records in Ireland

    NASA Astrophysics Data System (ADS)

    Coll, John; Curley, Mary; Domonkos, Peter; Aguilar, Enric; Walsh, Seamus; Sweeney, John

    2015-04-01

    Climate change studies based only on raw long-term data are potentially flawed due to the many breaks introduced from non-climatic sources. Consequently, accurate climate data are an essential prerequisite for climate-related decision making, and quality-controlled, homogenised climate data are becoming integral to European Union Member State efforts to deliver climate services. Ireland has a good repository of monthly precipitation data at approximately 1900 locations stored in the Met Éireann database. The record length at individual precipitation stations varies greatly. However, an audit of the data established the continuous record length at each station and the number of missing months, and on this basis two initial subsets of station series (n = 88 and n = 110) were identified for preliminary homogenisation efforts. The HOMER joint detection algorithm was applied to the combined network of these 198 longer station series on an Ireland-wide basis, where contiguous intact monthly records ranged from ~40 to 71 years (1941 - 2010). HOMER detected 91 breaks in total in the country-wide analysis, distributed across 63 (~32%) of the 71-year series records analysed. In a separate approach, four sub-series clusters (n = 38 - 61) for the 1950 - 2010 period were used in a parallel analysis applying both ACMANT and HOMER to a regionalised split of the 198 series. By comparison, ACMANT detected a considerably higher number of breaks across the four regional series clusters: 238, distributed across 123 (~62%) of the 61-year series records analysed. These preliminary results indicate a relatively high proportion of detected breaks in the series, a situation not generally reflected in observed later 20th century precipitation records across Europe (Domonkos, 2014). However, this elevated ratio of series with detected breaks (~32% in HOMER and ~62% in ACMANT) parallels the break detection rate in a recent analysis of series in the Netherlands (Buishand et al., 2013). In the case of Ireland, the climate is even more markedly maritime than that of the Netherlands and the spatial correlations between the Irish series are high (>0.8). Therefore it is likely that both HOMER and ACMANT are detecting relatively small breaks in the series; e.g. the overall range of correction amplitudes derived by HOMER was small, and corrections were only applied to sections of the corrected series. As Ireland has a relatively dense network of highly correlated station series, we anticipate continued high detection rates as the analysis is extended to incorporate a greater number of station series, and that the ongoing work will quantify the extent of any breaks in Ireland's monthly precipitation series. KEY WORDS: Ireland, precipitation, time series, homogenisation, HOMER, ACMANT. References: Buishand, T.A., De Martino, G., Spreeuw, J.N., Brandsma, T. (2013). Homogeneity of precipitation series in the Netherlands and their trends in the past century. International Journal of Climatology 33:815-833. Domonkos, P. (2014). Homogenisation of precipitation time series with ACMANT. Theoretical and Applied Climatology 118:1-2. DOI 10.1007/s00704-014-1298-5.
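
    A quick arithmetic check of the quoted detection ratios, assuming both percentages are taken over the full set of 198 analysed series:

```python
# Check the break-detection ratios quoted in the abstract, assuming both
# percentages refer to the combined set of 198 station series.
n_series = 88 + 110                      # the two station subsets combined

homer_breaks,  homer_series  = 91, 63
acmant_breaks, acmant_series = 238, 123

print(f"HOMER : {homer_breaks} breaks in {homer_series}/{n_series} series "
      f"({100 * homer_series / n_series:.0f}%)")
print(f"ACMANT: {acmant_breaks} breaks in {acmant_series}/{n_series} series "
      f"({100 * acmant_series / n_series:.0f}%)")
```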

  11. Impact of the coupling effect and the configuration on a compact rectenna array

    NASA Astrophysics Data System (ADS)

    Rivière, J.; Douyere, A.; Luk, J. D. Lan Sun

    2014-10-01

    This paper proposes an experimental study of the coupling effect in a rectenna array. The rectifying antenna consists of a compact and efficient rectifying circuit in a series topology, coupled with a small metamaterial-inspired antenna. The rectenna array's behavior is measured in the X plane, with series and parallel DC-combining configurations of two and three rectennas spaced from 3 cm to 10 cm apart. This study shows that the maximum efficiency is reached for the series configuration, with a resistive load of 10 kΩ. The optimal distance is not significant for either the series or the parallel configuration. Then, a comparison between a rectenna array with non-optimal mutual coupling and a more traditional patch rectenna is performed. Finally, a practical application is tested to demonstrate the effectiveness of such a small rectenna array.

  12. Unbiased Rare Event Sampling in Spatial Stochastic Systems Biology Models Using a Weighted Ensemble of Trajectories

    PubMed Central

    Donovan, Rory M.; Tapia, Jose-Juan; Sullivan, Devin P.; Faeder, James R.; Murphy, Robert F.; Dittrich, Markus; Zuckerman, Daniel M.

    2016-01-01

    The long-term goal of connecting scales in biological simulation can be facilitated by scale-agnostic methods. We demonstrate that the weighted ensemble (WE) strategy, initially developed for molecular simulations, applies effectively to spatially resolved cell-scale simulations. The WE approach runs an ensemble of parallel trajectories with assigned weights and uses a statistical resampling strategy of replicating and pruning trajectories to focus computational effort on difficult-to-sample regions. The method can also generate unbiased estimates of non-equilibrium and equilibrium observables, sometimes with significantly less aggregate computing time than would be possible using standard parallelization. Here, we use WE to orchestrate particle-based kinetic Monte Carlo simulations, which include spatial geometry (e.g., of organelles, plasma membrane) and biochemical interactions among mobile molecular species. We study a series of models exhibiting spatial, temporal and biochemical complexity and show that although WE has important limitations, it can achieve performance significantly exceeding standard parallel simulation—by orders of magnitude for some observables. PMID:26845334
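
    A minimal sketch of one weighted-ensemble resampling step in the spirit of the method described above (the bin definition, target walker count and walker representation are placeholders): heavy walkers are split and light walkers are merged within each bin so that the total statistical weight is conserved.

```python
# Minimal weighted-ensemble (WE) resampling sketch: within each bin, merge the
# lightest walkers (survivor chosen with probability proportional to weight,
# which keeps the estimator unbiased) and split the heaviest ones until each
# occupied bin holds a target number of walkers. Total weight is conserved.
import random

def we_resample(walkers, bin_of, target_per_bin=4):
    """walkers: list of (state, weight) pairs; bin_of maps a state to a bin id."""
    bins = {}
    for state, w in walkers:
        bins.setdefault(bin_of(state), []).append([state, w])

    out = []
    for members in bins.values():
        # Merge: combine the two lightest walkers until the bin is small enough.
        while len(members) > target_per_bin:
            members.sort(key=lambda sw: sw[1])
            (s1, w1), (s2, w2) = members.pop(0), members.pop(0)
            survivor = s1 if random.random() < w1 / (w1 + w2) else s2
            members.append([survivor, w1 + w2])
        # Split: halve the heaviest walker until the bin reaches the target count.
        while len(members) < target_per_bin:
            members.sort(key=lambda sw: sw[1])
            s, w = members.pop()              # heaviest walker
            members += [[s, w / 2.0], [s, w / 2.0]]
        out += [(s, w) for s, w in members]
    return out
```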

  13. Fundamental physics issues of multilevel logic in developing a parallel processor.

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Anirban; Miki, Kazushi

    2007-06-01

    In the last century, on and off physical switches were equated with the two decisions 0 and 1, so that all information could be expressed in binary digits and physically realized by switches connected in a circuit. Apart from significantly increasing memory density, having more possible choices in a given space makes pattern-logic a reality, and manipulation of the pattern would allow logic itself to be controlled, generating a new kind of processor. Von Neumann's computer is based on sequential logic, processing bits one by one; but since pattern-logic is generated on a surface, viewing the whole pattern at once is truly parallel processing. Following von Neumann's and Shannon's fundamental thermodynamic approaches, we have built a compatible model based on a series of single-molecule multibit logic systems of 4-12 bits in a UHV-STM. Multilevel communication and pattern formation on their monolayer is experimentally verified. Furthermore, the developed intelligent monolayer is trained by an artificial neural network. The fundamental weak interactions needed for building a truly parallel processor are therefore explored here physically and theoretically.

  14. Toward Wearable Energy Storage Devices: Paper-Based Biofuel Cells based on a Screen-Printing Array Structure.

    PubMed

    Shitanda, Isao; Momiyama, Misaki; Watanabe, Naoto; Tanaka, Tomohiro; Tsujimura, Seiya; Hoshi, Yoshinao; Itagaki, Masayuki

    2017-10-01

    A novel paper-based biofuel cell with a series/parallel array structure has been fabricated, in which the cell voltage and output power can easily be adjusted as required by printing. The output of the fabricated 4-series/4-parallel biofuel cell reached 0.97±0.02 mW at 1.4 V, which is the highest output power reported to date for a paper-based biofuel cell. This work contributes to the development of flexible, wearable energy storage devices.

  15. Bi-directional series-parallel elastic actuator and overlap of the actuation layers.

    PubMed

    Furnémont, Raphaël; Mathijssen, Glenn; Verstraten, Tom; Lefeber, Dirk; Vanderborght, Bram

    2016-01-27

    Several robotics applications require high torque-to-weight ratio and energy efficient actuators. Progress in that direction was made by introducing compliant elements into the actuation. A large variety of actuators were developed, such as series elastic actuators (SEAs), variable stiffness actuators and parallel elastic actuators (PEAs). SEAs can reduce the peak power while PEAs can reduce the torque requirement on the motor. Nonetheless, these actuators still cannot match human-like performance. To combine both advantages, the series parallel elastic actuator (SPEA) was developed. The principle is inspired by biological muscles. Muscles are composed of motor units, placed in parallel, which are variably recruited as the required effort increases. This biological principle is exploited in the SPEA, where springs (layers), placed in parallel, can be recruited one by one. This recruitment is performed by an intermittent mechanism. This paper presents the development of a SPEA using the MACCEPA principle with a self-closing mechanism. This actuator can deliver a bi-directional output torque, variable stiffness and reduced friction. The load on the motor can also be reduced, leading to a lower power consumption. The variable recruitment of the parallel springs can also be tuned in order to further decrease the consumption of the actuator for a given task. First, an explanation of the concept and a brief description of the prior work will be given. Next, the design and the model of one of the layers will be presented. The working principle of the full actuator will then be given. At the end of this paper, experiments showing the electric consumption of the actuator demonstrate the advantage of the SPEA over an equivalent stiff actuator.
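
    The recruitment principle can be illustrated with a simple torque model (assumed stiffness, layer count and recruitment thresholds; not the MACCEPA/SPEA hardware model): parallel spring layers engage one by one as the deflection, and hence the torque demand, grows.

```python
# Illustrative sketch of intermittent recruitment: parallel spring "layers" are
# engaged one by one as the deflection grows, mimicking motor-unit recruitment.
def spea_torque(deflection_rad, n_layers=4, k_layer=2.0, recruit_step_rad=0.1):
    """Torque of a toy series-parallel elastic actuator with staged recruitment.

    Each layer is a torsional spring of stiffness k_layer [Nm/rad] that only
    engages once the deflection exceeds its recruitment threshold (assumed values).
    """
    torque = 0.0
    for layer in range(n_layers):
        engagement = deflection_rad - layer * recruit_step_rad
        if engagement > 0.0:
            torque += k_layer * engagement
    return torque

for x in (0.05, 0.15, 0.25, 0.45):
    print(f"deflection {x:.2f} rad -> torque {spea_torque(x):.2f} Nm")
```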

  16. A study of DC-DC converters with MCT's for arcjet power supplies

    NASA Technical Reports Server (NTRS)

    Stuart, Thomas A.

    1994-01-01

    Many arcjet DC power supplies use PWM full bridge converters with large arrays of parallel FET's. This report investigates an alternative supply using a variable frequency series resonant converter with small arrays of parallel MCT's (metal oxide semiconductor controlled thyristors). The reasons for this approach are to: increase reliability by reducing the number of switching devices; and decrease the surface mounting area of the switching arrays. The variable frequency series resonant approach is used because the relatively slow switching speed of the MCT precludes the use of PWM. The 10 kW converter operated satisfactorily with an efficiency of over 91 percent. Test results indicate this efficiency could be increased further by additional optimization of the series resonant inductor.

  17. Combinatorial Reliability and Repair

    DTIC Science & Technology

    1992-07-01

    Press, Oxford, 1987. [2] G. Gordon and L. Traldi, Generalized activities and the Tutte polynomial, Discrete Math. 85 (1990), 167-176. [3] A. B. Huseby, A...Chromatic polynomials and network reliability, Discrete Math. 67 (1987), 57-79. [7] A. Satayanarayana and R. K. Wood, A linear-time algorithm for computing K-terminal reliability in series-parallel networks, SIAM J. Comput. 14 (1985), 818-832. [8] L. Traldi, Generalized activities and K-terminal reliability, Discrete Math. 96 (1991), 131-149.

  18. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  19. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrate a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model has been developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS module handles large datasets directly via a parallel file system. Although it is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.

  20. Photocapacitive image converter

    NASA Technical Reports Server (NTRS)

    Miller, W. E.; Sher, A.; Tsuo, Y. H. (Inventor)

    1982-01-01

    An apparatus for converting a radiant energy image into corresponding electrical signals including an image converter is described. The image converter includes a substrate of semiconductor material, an insulating layer on the front surface of the substrate, and an electrical contact on the back surface of the substrate. A first series of parallel transparent conductive stripes is on the insulating layer with a processing circuit connected to each of the conductive stripes for detecting the modulated voltages generated thereon. In a first embodiment of the invention, a modulated light stripe perpendicular to the conductive stripes scans the image converter. In a second embodiment a second insulating layer is deposited over the conductive stripes and a second series of parallel transparent conductive stripes perpendicular to the first series is on the second insulating layer. A different frequency current signal is applied to each of the second series of conductive stripes and a modulated image is applied to the image converter.

  1. Statistical properties of Charney-Hasegawa-Mima zonal flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Johan, E-mail: anderson.johan@gmail.com; Botha, G. J. J.

    2015-05-15

    A theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent plasma transport events in unforced zonal flows is provided within the Charney-Hasegawa-Mima (CHM) model. The governing equation is solved numerically with various prescribed density gradients that are designed to produce different configurations of parallel and anti-parallel streams. Long-lasting vortices form whose flow is governed by the zonal streams. It is found that the numerically generated PDFs can be matched with analytical predictions of PDFs based on the instanton method by removing the autocorrelations from the time series. In many instances, the statistics generated by the CHM dynamics relaxes to Gaussian distributions for both the electrostatic and vorticity perturbations, whereas in areas with strong nonlinear interactions it is found that the PDFs are exponentially distributed.

  2. Phase coupling and synchrony in the spatiotemporal dynamics of muskrat and mink populations across Canada

    PubMed Central

    Haydon, D. T.; Stenseth, N. C.; Boyce, M. S.; Greenwood, P. E.

    2001-01-01

    Population ecologists have traditionally focused on the patterns and causes of population variation in the temporal domain for which a substantial body of practical analytic techniques have been developed. More recently, numerous studies have documented how populations may fluctuate synchronously over large spatial areas; analyses of such spatially extended time-series have started to provide additional clues regarding the causes of these population fluctuations and explanations for their synchronous occurrence. Here, we report on the development of a phase-based method for identifying coupling between temporally coincident but spatially distributed cyclic time-series, which we apply to the numbers of muskrat and mink recorded at 81 locations across Canada. The analysis reveals remarkable parallel clines in the strength of coupling between proximate populations of both species—declining from west to east—together with a corresponding increase in observed synchrony between these populations the further east they are located. PMID:11606729

  3. On psychobiology in psychoanalysis - salivary cortisol and secretory IgA as psychoanalytic process parameters

    PubMed Central

    Euler, Sebastian; Schimpf, Heinrich; Hennig, Jürgen; Brosig, Burkhard

    2005-01-01

    This study investigates the psychobiological impact of psychoanalysis in its four-hour setting. During a period of five weeks, 20 consecutive hours of psychoanalysis were evaluated, involving two patients and their analysts. Before and after each session, saliva samples were taken and analysed for cortisol (sCortisol) and secretory immunoglobulin A (sIgA). Four time series (n=80 observations) resulted and were evaluated by "Pooled Time Series Analysis" (PTSA) for significant level changes and setting-mediated rhythms. Over all sessions, sCortisol levels were reduced and sIgA secretion augmented in parallel with the analytic work. In one analytic dyad a significant rhythm within the four-hour setting was observed, with an increase of sCortisol in sessions 2 and 3 of the week. Psychoanalysis may, therefore, have some psychobiological impact on patients and analysts alike and may modulate immunological and endocrinological processes. PMID:19742067

  4. Analysis of the thermal balance characteristics for multiple-connected piezoelectric transformers.

    PubMed

    Park, Joung-Hu; Cho, Bo-Hyung; Choi, Sung-Jin; Lee, Sang-Min

    2009-08-01

    Because the amount of power that a piezoelectric transformer (PT) can handle is limited, multiple connections of PTs are necessary to improve the power capacity of PT applications. In such connections, thermal imbalance between the PTs should be prevented to avoid thermal runaway of each PT. The thermal balance of the multiple-connected PTs is dominantly affected by the electrothermal characteristics of the individual PTs. In this paper, the thermal balance of both parallel-parallel and parallel-series connections is analyzed using electrical model parameters. For quantitative analysis, the thermal-balance effects are estimated by simulating the mechanical loss ratio between the PTs. The analysis results show that, with PTs of similar characteristics, the parallel-series connection has better thermal balance characteristics due to the reduced mechanical loss of the higher-temperature PT. For experimental verification of the analysis, a hardware-prototype test of a Cs-Lp type 40 W adapter system with radial-vibration mode PTs has been performed.

  5. Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging.

    PubMed

    Feng, Xue; Salerno, Michael; Kramer, Christopher M; Meyer, Craig H

    2013-05-01

    In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome, and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and signal-to-noise ratio. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view-sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. Copyright © 2012 Wiley Periodicals, Inc.
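
    The recursion underlying this class of real-time reconstructions is the standard Kalman predict/update step, sketched below with placeholder state and measurement models (F, H, Q, R); the paper's specific Cartesian dynamic model and parallel-imaging combination are not reproduced here.

```python
# Generic Kalman predict/update step; the model matrices below are placeholders,
# not the paper's dynamic MRI model.
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    # Predict the state and its covariance forward one frame.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the newly acquired (e.g. undersampled) measurement y.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```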

  6. Kalman Filter Techniques for Accelerated Cartesian Dynamic Cardiac Imaging

    PubMed Central

    Feng, Xue; Salerno, Michael; Kramer, Christopher M.; Meyer, Craig H.

    2012-01-01

    In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories, because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and SNR. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction. PMID:22926804

  7. High efficiency integration of three-dimensional functional microdevices inside a microfluidic chip by using femtosecond laser multifoci parallel microfabrication

    NASA Astrophysics Data System (ADS)

    Xu, Bing; Du, Wen-Qiang; Li, Jia-Wen; Hu, Yan-Lei; Yang, Liang; Zhang, Chen-Chu; Li, Guo-Qiang; Lao, Zhao-Xin; Ni, Jin-Cheng; Chu, Jia-Ru; Wu, Dong; Liu, Su-Ling; Sugioka, Koji

    2016-01-01

    High efficiency fabrication and integration of three-dimensional (3D) functional devices in Lab-on-a-chip systems are crucial for microfluidic applications. Here, a spatial light modulator (SLM)-based multifoci parallel femtosecond laser scanning technology is proposed to integrate microstructures inside a given 'Y'-shaped microchannel. The key novelty of our approach lies in rapidly integrating 3D microdevices inside a microchip for the first time, which significantly reduces the fabrication time. The high quality integration of various 2D-3D microstructures was ensured by quantitatively optimizing the experimental conditions, including prebaking time, laser power and developing time. To verify the designable and versatile capability of this method for integrating functional 3D microdevices in a microchannel, a series of microfilters with adjustable pore sizes from 12.2 μm to 6.7 μm were fabricated to demonstrate selective filtering of polystyrene (PS) particles and cancer cells of different sizes. The filter can be cleaned by reversing the flow and reused many times. This technology will advance the fabrication of 3D integrated microfluidic and optofluidic chips.

  8. Local Variation of Hashtag Spike Trains and Popularity in Twitter

    PubMed Central

    Sanlı, Ceyda; Lambiotte, Renaud

    2015-01-01

    We draw a parallel between hashtag time series and neuron spike trains. In each case, the process presents complex dynamic patterns including temporal correlations, burstiness, and other types of nonstationarity. We propose the adoption of the so-called local variation in order to uncover salient dynamical properties, while properly detrending for the time-dependent features of a signal. The methodology is tested on both real and randomized hashtag spike trains, and shows that popular hashtags present more regular, and hence less bursty, behavior, suggesting its potential use for predicting online popularity in social media. PMID:26161650

  9. The soliton transform and a possible application to nonlinear Alfven waves in space

    NASA Technical Reports Server (NTRS)

    Hada, T.; Hamilton, R. L.; Kennel, C. F.

    1993-01-01

    The inverse scattering transform (IST) based on the derivative nonlinear Schroedinger (DNLS) equation is applied to a complex time series of nonlinear Alfven wave data generated by numerical simulation. The IST describes the long-time evolution of quasi-parallel Alfven waves more efficiently than the Fourier transform, which is adapted to linear rather than nonlinear problems. When dissipation is added, so the conditions for the validity of the DNLS are not strictly satisfied, the IST continues to provide a compact description of the wavefield in terms of a small number of decaying envelope solitons.

  10. Topology of polymer chains under nanoscale confinement.

    PubMed

    Satarifard, Vahid; Heidari, Maziar; Mashaghi, Samaneh; Tans, Sander J; Ejtehadi, Mohammad Reza; Mashaghi, Alireza

    2017-08-24

    Spatial confinement limits the conformational space accessible to biomolecules, but the implications for biomolecular topology are not yet known. Folded linear biopolymers can be seen as molecular circuits formed by intramolecular contacts. The pairwise arrangement of intra-chain contacts can be categorized as parallel, series or cross, and has been identified as a topological property. Using molecular dynamics simulations, we determine the contact order distributions and topological circuits of short semi-flexible linear and ring polymer chains with a persistence length of l_p under a spherical confinement of radius R_c. At low values of l_p/R_c, the entropy of the linear chain leads to the formation of independent contacts along the chain and accordingly increases the fraction of series topology with respect to the other topologies. However, at high l_p/R_c, the fractions of cross and parallel topologies are enhanced in the chain topological circuits, with cross becoming predominant. At an intermediate confining regime, we identify a critical value of l_p/R_c at which all topological states have equal probability. Confinement thus equalizes the probability of the more complex cross and parallel topologies to the level of the more simple, non-cooperative series topology. Moreover, our topology analysis reveals distinct behaviours for ring and linear polymers under weak confinement; however, we find no difference between ring and linear polymers under strong confinement. Under weak confinement, ring polymers adopt parallel and series topologies with equal likelihood, while linear polymers show a higher tendency for the series arrangement. The radial distribution analysis of the topology reveals a non-uniform effect of confinement on the topology of polymer chains, imposing more pronounced effects on the core region than on the confinement surface. Additionally, our results reveal that over a wide range of confining radii, loops arranged in parallel and cross topologies have nearly the same contact orders. Such degeneracy implies that the kinetics and transition rates between the topological states cannot be solely explained by contact order. We expect these findings to be of general importance in understanding chaperone-assisted protein folding, chromosome architecture, and the evolution of molecular folds.
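
    The pairwise classification used in such circuit-topology analyses can be sketched as an interval test on two contacts (boundary conventions are simplified here): disjoint intervals are in series, nested intervals are in parallel, and partially overlapping intervals cross.

```python
# Sketch of pairwise contact classification: two intra-chain contacts (i, j) and
# (k, l), with i < j and k < l, are in "series" if the intervals are disjoint,
# "parallel" if one is nested inside the other, and "cross" if they overlap partially.
def pair_topology(c1, c2):
    (i, j), (k, l) = sorted([tuple(sorted(c1)), tuple(sorted(c2))])
    if j <= k:                 # intervals do not overlap
        return "series"
    if l <= j:                 # second interval nested inside the first
        return "parallel"
    return "cross"             # partial overlap

print(pair_topology((2, 10), (15, 30)))   # series
print(pair_topology((2, 30), (10, 20)))   # parallel
print(pair_topology((2, 20), (10, 30)))   # cross
```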

  11. Describing, using 'recognition cones'. [parallel-series model with English-like computer program]

    NASA Technical Reports Server (NTRS)

    Uhr, L.

    1973-01-01

    A parallel-serial 'recognition cone' model is examined, taking into account the model's ability to describe scenes of objects. An actual program is presented in an English-like language. The concept of a 'description' is discussed together with possible types of descriptive information. Questions regarding the level and the variety of detail are considered along with approaches for improving the serial representations of parallel systems.

  12. Exo-reversible staging of coolers in series and in parallel

    NASA Astrophysics Data System (ADS)

    Maytal, Ben-Zion

    2017-10-01

    Serial and parallel staging of exo-reversible coolers are formulated, analyzed and compared. The parallel staging includes an extensive parameter, namely the proportion of the combined stages. This extensive free parameter affects the intensive factors of specific power and figure of merit. Serial staging reduces the 1st Law efficiency and parallel staging improves the 2nd Law efficiency. A comparison of parallel with serial staging under a common cooling capacity and cooling range shows that it is always possible to find a parallel arrangement that has lower specific power and is more compact. Some results are demonstrated for the staging of Joule-Thomson cryocoolers (below and above the Joule-Thomson inversion temperature).

  13. Mapping the structure of the world economy.

    PubMed

    Lenzen, Manfred; Kanemoto, Keiichiro; Moran, Daniel; Geschke, Arne

    2012-08-07

    We have developed a new series of environmentally extended multi-region input-output (MRIO) tables with applications in carbon, water, and ecological footprinting, and Life-Cycle Assessment, as well as trend and key driver analyses. Such applications have recently been at the forefront of global policy debates, such as about assigning responsibility for emissions embodied in internationally traded products. The new time series was constructed using advanced parallelized supercomputing resources, and significantly advances the previous state of art because of four innovations. First, it is available as a continuous 20-year time series of MRIO tables. Second, it distinguishes 187 individual countries comprising more than 15,000 industry sectors, and hence offers unsurpassed detail. Third, it provides information just 1-3 years delayed therefore significantly improving timeliness. Fourth, it presents MRIO elements with accompanying standard deviations in order to allow users to understand the reliability of data. These advances will lead to material improvements in the capability of applications that rely on input-output tables. The timeliness of information means that analyses are more relevant to current policy questions. The continuity of the time series enables the robust identification of key trends and drivers of global environmental change. The high country and sector detail drastically improves the resolution of Life-Cycle Assessments. Finally, the availability of information on uncertainty allows policy-makers to quantitatively judge the level of confidence that can be placed in the results of analyses.

  14. Equivalent circuit for the characterization of the resonance mode in piezoelectric systems

    NASA Astrophysics Data System (ADS)

    Fernández-Afonso, Y.; García-Zaldívar, O.; Calderón-Piñar, F.

    2015-12-01

    The impedance properties of polarized piezoelectric materials can be described by electric equivalent circuits. The classic circuit used in the literature to describe real systems is formed by one resistor (R), one inductance (L) and one capacitance (C) connected in series, plus one capacitance (C0) connected in parallel with the former. Nevertheless, the equations that describe the resonance and anti-resonance frequencies depend in a complex manner on R, L, C and C0. In this work a simpler model is proposed, formed by one inductance (L) and one capacitance (C) in series, one capacitance (C0) in parallel, one resistor (RP) in parallel and one resistor (RS) in series with the other components. Unlike in the traditional circuit, the equivalent circuit elements in the proposed model can be simply determined by knowing the experimental values of the resonance frequency fr, anti-resonance frequency fa, impedance modulus at the resonance frequency |Zr|, impedance modulus at the anti-resonance frequency |Za| and the low frequency capacitance C0, without fitting the experimental impedance data to the obtained equation.
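
    The classic circuit described above (a series R-L-C branch shunted by C0) can be evaluated numerically to locate the resonance and anti-resonance; the component values below are illustrative only.

```python
# Impedance of the classic equivalent circuit (series R-L-C shunted by C0):
# the resonance is near the impedance minimum, the anti-resonance near the maximum.
import numpy as np

R, L, C, C0 = 20.0, 10e-3, 1e-9, 5e-9       # assumed component values
f = np.linspace(10e3, 200e3, 200000)
w = 2 * np.pi * f

z_motional = R + 1j * w * L + 1.0 / (1j * w * C)     # series RLC branch
z_total = 1.0 / (1.0 / z_motional + 1j * w * C0)     # shunted by C0

f_r = f[np.argmin(np.abs(z_total))]     # resonance: impedance minimum
f_a = f[np.argmax(np.abs(z_total))]     # anti-resonance: impedance maximum
print(f"f_r ~ {f_r/1e3:.1f} kHz, f_a ~ {f_a/1e3:.1f} kHz")
print(f"series-branch estimate 1/(2*pi*sqrt(L*C)) = {1/(2*np.pi*np.sqrt(L*C))/1e3:.1f} kHz")
```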

  15. Effect of Cooling Units on the Performance of an Automotive Exhaust-Based Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Su, C. Q.; Zhu, D. C.; Deng, Y. D.; Wang, Y. P.; Liu, X.

    2017-05-01

    Currently, automotive exhaust-based thermoelectric generators (AETEGs) are a hot topic in energy recovery. In order to investigate the influence of coolant flow rate, coolant flow direction and cooling unit arrangement in the AETEG, a thermoelectric generator (TEG) model and a related test bench are constructed. Water cooling is adopted in this study. Due to the non-uniformity of the surface temperature of the heat source, the coolant flow direction would affect the output performance of the TEG. Changing the volumetric flow rate of coolant can increase the output power of multi-modules connected in series or/and parallel as it can improve the temperature uniformity of the cooling unit. Since the temperature uniformity of the cooling unit has a strong influence on the output power, two cooling units are connected in series or parallel to research the effect of cooling unit arrangements on the maximum output power of the TEG. Experimental and theoretical analyses reveal that the net output power is generally higher with cooling units connected in parallel than cooling units connected in series in the cooling system with two cooling units.

  16. Vibration energy harvesting using a piezoelectric circular diaphragm array.

    PubMed

    Wang, Wei; Yang, Tongqing; Chen, Xurui; Yao, Xi

    2012-09-01

    This paper presents a method for harvesting electric energy from mechanical vibration using a mechanically excited piezoelectric circular membrane array. The piezoelectric circular diaphragm array consists of four plates with series and parallel connection, and the electrical characteristics of the array are examined under dynamic conditions. With an optimal load resistor of 160 kΩ, an output power of 28 mW was generated from the array in series connection at 150 Hz under a prestress of 0.8 N and a vibration acceleration of 9.8 m/s(2), whereas a maximal output power of 27 mW can be obtained from the array in parallel connection through a resistive load of 11 kΩ under the same frequency, prestress, and acceleration conditions. The results show that using a piezoelectric circular diaphragm array can significantly increase the output of energy compared with the use of a single plate. By choosing an appropriate connection pattern (series or parallel connections) among the plates, the equivalent impedance of the energy harvesting devices can be tailored to meet the matched load of different applications for maximal power output.
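
    A back-of-envelope way to see why the matched load shifts with the connection pattern (idealized identical plates; the n-squared rule is an assumption of this sketch, not a result from the paper): a series connection multiplies the per-plate source impedance by n, a parallel connection divides it by n, while the maximum available power stays roughly the same.

```python
# Compare the reported optimal loads with the ideal n**2 scaling expected for
# n identical plates connected in series vs. in parallel.
n = 4
r_load_series, r_load_parallel = 160e3, 11e3          # reported optimal loads [ohm]
print(f"measured load ratio  : {r_load_series / r_load_parallel:.1f}")
print(f"ideal n**2 prediction: {n**2}")
```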

  17. Numerical approach of the quantum circuit theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, J.J.B., E-mail: jaedsonfisica@hotmail.com; Duarte-Filho, G.C.; Almeida, F.A.G.

    2017-03-15

    In this paper we develop a numerical method based on the quantum circuit theory to approach the coherent electronic transport in a network of quantum dots connected with arbitrary topology. The algorithm was employed in a circuit formed by quantum dots connected to each other in the shape of a linear chain (associations in series), and of a ring (associations in series and in parallel). For both systems we compute two current observables: conductance and shot noise power. We find an excellent agreement between our numerical results and the ones found in the literature. Moreover, we analyze the algorithm efficiency for a chain of quantum dots, where the mean processing time exhibits a linear dependence on the number of quantum dots in the array.

  18. Multiple timescales of cyclical behaviour observed at two dome-forming eruptions

    NASA Astrophysics Data System (ADS)

    Lamb, Oliver D.; Varley, Nick R.; Mather, Tamsin A.; Pyle, David M.; Smith, Patrick J.; Liu, Emma J.

    2014-09-01

    Cyclic behaviour over a range of timescales is a well-documented feature of many dome-forming volcanoes, but has not previously been identified in high resolution seismic data from Volcán de Colima (Mexico). Using daily seismic count datasets from Volcán de Colima and Soufrière Hills volcano (Montserrat), this study explores parallels in the long-term behaviour of seismicity at two long-lived systems. Datasets are examined using multiple techniques, including Fast Fourier Transform, Detrended Fluctuation Analysis and Probabilistic Distribution Analysis, and the comparison of results from the two systems reveals interesting parallels in sub-surface processes operating at both. Patterns of seismicity at both systems reveal complex but broadly similar long-term temporal patterns, with cycles on the order of ~50 to ~200 days. These patterns are consistent with previously published spectral analyses of SO2 flux time series at Soufrière Hills volcano, and are attributed to variations in the movement of magma in each system. Detrended Fluctuation Analysis determined that both volcanic systems showed a systematic relationship between the number of seismic events and the relative 'roughness' of the time series, and explosions at Volcán de Colima showed a 1.5-2 year cycle; neither observation has a clear explanatory mechanism. At Volcán de Colima, analysis of repose intervals between seismic events shows long-term behaviour that responds to changes in activity at the system. Similar patterns for both volcanic systems suggest a common process or processes driving the observed signal, but it is not clear from these results alone what those processes may be. Further attempts to model conduit processes at each volcano must account for the similarities and differences in activity within each system. The identification of some commonalities in the patterns of behaviour during long-lived dome-forming eruptions at andesitic volcanoes provides a motivation for investigating further use of time-series analysis as a monitoring tool.
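
    A generic periodogram check of the kind described above can be sketched as follows; the synthetic daily count series (with a planted 120-day cycle) stands in for the real seismic count data.

```python
# Look for multi-week to multi-month cycles in a daily event-count series by
# inspecting the periodogram in the 50-200 day band.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(3000)
counts = 20 + 5 * np.sin(2 * np.pi * days / 120) + rng.poisson(4, days.size)

x = counts - counts.mean()
power = np.abs(np.fft.rfft(x)) ** 2
freq = np.fft.rfftfreq(x.size, d=1.0)          # cycles per day
period = 1.0 / freq[1:]                         # skip the zero frequency

band = (period >= 50) & (period <= 200)
best = period[band][np.argmax(power[1:][band])]
print(f"strongest period in the 50-200 day band: ~{best:.0f} days")
```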

  19. Massive Cloud Computing Processing of P-SBAS Time Series for Displacement Analyses at Large Spatial Scale

    NASA Astrophysics Data System (ADS)

    Casu, F.; de Luca, C.; Lanari, R.; Manunta, M.; Zinno, I.

    2016-12-01

    A methodology for computing surface deformation time series and mean velocity maps of large areas is presented. Our approach relies on the availability of a multi-temporal set of Synthetic Aperture Radar (SAR) data collected from ascending and descending orbits over an area of interest, and also permits estimation of the vertical and horizontal (East-West) displacement components of the Earth's surface. The adopted methodology is based on an advanced Cloud Computing implementation of the Differential SAR Interferometry (DInSAR) Parallel Small Baseline Subset (P-SBAS) processing chain, which allows the unsupervised processing of large SAR data volumes, from the raw data (level-0) imagery up to the generation of DInSAR time series and maps. The presented solution, which is highly scalable, has been tested on the ascending and descending ENVISAT SAR archives, which have been acquired over a large area of Southern California (US) that extends for about 90,000 km2. This input dataset has been processed in parallel by exploiting 280 computing nodes of the Amazon Web Services Cloud environment. Moreover, to produce the final mean deformation velocity maps of the vertical and East-West displacement components of the whole investigated area, we also took advantage of the information available from external GPS measurements, which makes it possible to account for possible regional trends not easily detectable by DInSAR and to reference the P-SBAS measurements to an external geodetic datum. The presented results clearly demonstrate the effectiveness of the proposed approach, which paves the way to the extensive use of the available ERS and ENVISAT SAR data archives. Furthermore, the proposed methodology is particularly suitable for dealing with the very large data flow provided by the Sentinel-1 constellation, thus permitting the DInSAR analyses to be extended to a nearly global scale. This work is partially supported by: the DPC-CNR agreement, the EPOS-IP project and the ESA GEP project.

  20. A Two-dimensional Version of the Niblett-Bostick Transformation for Magnetotelluric Interpretations

    NASA Astrophysics Data System (ADS)

    Esparza, F.

    2005-05-01

    An imaging technique for two-dimensional magnetotelluric interpretations is developed following the well known Niblett-Bostick transformation for one-dimensional profiles. The algorithm uses a Hopfield artificial neural network to process series and parallel magnetotelluric impedances along with their analytical influence functions. The adaptive, weighted average approximation preserves part of the nonlinearity of the original problem. No initial model in the usual sense is required for the recovery of a functional model. Rather, the built-in relationship between model and data considers automatically, all at the same time, many half spaces whose electrical conductivities vary according to the data. The use of series and parallel impedances, a self-contained pair of invariants of the impedance tensor, avoids the need to decide on best angles of rotation for TE and TM separations. Field data from a given profile can thus be fed directly into the algorithm without much processing. The solutions offered by the Hopfield neural network correspond to spatial averages computed through rectangular windows that can be chosen at will. Applications of the algorithm to simple synthetic models and to the COPROD2 data set illustrate the performance of the approximation.
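
    For context, the classic one-dimensional Niblett-Bostick transformation that the two-dimensional scheme generalizes maps apparent resistivity versus period into resistivity versus an approximate penetration depth; the input arrays below are synthetic placeholders.

```python
# Classic 1-D Niblett-Bostick transformation: rho_a(T) -> rho_NB(depth).
import numpy as np

MU0 = 4e-7 * np.pi

def niblett_bostick_1d(period_s, rho_apparent):
    """Return (depth [m], transformed resistivity [ohm m]) for a 1-D sounding."""
    depth = np.sqrt(rho_apparent * period_s / (2 * np.pi * MU0))
    # Logarithmic slope m = d log(rho_a) / d log(T), estimated numerically.
    m = np.gradient(np.log(rho_apparent), np.log(period_s))
    rho_nb = rho_apparent * (1 + m) / (1 - m)
    return depth, rho_nb

T = np.logspace(-2, 3, 60)                       # periods from 0.01 s to 1000 s
rho_a = 100.0 * (1.0 + 0.5 * np.log10(T) ** 2)   # synthetic apparent resistivity
z, rho = niblett_bostick_1d(T, rho_a)
print(f"depth range ~ {z.min():.0f} m to {z.max()/1000:.0f} km")
```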

  1. Internal viscoelastic loading in cat papillary muscle.

    PubMed Central

    Chiu, Y L; Ballou, E W; Ford, L E

    1982-01-01

    The passive mechanical properties of myocardium were defined by measuring force responses to rapid length ramps applied to unstimulated cat papillary muscles. The immediate force changes following these ramps recovered partially to their initial value, suggesting a series combination of a viscous element and a spring. Because the stretched muscle can bear force at rest, the viscous element must be in parallel with an additional spring. The instantaneous extension-force curves measured at different lengths were nonlinear, and could be made to superimpose by a simple horizontal shift. This finding suggests that the same spring was being measured at each length, and that this spring was in series with both the viscous element and its parallel spring (Voigt configuration), so that the parallel spring is held nearly rigid by the viscous element during rapid steps. The series spring in the passive muscle could account for most of the series elastic recoil in the active muscle, suggesting that the same spring is in series with both the contractile elements and the viscous element. It is postulated that the viscous element might be coupled to the contractile elements by a compliance, so that the load imposed on the contractile elements by the passive structures is viscoelastic rather than purely viscous. Such a viscoelastic load would give the muscle a length-independent, early diastolic restoring force. The possibility is discussed that the length-independent restoring force would allow some of the energy liberated during active shortening to be stored and released during relaxation. PMID:7171707

  2. Research on battery-operated electric road vehicles

    NASA Technical Reports Server (NTRS)

    Varpetian, V. S.

    1977-01-01

    Mathematical analysis of battery-operated electric vehicles is presented. Attention is focused on assessing the influence of the battery on the mechanical and dynamical characteristics of dc electric motors with series and parallel excitation, as well as on evaluating the influence of the excitation mode and speed control system on the performance of the battery. The superiority of series excitation over parallel excitation with respect to vehicle performance is demonstrated. It is also shown that pulsed control of the electric motor, as compared to potentiometric control, provides a more effective use of the battery and decreases the cost of recharging.

  3. Experimental Study of a Pack of Supercapacitors Used in Electric Vehicles.

    PubMed

    Mansour, Amari; Mohamed Hedi, Chabchoub; Faouzi, Bacha

    2017-01-01

    Electric vehicles have recently attracted research interest. An electric vehicle is composed of two energy sources, such as fuel cells and ultracapacitors, which are employed to provide, respectively, the steady-state and transient power demanded by the vehicle. A bidirectional DC-DC converter is needed to interface the ultracapacitor to a DC bus. The ultracapacitor pack consists of many cells in series and possibly also in parallel. In this regard, this paper introduces a comparative study between two packs of supercapacitors. The first supercapacitor pack is composed of ten cells in series, while the second supercapacitor pack is composed of five cells in series and two parallel circuits. Each cell is rated at 2.5 V and 100 F. A number of practical tests are presented.
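
    The two pack layouts compare as follows at the terminal level (ideal identical cells assumed): the series string adds voltage and divides capacitance, the series-parallel pack trades voltage for capacitance, and both store the same total energy.

```python
# Terminal voltage, capacitance and stored energy of the two packs built from
# identical 2.5 V / 100 F cells.
def pack(n_series, n_parallel, v_cell=2.5, c_cell=100.0):
    v = n_series * v_cell
    c = c_cell * n_parallel / n_series
    return v, c, 0.5 * c * v**2          # energy in joules

for label, ns, n_par in (("10s1p", 10, 1), ("5s2p", 5, 2)):
    v, c, e = pack(ns, n_par)
    print(f"{label}: {v:.1f} V, {c:.0f} F, {e/1000:.3f} kJ")
```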

  4. Experimental Study of a Pack of Supercapacitors Used in Electric Vehicles

    PubMed Central

    Mohamed Hedi, Chabchoub

    2017-01-01

    Electric vehicles have recently attracted research interest. An electric vehicle is composed of two energy sources, such as fuel cells and ultracapacitors, which are employed to provide, respectively, the steady-state and transient power demanded by the vehicle. A bidirectional DC-DC converter is needed to interface the ultracapacitor to a DC bus. The ultracapacitor pack consists of many cells in series and possibly also in parallel. In this regard, this paper introduces a comparative study between two packs of supercapacitors. The first supercapacitor pack is composed of ten cells in series, while the second supercapacitor pack is composed of five cells in series and two parallel circuits. Each cell is rated at 2.5 V and 100 F. A number of practical tests are presented. PMID:28894785

  5. Low bias negative differential conductance and reversal of current in coupled quantum dots in different topological configurations

    NASA Astrophysics Data System (ADS)

    Devi, Sushila; Brogi, B. B.; Ahluwalia, P. K.; Chand, S.

    2018-06-01

    Electronic transport through an asymmetric parallel coupled quantum dot system hybridized between normal leads has been investigated theoretically in the Coulomb blockade regime by using the Non-Equilibrium Green Function formalism. A new decoupling scheme proposed by Rabani and co-workers has been adopted to close the chain of higher order Green's functions appearing in the equations of motion. For the resonant tunneling case, calculations of the current and differential conductance are presented during the transition of the coupled quantum dot system from the series to the symmetric parallel configuration. It is found that during this transition the current and differential conductance of the system increase. Furthermore, clear signatures of negative differential conductance and negative current appear in the series case, both of which disappear when the topology of the system is tuned to the asymmetric parallel configuration.

  6. Optimal Reorganization of NASA Earth Science Data for Enhanced Accessibility and Usability for the Hydrology Community

    NASA Technical Reports Server (NTRS)

    Teng, William; Rui, Hualan; Strub, Richard; Vollmer, Bruce

    2016-01-01

    A long-standing "Digital Divide" in data representation exists between the preferred way of data access by the hydrology community and the common way of data archival by earth science data centers. Typically, in hydrology, earth surface features are expressed as discrete spatial objects (e.g., watersheds), and time-varying data are contained in associated time series. Data in earth science archives, although stored as discrete values (of satellite swath pixels or geographical grids), represent continuous spatial fields, one file per time step. This Divide has been an obstacle, specifically, between the Consortium of Universities for the Advancement of Hydrologic Science, Inc. and NASA earth science data systems. In essence, the way data are archived is conceptually orthogonal to the desired method of access. Our recent work has shown an optimal method of bridging the Divide, by enabling operational access to long-time series (e.g., 36 years of hourly data) of selected NASA datasets. These time series, which we have termed "data rods," are pre-generated or generated on-the-fly. This optimal solution was arrived at after extensive investigations of various approaches, including one based on "data curtains." The on-the-fly generation of data rods uses "data cubes," NASA Giovanni, and parallel processing. The optimal reorganization of NASA earth science data has significantly enhanced the access to and use of the data for the hydrology user community.

  7. Flat-plate photovoltaic array design optimization

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1980-01-01

    An analysis is presented which integrates the results of specific studies in the areas of photovoltaic structural design optimization, optimization of array series/parallel circuit design, thermal design optimization, and optimization of environmental protection features. The analysis is based on minimizing the total photovoltaic system life-cycle energy cost including repair and replacement of failed cells and modules. This approach is shown to be a useful technique for array optimization, particularly when time-dependent parameters such as array degradation and maintenance are involved.

  8. Measuring Feedforward Inhibition and Its Impact on Local Circuit Function.

    PubMed

    Hull, Court

    2017-05-01

    This protocol describes a series of approaches to measure feedforward inhibition in acute brain slices from the cerebellar cortex. Using whole-cell voltage and current clamp recordings from Purkinje cells in conjunction with electrical stimulation of the parallel fibers, these methods demonstrate how to measure the relationship between excitation and inhibition in a feedforward circuit. This protocol also describes how to measure the impact of feedforward inhibition on Purkinje cell excitability, with an emphasis on spike timing. © 2017 Cold Spring Harbor Laboratory Press.

  9. Silicon-fiber blanket solar-cell array concept

    NASA Technical Reports Server (NTRS)

    Eliason, J. T.

    1973-01-01

    Proposed economical manufacture of solar-cell arrays involves parallel, planar weaving of filaments made of doped silicon fibers with diffused radial junction. Each filament is a solar cell connected either in series or parallel with others to form a blanket of deposited grids or attached electrode wire mesh screens.

  10. Design of a switch matrix gate/bulk driver controller for thin film lithium microbatteries using microwave SOI technology

    NASA Technical Reports Server (NTRS)

    Whitacre, J.; West, W. C.; Mojarradi, M.; Sukumar, V.; Hess, H.; Li, H.; Buck, K.; Cox, D.; Alahmad, M.; Zghoul, F. N.

    2003-01-01

    This paper presents a design approach to help attain any random grouping pattern between the microbatteries. In this case, the result is an ability to charge microbatteries in parallel and to discharge microbatteries in parallel or pairs of microbatteries in series.

  11. Fast generation of computer-generated hologram by graphics processing unit

    NASA Astrophysics Data System (ADS)

    Matsuda, Sho; Fujii, Tomohiko; Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2009-02-01

    A cylindrical hologram is well known to be viewable over 360 deg. Such a hologram requires very high pixel resolution, so a computer-generated cylindrical hologram (CGCH) requires a huge amount of calculation. In our previous research, we used a look-up table method for fast calculation on an Intel Pentium 4 at 2.8 GHz; it took 480 hours to calculate a high resolution CGCH (504,000 x 63,000 pixels, with an average of 27,000 object points). To improve the quality of the reconstructed CGCH image, the fringe pattern requires higher spatial frequency and resolution. Therefore, to increase the calculation speed, we have to change the calculation method. In this paper, to reduce the calculation time of a CGCH (912,000 x 108,000 pixels), we employ a graphics processing unit (GPU); it took 4,406 hours to calculate this high resolution CGCH on a Xeon at 3.4 GHz. Since a GPU has many streaming processors and a parallel processing structure, it works as a high performance parallel processor. In addition, a GPU gives maximum performance on two-dimensional and streaming data. Recently, GPUs have become usable for general-purpose computation (GPGPU); for example, NVIDIA's GeForce 7 series became programmable with the Cg programming language, and the subsequent GeForce 8 series has CUDA as a software development kit from NVIDIA. Theoretically, the calculation ability of the GPU is quoted as 500 GFLOPS. From the experimental results, we achieved a calculation 47 times faster than our previous CPU-based work. Therefore, the CGCH can be generated in 95 hours, and the total time to calculate and print the CGCH is 110 hours.
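
    The computational core being accelerated is a point-source fringe summation; a NumPy version is sketched below as a CPU stand-in for the GPU kernel, with illustrative geometry rather than the paper's cylindrical hologram setup.

```python
# Generic point-source hologram fringe computation: sum a spherical wave from
# each object point over the hologram plane. Geometry and point cloud are
# illustrative placeholders.
import numpy as np

wavelength = 633e-9
k = 2 * np.pi / wavelength
pixel_pitch = 1e-6

ny, nx = 512, 512                                   # tiny hologram for the sketch
y, x = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
x = (x - nx / 2) * pixel_pitch
y = (y - ny / 2) * pixel_pitch

rng = np.random.default_rng(1)
points = rng.uniform([-1e-4, -1e-4, 0.05], [1e-4, 1e-4, 0.10], size=(200, 3))

fringe = np.zeros((ny, nx))
for px, py, pz in points:                           # one spherical wave per object point
    r = np.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
    fringe += np.cos(k * r) / r
```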

  12. Design and Fabrication of TES Detector Modules for the TIME-Pilot [CII] Intensity Mapping Experiment

    NASA Astrophysics Data System (ADS)

    Hunacek, J.; Bock, J.; Bradford, C. M.; Bumble, B.; Chang, T.-C.; Cheng, Y.-T.; Cooray, A.; Crites, A.; Hailey-Dunsheath, S.; Gong, Y.; Kenyon, M.; Koch, P.; Li, C.-T.; O'Brient, R.; Shirokoff, E.; Shiu, C.; Staniszewski, Z.; Uzgil, B.; Zemcov, M.

    2016-08-01

    We are developing a series of close-packed modular detector arrays for TIME-Pilot, a new mm-wavelength grating spectrometer array that will map the intensity fluctuations of the redshifted 157.7 μm emission line of singly ionized carbon ([CII]) from redshift z ~ 5 to 9. TIME-Pilot's two banks of 16 parallel-plate waveguide spectrometers (one bank per polarization) will have a spectral range of 183-326 GHz and a resolving power of R ~ 100. The spectrometers use a curved diffraction grating to disperse and focus the light on a series of output arcs, each sampled by 60 transition edge sensor (TES) bolometers with gold micro-mesh absorbers. These low-noise detectors will be operated from a 250 mK base temperature and are designed to have a background-limited NEP of ~10^-17 W/Hz^(1/2). This proceeding presents an overview of the detector design in the context of the TIME-Pilot instrument. Additionally, a prototype detector module produced at the Microdevices Laboratory at JPL is shown.

  13. GRASS GIS: The first Open Source Temporal GIS

    NASA Astrophysics Data System (ADS)

    Gebbert, Sören; Leppelt, Thomas

    2015-04-01

    GRASS GIS is a full featured, general purpose Open Source geographic information system (GIS) with raster, 3D raster and vector processing support [1]. Recently, time was introduced as a new dimension that transformed GRASS GIS into the first Open Source temporal GIS with comprehensive spatio-temporal analysis, processing and visualization capabilities [2]. New spatio-temporal data types were introduced in GRASS GIS version 7 to manage raster, 3D raster and vector time series. These new data types are called space time datasets. They are designed to efficiently handle hundreds of thousands of time stamped raster, 3D raster and vector map layers of any size. Time stamps can be defined as time intervals or time instances in Gregorian calendar time or relative time. Space time datasets simplify the processing and analysis of large time series in GRASS GIS, since these new data types are used as input and output parameters in temporal modules. The handling of space time datasets is therefore equal to the handling of raster, 3D raster and vector map layers in GRASS GIS. A new dedicated Python library, the GRASS GIS Temporal Framework, was designed to implement the spatio-temporal data types and their management. The framework provides the functionality to efficiently handle hundreds of thousands of time stamped map layers and their spatio-temporal topological relations. The framework supports reasoning based on the temporal granularity of space time datasets as well as their temporal topology. It was designed in conjunction with the PyGRASS [3] library to support parallel processing of large datasets, which has a long tradition in GRASS GIS [4,5]. We will present a subset of more than 40 temporal modules that were implemented based on the GRASS GIS Temporal Framework, PyGRASS and the GRASS GIS Python scripting library. These modules provide a comprehensive temporal GIS tool set. The functionality ranges from space time dataset and time stamped map layer management over temporal aggregation, temporal accumulation, spatio-temporal statistics, spatio-temporal sampling, temporal algebra, temporal topology analysis, time series animation and temporal topology visualization to time series import and export capabilities with support for NetCDF and VTK data formats. We will present several temporal modules that support parallel processing of raster and 3D raster time series. [1] Neteler, M., Beaudette, D., Cavallini, P., Lami, L., Cepicky, J.: GRASS GIS. In: Hall, G.B., Leahy, M.G. (eds.), Open Source Approaches in Spatial Data Handling, Vol. 2 (2008), pp. 171-199, doi:10.1007/978-3-540-74831-19. [2] Gebbert, S., Pebesma, E., 2014. A temporal GIS for field based environmental modeling. Environ. Model. Softw. 53, 1-12. [3] Zambelli, P., Gebbert, S., Ciolli, M., 2013. Pygrass: An Object Oriented Python Application Programming Interface (API) for Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS). ISPRS Intl Journal of Geo-Information 2, 201-219. [4] Löwe, P., Klump, J., Thaler, J. (2012): The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster, Geophysical Research Abstracts Vol. 14, EGU2012-4491, General Assembly European Geosciences Union (Vienna, Austria 2012). [5] Akhter, S., Aida, K., Chemin, Y., 2010. "GRASS GIS on High Performance Computing with MPI, OpenMP and Ninf-G Programming Framework". ISPRS Conference, Kyoto, 9-12 August 2010.

  14. Parallel-distributed mobile robot simulator

    NASA Astrophysics Data System (ADS)

    Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo

    1996-06-01

    The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel distributed mobile robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely; this is how the system learns and grows. It is very important that such a simulation is time-realistic. The parallel distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.

  15. Parallel tempering for the traveling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percus, Allon; Wang, Richard; Hyman, Jeffrey

    We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximations to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.

  16. Series Transmission Line Transformer

    DOEpatents

    Buckles, Robert A.; Booth, Rex; Yen, Boris T.

    2004-06-29

    A series transmission line transformer is set forth which includes two or more impedance-matched sets of at least two transmission lines, such as shielded cables, connected in parallel at one end and in series at the other in a cascading fashion. The cables are wound about a magnetic core. The series transmission line transformer (STLT) can provide higher impedance ratios and bandwidths, is scalable, and is of simpler design and construction.

  17. Reliability Analysis and Modeling of ZigBee Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adopted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important, because these services will stop if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree and mesh. This paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. For the star and tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve the reliability problem. For the mesh network, a division technique is applied because its complexity is higher than that of the others: the mesh network is decomposed into several non-reducible series systems and edge-parallel systems, so its reliability is easily obtained from series-parallel systems through the proposed scheme. The numerical results demonstrate that reliability increases for mesh networks when the number of edges in the parallel systems increases, while reliability drops quickly when the number of edges and the number of nodes increase for all three networks. Greater resource usage is another factor that decreases reliability. Lower network reliability therefore results from network complexity, greater resource usage and complex object relationships.
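
    As a sketch of the series-parallel reliability arithmetic underlying the RBD approach described above (the component reliabilities below are invented for illustration and are not taken from the paper):

```python
import numpy as np

def series_reliability(r):
    """Reliability of components in series: all must work."""
    return float(np.prod(r))

def parallel_reliability(r):
    """Reliability of components in parallel: at least one must work."""
    return 1.0 - float(np.prod(1.0 - np.asarray(r)))

# Hypothetical layer reliabilities along a single path (a series system)
path = [0.99, 0.97, 0.95, 0.98]          # e.g. PHY, MAC, NWK, APP
r_path = series_reliability(path)

# Hypothetical mesh segment with three redundant paths in parallel
r_mesh_segment = parallel_reliability([r_path] * 3)

print(f"single path: {r_path:.4f}, 3 redundant paths: {r_mesh_segment:.4f}")
```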

  18. Life Management Skills. Teacher's Guide [and Student Workbook]. Parallel Alternative Strategies for Students (PASS).

    ERIC Educational Resources Information Center

    Goldstein, Jeren; Walford, Sylvia

    This teacher's guide and student workbook are part of a series of supplementary curriculum packages presenting alternative methods and activities designed to meet the needs of Florida secondary students with mild disabilities or other special learning needs. The Life Management Skills PASS (Parallel Alternative Strategies for Students) teacher's…

  19. Language Workbook for Food Service.

    ERIC Educational Resources Information Center

    Mankoski, Linda C.

    This workbook parallels the manual, "Food Service" (see related note), and is designed to assist the language arts or foods service teacher in helping deaf students cope with problems of reading the parallel text. The language system used in this text is based upon the Roberts English Series, which uses a linguistic approach to teaching language…

  20. 10 CFR 434.402 - Building envelope assemblies and materials.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Design Requirements-Electric Systems and Equipment... be determined with due consideration of all major series and parallel heat flow paths through the... thermal transmittance of opaque elements of assemblies shall be determined using a series path procedure...

  1. 10 CFR 434.402 - Building envelope assemblies and materials.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Design Requirements-Electric Systems and Equipment... be determined with due consideration of all major series and parallel heat flow paths through the... thermal transmittance of opaque elements of assemblies shall be determined using a series path procedure...

  2. 10 CFR 434.402 - Building envelope assemblies and materials.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Design Requirements-Electric Systems and Equipment... be determined with due consideration of all major series and parallel heat flow paths through the... thermal transmittance of opaque elements of assemblies shall be determined using a series path procedure...

  3. 10 CFR 434.402 - Building envelope assemblies and materials.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Design Requirements-Electric Systems and Equipment... be determined with due consideration of all major series and parallel heat flow paths through the... thermal transmittance of opaque elements of assemblies shall be determined using a series path procedure...

  4. 10 CFR 434.402 - Building envelope assemblies and materials.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Design Requirements-Electric Systems and Equipment... be determined with due consideration of all major series and parallel heat flow paths through the... thermal transmittance of opaque elements of assemblies shall be determined using a series path procedure...

  5. Application of the Karhunen-Loeve transform temporal image filter to reduce noise in real-time cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Ding, Yu; Chung, Yiu-Cho; Raman, Subha V.; Simonetti, Orlando P.

    2009-06-01

    Real-time dynamic magnetic resonance imaging (MRI) typically sacrifices the signal-to-noise ratio (SNR) to achieve higher spatial and temporal resolution. Spatial and/or temporal filtering (e.g., low-pass filtering or averaging) of dynamic images improves the SNR at the expense of edge sharpness. We describe the application of a temporal filter for dynamic MR image series based on the Karhunen-Loeve transform (KLT) to remove random noise without blurring stationary or moving edges and without requiring training data. In this paper, we present several properties of this filter and their effects on filter performance, and propose an automatic way to find the filter cutoff based on the autocorrelation of the eigenimages. Numerical simulation and in vivo real-time cardiac cine MR image series spanning multiple cardiac cycles acquired using multi-channel sensitivity-encoded MRI, i.e., parallel imaging, are used to validate and demonstrate these properties. We found that in this application, the noise standard deviation was reduced to 42% of the original with no apparent image blurring by using the proposed filter cutoff. Greater noise reduction can be achieved by increasing the length of the image series. This advantage of KLT filtering provides flexibility in the form of another scan parameter to trade for SNR.
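
    A minimal NumPy sketch of KLT-based temporal filtering of an image series is given below; the cutoff here is simply a fixed number of retained eigenimages rather than the autocorrelation-based criterion proposed in the paper, and the data are synthetic.

```python
import numpy as np

def klt_temporal_filter(series, n_keep):
    """Filter an image time series (T, H, W) by keeping the first
    n_keep temporal KLT (principal) components."""
    T, H, W = series.shape
    X = series.reshape(T, H * W)
    mean = X.mean(axis=0)
    Xc = X - mean
    # The temporal covariance matrix (T x T) is small, so eigendecompose it directly.
    C = Xc @ Xc.T / (H * W)
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1]           # largest eigenvalues first
    V = evecs[:, order[:n_keep]]              # retained temporal modes
    Xf = V @ (V.T @ Xc) + mean                # project and reconstruct
    return Xf.reshape(T, H, W)

# Synthetic example: a moving bright square plus Gaussian noise
rng = np.random.default_rng(1)
frames = np.zeros((60, 64, 64))
for t in range(60):
    frames[t, 20:30, 10 + t // 3:20 + t // 3] = 1.0
noisy = frames + 0.3 * rng.standard_normal(frames.shape)
filtered = klt_temporal_filter(noisy, n_keep=8)
```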

  6. Toward a comprehensive landscape vegetation monitoring framework

    NASA Astrophysics Data System (ADS)

    Kennedy, Robert; Hughes, Joseph; Neeti, Neeti; Larrue, Tara; Gregory, Matthew; Roberts, Heather; Ohmann, Janet; Kane, Van; Kane, Jonathan; Hooper, Sam; Nelson, Peder; Cohen, Warren; Yang, Zhiqiang

    2016-04-01

    Blossoming Earth observation resources provide great opportunity to better understand land vegetation dynamics, but also require new techniques and frameworks to exploit their potential. Here, I describe several parallel projects that leverage time-series Landsat imagery to describe vegetation dynamics at regional and continental scales. At the core of these projects are the LandTrendr algorithms, which distill time-series earth observation data into periods of consistent long or short-duration dynamics. In one approach, we built an integrated, empirical framework to blend these algorithmically-processed time-series data with field data and lidar data to ascribe yearly change in forest biomass across the US states of Washington, Oregon, and California. In a separate project, we expanded from forest-only monitoring to full landscape land cover monitoring over the same regional scale, including both categorical class labels and continuous-field estimates. In these and other projects, we apply machine-learning approaches to ascribe all changes in vegetation to driving processes such as harvest, fire, urbanization, etc., allowing full description of both disturbance and recovery processes and drivers. Finally, we are moving toward extension of these same techniques to continental and eventually global scales using Google Earth Engine. Taken together, these approaches provide one framework for describing and understanding processes of change in vegetation communities at broad scales.

  7. Rushed, unhappy, and drained: an experience sampling study of relations between time pressure, perceived control, mood, and emotional exhaustion in a group of accountants.

    PubMed

    Teuchmann, K; Totterdell, P; Parker, S K

    1999-01-01

    Experience sampling methodology was used to examine how work demands translate into acute changes in affective response and thence into chronic response. Seven accountants reported their reactions 3 times a day for 4 weeks on pocket computers. Aggregated analysis showed that mood and emotional exhaustion fluctuated in parallel with time pressure over time. Disaggregated time-series analysis confirmed the direct impact of high-demand periods on the perception of control, time pressure, and mood and the indirect impact on emotional exhaustion. A curvilinear relationship between time pressure and emotional exhaustion was shown. The relationships between work demands and emotional exhaustion changed between high-demand periods and normal working periods. The results suggest that enhancing perceived control may alleviate the negative effects of time pressure.

  8. A regional comparison of solar, heat pump, and solar-heat pump systems

    NASA Astrophysics Data System (ADS)

    Manton, B. E.; Mitchell, J. W.

    1982-08-01

    A comparative study of the thermal and economic performance of the parallel and series solar heat pump systems, stand alone solar and stand alone heat pump systems for residential space and domestic hot water heating for the U.S. using FCHART 4.0 is presented. Results show that the parallel solar heat pump system yields the greatest energy savings in the south. Very low cost collectors (50-150 dollars/sq m) are required for a series solar heat pump system in order for it to compete economically with the better of the parallel or solar systems. Conventional oil or gas furnaces need to have a seasonal efficiency of at least 70-85% in order to save as much primary energy as the best primary system in the northeast. In addition, the implications of these results for current or proposed federal tax credit measures are discussed.

  9. Real-Time Decentralized Neural Control via Backstepping for a Robotic Arm Powered by Industrial Servomotors.

    PubMed

    Vazquez, Luis A; Jurado, Francisco; Castaneda, Carlos E; Santibanez, Victor

    2018-02-01

    This paper presents a continuous-time decentralized neural control scheme for trajectory tracking of a two degrees of freedom direct drive vertical robotic arm. A decentralized recurrent high-order neural network (RHONN) structure is proposed to identify online, in a series-parallel configuration and using the filtered error learning law, the dynamics of the plant. Based on the RHONN subsystems, a local neural controller is derived via backstepping approach. The effectiveness of the decentralized neural controller is validated on a robotic arm platform, of our own design and unknown parameters, which uses industrial servomotors to drive the joints.

  10. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
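
    The on-the-fly domain decomposition mentioned above orders cells along a space-filling curve and cuts that ordering into contiguous chunks; a minimal sketch using a Morton (Z-order) key is shown below. The bit width and the equal-count partitioning are illustrative assumptions, and the paper's partitioner may use a different curve (e.g. Peano-Hilbert) and a work-weighted split.

```python
def morton_key(i, j, k, bits=10):
    """Interleave the bits of integer cell coordinates (i, j, k) to
    form a Morton (Z-order) key along a space-filling curve."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (3 * b)
        key |= ((j >> b) & 1) << (3 * b + 1)
        key |= ((k >> b) & 1) << (3 * b + 2)
    return key

def partition(cells, n_ranks):
    """Sort cells by Morton key and cut the ordering into n_ranks
    contiguous chunks of (nearly) equal size."""
    ordered = sorted(cells, key=lambda c: morton_key(*c))
    size = -(-len(ordered) // n_ranks)   # ceiling division
    return [ordered[r * size:(r + 1) * size] for r in range(n_ranks)]

cells = [(i, j, k) for i in range(8) for j in range(8) for k in range(8)]
chunks = partition(cells, n_ranks=4)
print([len(c) for c in chunks])   # roughly equal work per rank
```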

  11. A controlled time-series trial of clinical reminders: using computerized firm systems to make quality improvement research a routine part of mainstream practice.

    PubMed Central

    Goldberg, H. I.; Neighbor, W. E.; Cheadle, A. D.; Ramsey, S. D.; Diehr, P.; Gore, E.

    2000-01-01

    OBJECTIVE: To explore the feasibility of conducting unobtrusive interventional research in community practice settings by integrating firm-system techniques with time-series analysis of relational-repository data. STUDY SETTING: A satellite teaching clinic divided into two similar, but geographically separated, primary care group practices called firms. One firm was selected by chance to receive the study intervention. Forty-two providers and 2,655 patients participated. STUDY DESIGN: A nonrandomized controlled trial of computer-generated preventive reminders. Net effects were determined by quantitatively combining population-level data from parallel experimental and control interrupted time series extending over two-month baseline and intervention periods. DATA COLLECTION: Mean rates at which mammography, colorectal cancer screening, and cholesterol testing were performed on patients due to receive each maneuver at clinic visits were the trial's outcome measures. PRINCIPAL FINDINGS: Mammography performance increased on the experimental firm by 154 percent (0.24 versus 0.61, p = .03). No effect on fecal occult blood testing was observed. Cholesterol ordering decreased on both the experimental (0.18 versus 0.11, p = .02) and control firms (0.13 versus 0.07, p = .03) coincident with national guidelines retreating from recommending screening for young adults. A traditional uncontrolled interrupted time-series design would have incorrectly attributed the experimental-firm decrease to the introduction of reminders. The combined analysis properly indicated that no net prompting effect had occurred, as the difference between firms in cholesterol testing remained stochastically stable over time (0.05 versus 0.04, p = .75). A logistic-regression analysis applied to individual-level data produced equivalent findings. The trial incurred no supplementary data collection costs. CONCLUSIONS: The apparent validity and practicability of our reminder implementation study should encourage others to develop computerized firm systems capable of conducting controlled time-series trials. PMID:10737451

  12. Solar Cell Modules with Parallel Oriented Interconnections

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Twenty-four solar modules were provided, half with 48 cells in an all-series electrical configuration and half with a six-parallel by eight-series cell configuration. Upon delivery of the environmentally tested modules, low power outputs were discovered. These low-power modules were determined to have cracked cells, which were thought to cause the low output power. The cracks tended to be linear or circular and were caused by different stressing mechanisms. These stressing mechanisms were fully explored. Efforts were undertaken to determine the causes of cell fracture, which resulted in module design and process modifications. The design and process changes were subsequently implemented in production.

  13. Nonpreemptive run-time scheduling issues on a multitasked, multiprogrammed multiprocessor with dependencies, bidimensional tasks, folding and dynamic graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Allan Ray

    1987-05-01

    Increases in high speed hardware have mandated studies in software techniques to exploit the parallel capabilities. This thesis examines the effects a run-time scheduler has on a multiprocessor. The model consists of directed, acyclic graphs, generated from serial FORTRAN benchmark programs by the parallel compiler Parafrase. A multitasked, multiprogrammed environment is created. Dependencies are generated by the compiler. Tasks are bidimensional, i.e., they may specify both time and processor requests. Processor requests may be folded into execution time by the scheduler. The graphs may arrive at arbitrary time intervals. The general case is NP-hard, thus, a variety of heuristics are examined by a simulator. Multiprogramming demonstrates a greater need for a run-time scheduler than does monoprogramming for a variety of reasons, e.g., greater stress on the processors, a larger number of independent control paths, more variety in the task parameters, etc. The dynamic critical path series of algorithms perform well. Dynamic critical volume did not add much. Unfortunately, dynamic critical path maximizes turnaround time as well as throughput. Two schedulers are presented which balance throughput and turnaround time. The first requires classification of jobs by type; the second requires selection of a ratio value which is dependent upon system parameters. 45 refs., 19 figs., 20 tabs.

  14. The paradox of cooling streams in a warming world: Regional climate trends do not parallel variable local trends in stream temperature in the Pacific continental United States

    USGS Publications Warehouse

    Arismendi, Ivan; Johnson, Sherri; Dunham, Jason B.; Haggerty, Roy; Hockman-Wert, David

    2012-01-01

    Temperature is a fundamentally important driver of ecosystem processes in streams. Recent warming of terrestrial climates around the globe has motivated concern about consequent increases in stream temperature. More specifically, observed trends of increasing air temperature and declining stream flow are widely believed to result in corresponding increases in stream temperature. Here, we examined the evidence for this using long-term stream temperature data from minimally and highly human-impacted sites located across the Pacific continental United States. Based on hypothesized climate impacts, we predicted that we should find warming trends in the maximum, mean and minimum temperatures, as well as increasing variability over time. These predictions were not fully realized. Warming trends were most prevalent in a small subset of locations with longer time series beginning in the 1950s. More recent series of observations (1987-2009) exhibited fewer warming trends and more cooling trends in both minimally and highly human-influenced systems. Trends in variability were much less evident, regardless of the length of time series. Based on these findings, we conclude that our perspective of climate impacts on stream temperatures is clouded considerably by a lack of long-term data on minimally impacted streams, and biased spatio-temporal representation of existing time series. Overall, our results highlight the need to develop more mechanistic, process-based understanding of linkages between climate change, other human impacts and stream temperature, and to deploy sensor networks that will provide better information on trends in stream temperatures in the future.
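
    A common way to test for monotonic warming or cooling trends in stream-temperature series of this kind is the Mann-Kendall test; a minimal sketch is shown below. This is a generic illustration using synthetic data, not the authors' actual analysis, which may also account for autocorrelation and seasonality.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the Mann-Kendall S statistic and a two-sided p-value
    (normal approximation, no tie correction) for a monotonic trend in x."""
    x = np.asarray(x)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, p

# Synthetic annual mean stream temperatures with a weak warming trend
rng = np.random.default_rng(2)
years = np.arange(1987, 2010)
temps = 10.0 + 0.01 * (years - years[0]) + 0.3 * rng.standard_normal(len(years))
print(mann_kendall(temps))
```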

  15. AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System

    NASA Astrophysics Data System (ADS)

    Wang, R.; Harris, C.; Wicenec, A.

    2016-07-01

    In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. With this in mind, we looked into various storage backend techniques that could enable parallel I/O for CTDS by implementing new storage managers. After carrying out benchmarks showing the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS-based parallel CTDS storage manager. We then applied the CASA MSTransform frequency split task to verify the ADIOS storage manager. We also ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.

  16. Spontaneous Hot Flow Anomalies at Quasi-Parallel Shocks: 2. Hybrid Simulations

    NASA Technical Reports Server (NTRS)

    Omidi, N.; Zhang, H.; Sibeck, D.; Turner, D.

    2013-01-01

    Motivated by recent THEMIS observations, this paper uses 2.5-D electromagnetic hybrid simulations to investigate the formation of Spontaneous Hot Flow Anomalies (SHFA) upstream of quasi-parallel bow shocks during steady solar wind conditions and in the absence of discontinuities. The results show the formation of a large number of structures along and upstream of the quasi-parallel bow shock. Their outer edges exhibit density and magnetic field enhancements, while their cores exhibit drops in density, magnetic field, solar wind velocity and enhancements in ion temperature. Using virtual spacecraft in the simulation, we show that the signatures of these structures in the time series data are very similar to those of SHFAs seen in THEMIS data and conclude that they correspond to SHFAs. Examination of the simulation data shows that SHFAs form as the result of foreshock cavitons interacting with the bow shock. Foreshock cavitons in turn form due to the nonlinear evolution of ULF waves generated by the interaction of the solar wind with the backstreaming ions. Because foreshock cavitons are an inherent part of the shock dissipation process, the formation of SHFAs is also an inherent part of the dissipation process leading to a highly non-uniform plasma in the quasi-parallel magnetosheath including large scale density and magnetic field cavities.

  17. Dissipation, Voltage Profile and Levy Dragon in a Special Ladder Network

    ERIC Educational Resources Information Center

    Ucak, C.

    2009-01-01

    A ladder network constructed by an elementary two-terminal network consisting of a parallel resistor-inductor block in series with a parallel resistor-capacitor block sometimes is said to have a non-dispersive dissipative response. This special ladder network is created iteratively by replacing the elementary two-terminal network in place of the…

  18. Parallels, How Many? Geometry Module for Use in a Mathematics Laboratory Setting.

    ERIC Educational Resources Information Center

    Brotherton, Sheila; And Others

    This is one of a series of geometry modules developed for use by secondary students in a laboratory setting. This module was conceived as an alternative approach to the usual practice of giving Euclid's parallel postulate and then mentioning that alternate postulates would lead to an alternate geometry or geometries. Instead, the student is led…

  19. A statistical analysis of flank eruptions on Etna volcano

    NASA Astrophysics Data System (ADS)

    Mulargia, Francesco; Tinti, Stefano; Boschi, Enzo

    1985-02-01

    A singularly complete record exists for the eruptive activity of Etna volcano. The time series of occurrence of flank eruptions in the period 1600-1980, in which the record is presumably complete, is found to follow a stationary Poisson process. A revision of the available data shows that eruption durations are rather well correlated with the estimates of the volume of lava flows. This implies that the magnitude of an eruption can be defined directly by its duration. Extreme value statistics are then applied to the time series, using duration as a dependent variable. The probability of occurrence of a very long (300 days) eruption is greater than 50% only in time intervals of the order of 50 years. The correlation found between duration and total output also allows estimation of the probability of occurrence of a major event which exceeds a given duration and total flow of lava. The composite probabilities do not differ considerably from the pure ones. Paralleling a well established application to seismic events, extreme value theory can be profitably used in volcanic risk estimates, provided that appropriate account is also taken of all other variables.
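
    As an illustration of the kind of probability estimate described above, the sketch below combines a stationary Poisson occurrence rate with an exceedance probability for eruption duration; the rate, the duration distribution and its parameter are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

# Hypothetical values, for illustration only
rate_per_year = 0.35          # mean number of flank eruptions per year
mean_duration = 60.0          # mean eruption duration in days (exponential model)

def prob_long_eruption(window_years, threshold_days):
    """Probability that at least one eruption exceeding threshold_days
    occurs within window_years, assuming Poisson occurrences and
    exponentially distributed durations (a thinned Poisson process)."""
    p_exceed = np.exp(-threshold_days / mean_duration)
    thinned_rate = rate_per_year * p_exceed
    return 1.0 - np.exp(-thinned_rate * window_years)

for window in (10, 50, 100):
    print(window, prob_long_eruption(window, threshold_days=300))
```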

  20. In-Situ Three-Dimensional Shape Rendering from Strain Values Obtained Through Optical Fiber Sensors

    NASA Technical Reports Server (NTRS)

    Chan, Hon Man (Inventor); Parker, Jr., Allen R. (Inventor)

    2015-01-01

    A method and system for rendering the shape of a multi-core optical fiber or multi-fiber bundle in three-dimensional space in real time based on measured fiber strain data. Three optical fiber cores are arranged in parallel at 120° intervals about a central axis. A series of longitudinally co-located strain sensor triplets, typically fiber Bragg gratings, are positioned along the length of each fiber at known intervals. A tunable laser interrogates the sensors to detect strain on the fiber cores. Software determines the strain magnitude (ΔL/L) for each fiber at a given triplet and then applies beam theory to calculate curvature, bending angle and torsion of the fiber bundle, and from there it determines the shape of the fiber in a Cartesian coordinate system by solving a series of ordinary differential equations expanded from the Frenet-Serret equations. This approach eliminates the need for computationally time-intensive curve fitting and allows the three-dimensional shape of the optical fiber assembly to be displayed in real time.

  1. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors.

    PubMed

    Lange, Maximilian; Dechant, Benjamin; Rebmann, Corinna; Vohland, Michael; Cuntz, Matthias; Doktor, Daniel

    2017-08-11

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure.
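
    The comparison rests on the standard NDVI definition, NDVI = (NIR - Red) / (NIR + Red); a small sketch computing it from hypothetical band reflectances is shown below (for Sentinel-2, band 8 is commonly used as NIR and band 4 as red; the values here are invented).

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Normalized Difference Vegetation Index from reflectance arrays."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Hypothetical surface reflectances for a few pixels (Sentinel-2: B8 = NIR, B4 = red)
b8 = np.array([0.45, 0.40, 0.10])
b4 = np.array([0.05, 0.08, 0.09])
print(ndvi(b8, b4))   # high values indicate dense green vegetation
```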

  2. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors

    PubMed Central

    Lange, Maximilian; Rebmann, Corinna; Cuntz, Matthias; Doktor, Daniel

    2017-01-01

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure. PMID:28800065

  3. Granger causality--statistical analysis under a configural perspective.

    PubMed

    von Eye, Alexander; Wiedermann, Wolfgang; Mun, Eun-Young

    2014-03-01

    The concept of Granger causality can be used to examine putative causal relations between two series of scores. Based on regression models, it is asked whether one series can be considered the cause for the second series. In this article, we propose extending the pool of methods available for testing hypotheses that are compatible with Granger causation by adopting a configural perspective. This perspective allows researchers to assume that effects exist for specific categories only or for specific sectors of the data space, but not for other categories or sectors. Configural Frequency Analysis (CFA) is proposed as the method of analysis from a configural perspective. CFA base models are derived for the exploratory analysis of Granger causation. These models are specified so that they parallel the regression models used for variable-oriented analysis of hypotheses of Granger causation. An example from the development of aggression in adolescence is used. The example shows that only one pattern of change in aggressive impulses over time Granger-causes change in physical aggression against peers.
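
    For the variable-oriented side of the comparison, a standard regression-based Granger causality test can be run with statsmodels; the sketch below uses synthetic data and is only meant to illustrate the conventional test that the configural (CFA) approach is contrasted with.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic example: y depends on lagged x, so x should Granger-cause y
rng = np.random.default_rng(3)
n = 300
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.5 * rng.standard_normal()

data = pd.DataFrame({"y": y, "x": x})
# Tests whether the second column (x) Granger-causes the first (y), up to maxlag lags
results = grangercausalitytests(data[["y", "x"]], maxlag=2)
```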

  4. Mitigation of intra-channel nonlinearities using a frequency-domain Volterra series equalizer.

    PubMed

    Guiomar, Fernando P; Reis, Jacklyn D; Teixeira, António L; Pinto, Armando N

    2012-01-16

    We address the issue of intra-channel nonlinear compensation using a Volterra series nonlinear equalizer based on an analytical closed-form solution for the 3rd-order Volterra kernel in the frequency domain. The performance of the method is investigated through numerical simulations for a single-channel optical system using a 20 Gbaud NRZ-QPSK test signal propagated over 1600 km of both standard single-mode fiber and non-zero dispersion-shifted fiber. We carry out performance and computational-effort comparisons with the well-known backward-propagation split-step Fourier (BP-SSF) method. The alias-free frequency-domain implementation of the Volterra series nonlinear equalizer makes it an attractive approach at low sampling rates, enabling it to surpass the maximum performance of BP-SSF at 2× oversampling. Linear and nonlinear equalization can be treated independently, providing more flexibility in the equalization subsystem. The parallel structure of the algorithm is also a key advantage in terms of real-time implementation.

  5. 40 CFR 63.10010 - What are my monitoring, installation, operation, and maintenance requirements?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... that emissions are controlled with a common control device or series of control devices, are discharged... parallel control devices or multiple series of control devices are discharged to the atmosphere through... quality control activities (including, as applicable, calibration checks and required zero and span...

  6. 40 CFR 63.10010 - What are my monitoring, installation, operation, and maintenance requirements?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... that emissions are controlled with a common control device or series of control devices, are discharged... parallel control devices or multiple series of control devices are discharged to the atmosphere through... quality control activities (including, as applicable, calibration checks and required zero and span...

  7. Library Statistics of Colleges and Universities, 1963-1964. Analytic Report.

    ERIC Educational Resources Information Center

    Samore, Theodore

    The series of analytic reports on management and salary data of the academic libraries, paralleling the series titled "Library Statistics of Colleges and Universities, Institutional Data," is continued by this publication. The statistical tables of this report are of value to administrators, librarians, and others because: (1) they help…

  8. Parallel closure theory for toroidally confined plasmas

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Held, Eric D.

    2017-10-01

    We solve a system of general moment equations to obtain parallel closures for electrons and ions in an axisymmetric toroidal magnetic field. Magnetic field gradient terms are kept and treated using the Fourier series method. Assuming lowest-order density (pressure) and temperature to be flux labels, the parallel heat flow, friction, and viscosity are expressed in terms of radial gradients of the lowest-order temperature and pressure, parallel gradients of temperature and parallel flow, and the relative electron-ion parallel flow velocity. Convergence of the closure quantities is demonstrated as the number of moments and Fourier modes is increased. Properties of the moment equations in the collisionless limit are also discussed. By combining the closures with the fluid equations, the parallel mass flow and electric current are also obtained. Work in collaboration with the PSI Center and supported by the U.S. DOE under Grant Nos. DE-SC0014033, DE-SC0016256, and DE-FG02-04ER54746.

  9. Parallelization of ARC3D with Computer-Aided Tools

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; Hribar, Michelle; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    A series of efforts have been devoted to investigating methods of porting and parallelizing applications quickly and efficiently for new architectures, such as the SGI Origin 2000 and Cray T3E. This report presents the parallelization of a CFD application, ARC3D, using the computer-aided tools CAPTools. Steps of parallelizing this code and requirements for achieving better performance are discussed. The generated parallel version has achieved reasonably good performance, for example a speedup of 30 on 36 Cray T3E processors. However, this performance could not be obtained without modification of the original serial code. It is suggested that in many cases improving the serial code and performing necessary code transformations are important parts of the automated parallelization process, although user intervention in many of these parts is still necessary. Nevertheless, development and improvement of useful software tools, such as CAPTools, can help trim down many tedious parallelization details and improve the processing efficiency.

  10. Design Patterns to Achieve 300x Speedup for Oceanographic Analytics in the Cloud

    NASA Astrophysics Data System (ADS)

    Jacob, J. C.; Greguska, F. R., III; Huang, T.; Quach, N.; Wilson, B. D.

    2017-12-01

    We describe how we achieve super-linear speedup over standard approaches for oceanographic analytics on a cluster computer and the Amazon Web Services (AWS) cloud. NEXUS is an open source platform for big data analytics in the cloud that enables this performance through a combination of horizontally scalable data parallelism with Apache Spark and rapid data search, subset, and retrieval with tiled array storage in cloud-aware NoSQL databases like Solr and Cassandra. NEXUS is the engine behind several public portals at NASA, and OceanWorks is a newly funded project for the ocean community that will mature and extend this capability for improved data discovery, subset, quality screening, analysis, matchup of satellite and in situ measurements, and visualization. We review the Python API for Spark and how to use it to quickly convert existing programs to run with cloud-scale parallelism, and discuss strategies to improve performance. We explain how partitioning the data over space, time, or both leads to algorithmic design patterns for Spark analytics that can be applied to many different algorithms. We use NEXUS analytics as examples, including area-averaged time series, time averaged map, and correlation map.
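
    As a sketch of the data-parallel design pattern described here (partitioning tiles by time and reducing per time step), the following PySpark fragment computes an area-averaged time series; the tile format and field layout are hypothetical and do not reflect the actual NEXUS/OceanWorks schema.

```python
from pyspark.sql import SparkSession
import numpy as np

spark = SparkSession.builder.appName("area_averaged_time_series").getOrCreate()
sc = spark.sparkContext

# Hypothetical tiles: (time_step, 2-D data array) pairs; in a NEXUS-like system
# these would be fetched from a tiled array store such as Cassandra or Solr.
rng = np.random.default_rng(4)
tiles = [(t, rng.standard_normal((16, 16)) + 20.0) for t in range(100) for _ in range(8)]
rdd = sc.parallelize(tiles, numSlices=32)

# Map each tile to a partial (sum, count), reduce per time step, then average.
partials = rdd.map(lambda kv: (kv[0], (float(np.nansum(kv[1])), int(np.isfinite(kv[1]).sum()))))
sums = partials.reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
time_series = sums.mapValues(lambda v: v[0] / v[1]).sortByKey().collect()
```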

  11. High accuracy mantle convection simulation through modern numerical methods - II: realistic models and problems

    NASA Astrophysics Data System (ADS)

    Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang

    2017-08-01

    Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.

  12. Numerical Analysis of Ginzburg-Landau Models for Superconductivity.

    NASA Astrophysics Data System (ADS)

    Coskun, Erhan

    Thin-film conventional as well as high-T_c superconductors of various geometric shapes placed under both uniform and variable-strength magnetic fields are studied using the universally accepted macroscopic Ginzburg-Landau model. A series of new theoretical results concerning the properties of the solution is presented using the semi-discrete time-dependent Ginzburg-Landau equations, a staggered grid setup and natural boundary conditions. Efficient serial algorithms, including a novel adaptive algorithm, are developed and successfully implemented for solving the governing highly nonlinear parabolic system of equations. The refinement technique used in the adaptive algorithm is based on a modified forward Euler method, which was also developed by us to ease the restriction on time step size imposed by stability considerations. Stability and convergence properties of the forward and modified forward Euler schemes are studied. Numerical simulations of various recent physical experiments of technological importance, such as vortex motion and pinning, are performed. The numerical code for solving the time-dependent Ginzburg-Landau equations is parallelized using BlockComm-Chameleon and PCN. The parallel code was run on the distributed-memory multiprocessors Intel iPSC/860 and IBM SP1 and on a cluster of Sun SPARC workstations, all located at the Mathematics and Computer Science Division, Argonne National Laboratory.

  13. Volumic visual perception: principally novel concept

    NASA Astrophysics Data System (ADS)

    Petrov, Valery

    1996-01-01

    The general concept of volumic view (VV) as a universal property of space is introduced. VV exists at every point of the universe that electromagnetic (EM) waves can reach and where a point or quasi-point receiver (detector) of EM waves can be placed. A classification of receivers is given for the first time; they fall into three main categories: biological, man-made non-biological, and mathematically specified hypothetical receivers. The principally novel concept of volumic perception is introduced. It differs chiefly from the traditional concept, which traces back to Euclid and pre-Euclidean times and, much later, to the discoveries of Leonardo da Vinci and Giovanni Battista della Porta and to practical stereoscopy as introduced by C. Wheatstone. The basic idea of the novel concept is that humans and animals acquire volumic visual data flows in series rather than in parallel. In this case the brain is free from extremely sophisticated real-time parallel processing of two volumic visual data flows in order to combine them. Such a procedure seems hardly probable even for humans, who are unable to combine two primitive static stereoscopic images into one in less than a few seconds.

  14. Nonlinear Dynamics of a Multistage Gear Transmission System with Multi-Clearance

    NASA Astrophysics Data System (ADS)

    Xiang, Ling; Zhang, Yue; Gao, Nan; Hu, Aijun; Xing, Jingtang

    The nonlinear torsional model of a multistage gear transmission system, which consists of a planetary gear stage and two parallel gear stages, is established with time-varying meshing stiffness, comprehensive gear error and multiple clearances. The nonlinear dynamic responses are analyzed using the backlash as the bifurcation parameter. The motions of the system as the backlash changes are identified through the global bifurcation diagram, the largest Lyapunov exponent (LLE), FFT spectra, Poincaré maps, phase diagrams and time series. The numerical results demonstrate that the system exhibits rich nonlinear dynamics, including periodic motion, non-periodic states and chaotic states. It is found that the sun-planet backlash has a more complex effect on the system than the ring-planet backlash. The motions of the system with backlash in the parallel gear stages are diverse, including several different multi-periodic motions. Furthermore, the state of the system can change from chaos into quasi-periodic behavior, which means that the dynamic behavior of the system is composed of more stable components as the backlash increases. Correspondingly, the parameters of the system should be designed properly and controlled in a timely manner for better operation and a longer system life.

  15. Millennial-scale Climate Variations Recorded As Far Back As The Early Pliocene

    NASA Astrophysics Data System (ADS)

    Steenbrink, J.; Hilgen, F. J.; Lourens, L. J.

    Quaternary climate proxy records show compelling evidence for climate variability on time scales of a few thousand years. The causes for these millennial-scale or sub-Milankovitch cycles are as yet poorly understood, not least because of the complex feedback mechanisms of large ice-sheets during the Quaternary. We present evidence of millennial-scale climate variability in Early Pliocene lacustrine sediments from the intramontane Ptolemais Basin in northwestern Greece. The sediments are well exposed in a series of open-pit lignite mines and exhibit a distinct m-scale sedimentary cyclicity of alternating lignites and lacustrine marl beds that result from precession-induced variations in climate. A higher-frequency cyclicity is particularly prominent within the marl segment of individual cycles. A stratigraphic interval of ~115 kyr, covering five precession-induced sedimentary cycles, was studied in nine parallel sections from two quarries located several km apart. Colour reflectance records were used to quantify the within-cycle variability and to determine its lateral continuity. Much of the within-cycle variability could be correlated between the parallel sections, even in fine detail, which suggests that these changes reflect basin-wide variations in environmental conditions related to (regional) climate fluctuations. Interbedded volcanic ash beds demonstrate the synchronicity of these fluctuations, and spectral analysis of the reflectance time series shows a significant concentration of variability at periods of ~11, ~5.5 and ~2 kyr. Their occurrence at times before the intensification of the Northern Hemisphere glaciation suggests that they cannot solely have resulted from internal ice-sheet dynamics. Possible candidates include harmonics or combination tones of the main orbital cycles, variations in solar output or periodic motions of the Earth and moon.
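
    A minimal sketch of the kind of spectral check used to pick out periods near 11, 5.5 and 2 kyr from an evenly resampled reflectance series is shown below; the series here is synthetic and the sampling interval is an assumption, so this is only an illustration of the analysis, not the authors' actual spectral method.

```python
import numpy as np

# Synthetic colour-reflectance series sampled every 0.1 kyr over 115 kyr
dt = 0.1                                   # kyr per sample (assumed)
t = np.arange(0, 115, dt)
rng = np.random.default_rng(5)
signal = (np.sin(2 * np.pi * t / 11.0)
          + 0.5 * np.sin(2 * np.pi * t / 5.5)
          + 0.3 * np.sin(2 * np.pi * t / 2.0)
          + 0.4 * rng.standard_normal(t.size))

# Periodogram via FFT; peaks mark the dominant periodicities
freqs = np.fft.rfftfreq(t.size, d=dt)      # cycles per kyr
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
top = freqs[np.argsort(power)[::-1][:3]]
print("dominant periods (kyr):", np.sort(1.0 / top))
```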

  16. GPU Computing in Bayesian Inference of Realized Stochastic Volatility Model

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2015-01-01

    The realized stochastic volatility (RSV) model, which uses the realized volatility as additional information, has been proposed to infer the volatility of financial time series. We consider Bayesian inference of the RSV model with the Hybrid Monte Carlo (HMC) algorithm. The HMC algorithm can be parallelized and thus performed on a GPU for speedup. The GPU code is developed with CUDA Fortran. We compare the computational time of the HMC algorithm on a GPU (GTX 760) and a CPU (Intel i7-4770, 3.4 GHz) and find that the GPU can be up to 17 times faster than the CPU. We also code the program with OpenACC and find that appropriate coding can achieve a similar speedup to CUDA Fortran.
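
    A minimal NumPy sketch of a single Hybrid (Hamiltonian) Monte Carlo update for a toy Gaussian target is shown below; it illustrates the leapfrog-plus-accept/reject structure that the paper parallelizes on the GPU, not the RSV model itself.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_prob(theta):
    """Toy target: standard normal log-density (up to a constant)."""
    return -0.5 * np.sum(theta ** 2)

def grad_log_prob(theta):
    return -theta

def hmc_step(theta, step_size=0.1, n_leapfrog=20):
    """One HMC update: sample momentum, integrate with leapfrog, accept/reject."""
    p = rng.standard_normal(theta.shape)
    theta_new, p_new = theta.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(theta_new)      # half step in momentum
    for _ in range(n_leapfrog - 1):
        theta_new += step_size * p_new                       # full step in position
        p_new += step_size * grad_log_prob(theta_new)        # full step in momentum
    theta_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(theta_new)      # final half step
    # Metropolis acceptance based on the change in total "energy"
    h_old = -log_prob(theta) + 0.5 * np.sum(p ** 2)
    h_new = -log_prob(theta_new) + 0.5 * np.sum(p_new ** 2)
    return theta_new if rng.random() < np.exp(h_old - h_new) else theta

theta = np.zeros(5)
samples = []
for _ in range(1000):
    theta = hmc_step(theta)
    samples.append(theta.copy())
```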

  17. Terahertz wide aperture reflection tomography.

    PubMed

    Pearce, Jeremy; Choi, Hyeokho; Mittleman, Daniel M; White, Jeff; Zimdars, David

    2005-07-01

    We describe a powerful imaging modality for terahertz (THz) radiation, THz wide aperture reflection tomography (WART). Edge maps of an object's cross section are reconstructed from a series of time-domain reflection measurements at different viewing angles. Each measurement corresponds to a parallel line projection of the object's cross section. The filtered backprojection algorithm is applied to recover the image from the projection data. To our knowledge, this is the first demonstration of a reflection computed tomography technique using electromagnetic waves. We demonstrate the capabilities of THz WART by imaging the cross sections of two test objects.
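
    A small sketch of the filtered backprojection step is given below, using scikit-image's radon/iradon on a synthetic phantom in place of actual THz reflection projections; it illustrates the reconstruction principle only, not the authors' processing chain.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Synthetic cross-section standing in for the measured object
image = shepp_logan_phantom()

# Parallel-line projections at a set of viewing angles (the sinogram);
# in THz WART each column would come from a time-domain reflection trace.
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(image, theta=angles)

# Filtered backprojection (default ramp filter) recovers the cross-section
reconstruction = iradon(sinogram, theta=angles)
```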

  18. JCell--a Java-based framework for inferring regulatory networks from time series data.

    PubMed

    Spieth, C; Supper, J; Streichert, F; Speer, N; Zell, A

    2006-08-15

    JCell is a Java-based application for reconstructing gene regulatory networks from experimental data. The framework provides several algorithms to identify genetic and metabolic dependencies based on experimental data conjoint with mathematical models to describe and simulate regulatory systems. Owing to the modular structure, researchers can easily implement new methods. JCell is a pure Java application with additional scripting capabilities and thus widely usable, e.g. on parallel or cluster computers. The software is freely available for download at http://www-ra.informatik.uni-tuebingen.de/software/JCell.

  19. Multispot single-molecule FRET: High-throughput analysis of freely diffusing molecules

    PubMed Central

    Panzeri, Francesco

    2017-01-01

    We describe an 8-spot confocal setup for high-throughput smFRET assays and illustrate its performance with two characteristic experiments. First, measurements on a series of freely diffusing doubly-labeled dsDNA samples allow us to demonstrate that data acquired in multiple spots in parallel can be properly corrected and result in measured sample characteristics consistent with those obtained with a standard single-spot setup. We then take advantage of the higher throughput provided by parallel acquisition to address an outstanding question about the kinetics of the initial steps of bacterial RNA transcription. Our real-time kinetic analysis of promoter escape by bacterial RNA polymerase confirms results obtained by a more indirect route, shedding additional light on the initial steps of transcription. Finally, we discuss the advantages of our multispot setup, while pointing out potential limitations of the current single-laser-excitation design, as well as analysis challenges and their solutions. PMID:28419142

  20. Dielectric monitoring of carbon nanotube network formation in curing thermosetting nanocomposites

    NASA Astrophysics Data System (ADS)

    Battisti, A.; Skordos, A. A.; Partridge, I. K.

    2009-08-01

    This paper focuses on monitoring of carbon nanotube (CNT) network development during the cure of unsaturated polyester nanocomposites by means of electrical impedance spectroscopy. A phenomenological model of the dielectric response is developed using equivalent circuit analysis. The model comprises two parallel RC elements connected in series, each of them giving rise to a semicircular arc in impedance complex plane plots. An established inverse modelling methodology is utilized for the estimation of the parameters of the corresponding equivalent circuit. This allows a quantification of the evolution of two separate processes corresponding to the two parallel RC elements. The high frequency process, which is attributed to CNT aggregates, shows a monotonic decrease in characteristic time during the cure. In contrast, the low frequency process, which corresponds to inter-aggregate phenomena, shows a more complex behaviour explained by the interplay between conductive network development and the cross-linking of the polymer.
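
    The equivalent circuit described above (two parallel RC elements connected in series) has a simple closed-form impedance; the sketch below evaluates it over frequency with invented component values, so that each RC element produces its own semicircular arc in the complex-impedance plane.

```python
import numpy as np

def parallel_rc_impedance(r, c, omega):
    """Impedance of a resistor and capacitor in parallel."""
    return r / (1.0 + 1j * omega * r * c)

# Hypothetical values: a fast (aggregate) process and a slow (inter-aggregate) one
r1, c1 = 1e3, 1e-9     # high-frequency arc
r2, c2 = 1e5, 1e-6     # low-frequency arc

freqs = np.logspace(-1, 7, 400)          # Hz
omega = 2 * np.pi * freqs
z = parallel_rc_impedance(r1, c1, omega) + parallel_rc_impedance(r2, c2, omega)

# Characteristic times of the two processes
print("tau1 =", r1 * c1, "s, tau2 =", r2 * c2, "s")
# Plotting z.real against -z.imag would show the two semicircular arcs.
```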

  1. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially separated polarization components of a laser with a digital micromirror device; the components are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  2. Microelectromechanical filter formed from parallel-connected lattice networks of contour-mode resonators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojciechowski, Kenneth E; Olsson, III, Roy H; Ziaei-Moayyed, Maryam

    2013-07-30

    A microelectromechanical (MEM) filter is disclosed which has a plurality of lattice networks formed on a substrate and electrically connected together in parallel. Each lattice network has a series resonant frequency and a shunt resonant frequency provided by one or more contour-mode resonators in the lattice network. Different types of contour-mode resonators including single input, single output resonators, differential resonators, balun resonators, and ring resonators can be used in the MEM filter. The MEM filter can have a center frequency in the range of 10 MHz-10 GHz, with a filter bandwidth of up to about 1% when all of the lattice networks have the same series resonant frequency and the same shunt resonant frequency. The filter bandwidth can be increased up to about 5% by using unique series and shunt resonant frequencies for the lattice networks.

  3. Depth-Dependent Anisotropies of Amides and Sugar in Perpendicular and Parallel Sections of Articular Cartilage by Fourier Transform Infrared Imaging (FTIRI)

    PubMed Central

    Xia, Yang; Mittelstaedt, Daniel; Ramakrishnan, Nagarajan; Szarko, Matthew; Bidthanapally, Aruna

    2010-01-01

    Full thickness blocks of canine humeral cartilage were microtomed into both perpendicular sections and a series of 100 parallel sections, each 6 μm thick. Fourier Transform Infrared Imaging (FTIRI) was used to image each tissue section eleven times under different infrared polarizations (from 0° to 180° polarization states in 20° increments and with an additional 90° polarization), at a spatial resolution of 6.25 μm and a wavenumber step of 8 cm−1. With increasing depth from the articular surface, amide anisotropies increased in the perpendicular sections and decreased in the parallel sections. Both types of tissue sectioning identified a 90° difference between amide I and amide II in the superficial zone of cartilage. The fibrillar distribution in the parallel sections from the superficial zone was shown to not be random. Sugar had the greatest anisotropy in the upper part of the radial zone in the perpendicular sections. The depth-dependent anisotropic data were fitted with a theoretical equation that contained three signature parameters, which illustrate the arcade structure of collagens with the aid of a fibril model. Infrared imaging of both perpendicular and parallel sections provides the possibility of determining the three-dimensional macromolecular structures in articular cartilage. Being sensitive to the orientation of the macromolecular structure in healthy articular cartilage aids the prospect of detecting the early onset of the tissue degradation that may lead to pathological conditions such as osteoarthritis. PMID:21274999

  4. Location and characteristics of the reconnection X-line deduced from low-altitude satellite and radar observations

    NASA Technical Reports Server (NTRS)

    Lockwood, M.; Davis, C. J.; Smith, M. F.; Onsager, T. G.; Denig, W. F.

    1994-01-01

    We present an analysis of a cusp ion step observed between two poleward-moving events of enhanced ionospheric electron temperature. From the computed variation of the reconnection rate and the onset times of the associated ionospheric events, the distance between the satellite and the X-line can be estimated, but with a large uncertainty arising from the uncertainty in determining the low-energy cut-off of the ion velocity distribution function, f(E). Nevertheless, analysis of the time series f(t) shows the reconnection site to be on the dayside magnetopause, consistent with the pulsating cusp model, and the best estimate of the X-line location is 13 R(E) from the satellite. The ion precipitation is used to reconstruct the field-parallel part of the Cowley-D ion distribution function injected into the open low latitude boundary layer (LLBL) in the vicinity of the X-line. From this, the Alfven speed, plasma density, magnetic field, parallel ion temperature, and flow velocity of the magnetosheath near the X-line can be derived.

  5. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    PubMed Central

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew

    2008-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755

  6. Space shuttle system program definition. Volume 4: Cost and schedule report

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The supporting cost and schedule data for the second half of the Space Shuttle System Phase B Extension Study is summarized. The major objective for this period was to address the cost/schedule differences affecting final selection of the HO orbiter space shuttle system. The contending options under study included the following booster launch configurations: (1) series burn ballistic recoverable booster (BRB), (2) parallel burn ballistic recoverable booster (BRB), (3) series burn solid rocket motors (SRM's), and (4) parallel burn solid rocket motors (SRM's). The implications of varying payload bay sizes for the orbiter, engine type for the ballistics recoverable booster, and SRM motors for the solid booster were examined.

  7. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarje, Abhinav; Jacobsen, Douglas W.; Williams, Samuel W.

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards the exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it in a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  8. Solar panel acceptance testing using a pulsed solar simulator

    NASA Technical Reports Server (NTRS)

    Hershey, T. L.

    1977-01-01

    Utilizing specific parameters such as the area of an individual cell, the number of cells in series and parallel, and established current and voltage temperature coefficients, a solar array irradiated with one solar constant at AM0 and at ambient temperature can be characterized by a current-voltage curve for different intensities, temperatures, and even different configurations. Calibration techniques include uniformity in area, depth and time; absolute and transfer irradiance standards; and dynamic and functional check-out procedures. Typical data are given for an individual cell (2x2 cm) up to a complete flat solar array (5x5 feet) with 2660 cells, and for cylindrical test items with up to 10,000 cells. The time and energy savings of such testing techniques are emphasized.
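
    As a rough illustration of the kind of scaling described above (cell area, series/parallel cell counts, and temperature coefficients), the sketch below maps a reference cell operating point to array-level current and voltage. All parameter names and values are hypothetical, not taken from the paper.

```python
# Illustrative scaling of a reference cell I-V point to array level using cell
# area, series/parallel counts, and temperature coefficients. All numbers are
# hypothetical placeholders.
def array_iv_point(j_sc_ref, v_oc_ref, cell_area_cm2, n_series, n_parallel,
                   irradiance_suns=1.0, temp_c=25.0,
                   alpha_i=0.0005, beta_v=-0.0023):
    """Approximate array short-circuit current (A) and open-circuit voltage (V).
    alpha_i: fractional current change per deg C; beta_v: V change per deg C."""
    dT = temp_c - 25.0
    i_cell = j_sc_ref * cell_area_cm2 * irradiance_suns * (1.0 + alpha_i * dT)
    v_cell = v_oc_ref + beta_v * dT
    return i_cell * n_parallel, v_cell * n_series

# Example: a 2 cm x 2 cm cell, 95 cells in series, 28 strings in parallel.
isc, voc = array_iv_point(j_sc_ref=0.035, v_oc_ref=0.58, cell_area_cm2=4.0,
                          n_series=95, n_parallel=28, temp_c=50.0)
print(f"array Isc ~ {isc:.1f} A, array Voc ~ {voc:.1f} V")
```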

  9. Evaluation of NoSQL databases for DIRAC monitoring and beyond

    NASA Astrophysics Data System (ADS)

    Mathe, Z.; Casajus Ramo, A.; Stagni, F.; Tomassetti, L.

    2015-12-01

    Nowadays, many database systems are available but they may not be optimized for storing time series data. Monitoring DIRAC jobs would be better done using a database optimized for storing time series data. So far this was done using a MySQL database, which is not well suited for such an application. Therefore alternatives have been investigated. Choosing an appropriate database for storing huge amounts of time series data is not trivial, as one must take into account different aspects such as manageability, scalability and extensibility. We compared the performance of the Elasticsearch, OpenTSDB (based on HBase) and InfluxDB NoSQL databases, using the same set of machines and the same data. We also evaluated the effort required for maintaining them. Using the LHCb Workload Management System (WMS), based on DIRAC, as a use case, we set up a new monitoring system in parallel with the current MySQL system and stored the same data in the databases under test. We evaluated Grafana (for OpenTSDB) and Kibana (for Elasticsearch) metrics and graph editors for creating dashboards, in order to have a clear picture of the usability of each candidate. In this paper we present the results of this study and the performance of the selected technology. We also give an outlook of other potential applications of NoSQL databases within the DIRAC project.

  10. Inferring the 1985-2014 impact of mobile phone use on selected brain cancer subtypes using Bayesian structural time series and synthetic controls.

    PubMed

    de Vocht, Frank

    2016-12-01

    Mobile phone use has been increasing rapidly in the past decades and, in parallel, so has the annual incidence of certain types of brain cancers. However, it remains unclear whether this correlation is coincidental or whether use of mobile phones may cause the development, promotion or progression of specific cancers. The 1985-2014 incidence of selected brain cancer subtypes in England was analyzed and compared to counterfactual 'synthetic control' time series. Annual 1985-2014 incidence of malignant glioma, glioblastoma multiforme, and malignant neoplasms of the temporal and parietal lobes in England was modelled based on population-level covariates using Bayesian structural time series models assuming 5, 10 and 15 year minimal latency periods. Post-latency counterfactual 'synthetic England' time series were nowcast based on covariate trends. The impact of mobile phone use was inferred from differences between measured and modelled time series. There is no evidence of an increase in malignant glioma, glioblastoma multiforme, or malignant neoplasms of the parietal lobe not predicted in the 'synthetic England' time series. Malignant neoplasms of the temporal lobe, however, have increased faster than expected. A latency period of 10 years was the earliest latency period at which this increase was measurable and related to mobile phone penetration rates, and indicated an additional increase of 35% (95% Credible Interval 9%:59%) during 2005-2014, corresponding to an additional 188 (95% CI 48-324) cases annually. A causal factor consistent with the hypothesized temporal association, of which mobile phone use (and possibly other wireless equipment) is one candidate, is related to an increased risk of developing malignant neoplasms in the temporal lobe. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
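
    A minimal counterfactual sketch in the spirit of this study is shown below: fit a structural time series model on pre-exposure data with a covariate, project a 'synthetic' series forward, and compare it with what was observed. It uses statsmodels' UnobservedComponents rather than the paper's BSTS implementation, and the data are synthetic placeholders.

```python
# Counterfactual sketch: fit a structural time series model on a pre-period,
# forecast a "synthetic" series from covariates, and measure post-period excess.
# Not the paper's BSTS model; all data below are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_pre, n_post = 20, 10
trend = np.linspace(10.0, 14.0, n_pre + n_post)
covariate = trend + rng.normal(scale=0.2, size=n_pre + n_post)
observed = trend + rng.normal(scale=0.3, size=n_pre + n_post)
observed[n_pre:] += 1.5          # hypothetical post-period excess incidence

model = sm.tsa.UnobservedComponents(observed[:n_pre],
                                    level="local linear trend",
                                    exog=covariate[:n_pre].reshape(-1, 1))
result = model.fit(disp=False)
forecast = result.get_forecast(steps=n_post,
                               exog=covariate[n_pre:].reshape(-1, 1))
excess = observed[n_pre:] - forecast.predicted_mean
print("mean post-period excess over counterfactual: %.2f" % excess.mean())
```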

  11. A concise evidence-based physical examination for diagnosis of acromioclavicular joint pathology: a systematic review.

    PubMed

    Krill, Michael K; Rosas, Samuel; Kwon, KiHyun; Dakkak, Andrew; Nwachukwu, Benedict U; McCormick, Frank

    2018-02-01

    The clinical examination of the shoulder joint is an undervalued diagnostic tool for evaluating acromioclavicular (AC) joint pathology. Applying evidence-based clinical tests enables providers to make an accurate diagnosis and minimize costly imaging procedures and potential delays in care. The purpose of this study was to create a decision tree analysis enabling simple and accurate diagnosis of AC joint pathology. A systematic review of the Medline, Ovid and Cochrane Review databases was performed to identify level one and two diagnostic studies evaluating clinical tests for AC joint pathology. Individual test characteristics were combined in series and in parallel to improve sensitivities and specificities. A secondary analysis utilized subjective pre-test probabilities to create a clinical decision tree algorithm with post-test probabilities. The optimal special test combination to screen and confirm AC joint pathology combined Paxinos sign and O'Brien's Test, with a specificity of 95.8% when performed in series; whereas, Paxinos sign and Hawkins-Kennedy Test demonstrated a sensitivity of 93.7% when performed in parallel. Paxinos sign and O'Brien's Test demonstrated the greatest positive likelihood ratio (2.71); whereas, Paxinos sign and Hawkins-Kennedy Test reported the lowest negative likelihood ratio (0.35). No combination of special tests performed in series or in parallel creates more than a small impact on post-test probabilities to screen or confirm AC joint pathology. Paxinos sign and O'Brien's Test is the only special test combination that has a small and sometimes important impact when used both in series and in parallel. Physical examination testing is not beneficial for diagnosis of AC joint pathology when pretest probability is unequivocal. In these instances, it is of benefit to proceed with procedural tests to evaluate AC joint pathology. Ultrasound-guided corticosteroid injections are diagnostic and therapeutic. An ultrasound-guided AC joint corticosteroid injection may be an appropriate new standard for treatment and surgical decision-making. II - Systematic Review.
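
    The series/parallel combination rules used in such analyses follow from treating the two tests as independent: testing 'in series' requires both tests to be positive, while 'in parallel' requires either. The sketch below applies the standard formulas; the sensitivity/specificity inputs are placeholders, not the pooled estimates reported above.

```python
# Standard rules for combining two (assumed independent) clinical tests:
# "in series" requires both tests positive, "in parallel" requires either.
# The input values below are placeholders, not the pooled estimates above.
def combine_series(se1, sp1, se2, sp2):
    se = se1 * se2                        # both must detect the condition
    sp = 1.0 - (1.0 - sp1) * (1.0 - sp2)  # a false positive needs both tests wrong
    return se, sp

def combine_parallel(se1, sp1, se2, sp2):
    se = 1.0 - (1.0 - se1) * (1.0 - se2)  # either test may detect the condition
    sp = sp1 * sp2                        # both must be negative to rule out
    return se, sp

def likelihood_ratios(se, sp):
    return se / (1.0 - sp), (1.0 - se) / sp

se_s, sp_s = combine_series(0.79, 0.50, 0.67, 0.92)
se_p, sp_p = combine_parallel(0.79, 0.50, 0.67, 0.92)
print("series:   Se=%.2f Sp=%.2f LR+=%.2f" % (se_s, sp_s, likelihood_ratios(se_s, sp_s)[0]))
print("parallel: Se=%.2f Sp=%.2f LR-=%.2f" % (se_p, sp_p, likelihood_ratios(se_p, sp_p)[1]))
```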

  12. Real-time liquid-crystal atmosphere turbulence simulator with graphic processing unit.

    PubMed

    Hu, Lifa; Xuan, Li; Li, Dayu; Cao, Zhaoliang; Mu, Quanquan; Liu, Yonggang; Peng, Zenghui; Lu, Xinghai

    2009-04-27

    To generate time-evolving atmosphere turbulence in real time, a phase-generating method for our liquid-crystal (LC) atmosphere turbulence simulator (ATS) is derived based on the Fourier series (FS) method. A real matrix expression for generating turbulence phases is given and calculated with a graphic processing unit (GPU), the GeForce 8800 Ultra. A liquid crystal on silicon (LCOS) with 256x256 pixels is used as the turbulence simulator. The total time to generate a turbulence phase is about 7.8 ms for calculation and readout with the GPU. A parallel processing method of calculating and sending a picture to the LCOS is used to improve the simulating speed of our LC ATS. Therefore, the real-time turbulence phase-generation frequency of our LC ATS is up to 128 Hz. To our knowledge, it is the highest speed used to generate a turbulence phase in real time.
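
    For context, a generic FFT-based Kolmogorov phase-screen generator is sketched below. It is not the paper's Fourier-series matrix formulation for the LC ATS (and its absolute scaling is only approximate), but it shows the basic idea of generating turbulence phases from a spectral model.

```python
# Generic FFT-based Kolmogorov phase-screen sketch, included only to illustrate
# spectral generation of turbulence phases; it is not the paper's Fourier-series
# matrix formulation, and the absolute scaling is approximate.
import numpy as np

def phase_screen(n=256, pixel_scale=0.01, r0=0.1, seed=0):
    """n x n phase screen (radians); pixel_scale in metres, r0 = Fried parameter."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=pixel_scale)
    kx, ky = np.meshgrid(fx, fx)
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                                   # avoid division by zero at DC
    psd = 0.023 * r0 ** (-5.0 / 3.0) * k ** (-11.0 / 3.0)
    psd[0, 0] = 0.0                                 # no piston term
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    screen = np.fft.ifft2(noise * np.sqrt(psd)) * n / pixel_scale
    return screen.real

phi = phase_screen()
print("phase screen rms: %.1f rad (arbitrary overall scale)" % phi.std())
```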

  13. Massively Multithreaded Maxflow for Image Segmentation on the Cray XMT-2

    PubMed Central

    Bokhari, Shahid H.; Çatalyürek, Ümit V.; Gurcan, Metin N.

    2014-01-01

    Image segmentation is a very important step in the computerized analysis of digital images. The maxflow mincut approach has been successfully used to obtain minimum energy segmentations of images in many fields. Classical algorithms for maxflow in networks do not directly lend themselves to efficient parallel implementations on contemporary parallel processors. We present the results of an implementation of the Goldberg-Tarjan preflow-push algorithm on the Cray XMT-2 massively multithreaded supercomputer. This machine has hardware support for 128 threads in each physical processor, a uniformly accessible shared memory of up to 4 TB and hardware synchronization for each 64 bit word. It is thus well-suited to the parallelization of graph theoretic algorithms, such as preflow-push. We describe the implementation of the preflow-push code on the XMT-2 and present the results of timing experiments on a series of synthetically generated as well as real images. Our results indicate very good performance on large images and pave the way for practical applications of this machine architecture for image analysis in a production setting. The largest images we have run are 32,000 x 32,000 pixels in size, which are well beyond the largest previously reported in the literature. PMID:25598745
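
    A toy illustration of graph-cut segmentation with the preflow-push algorithm is given below, using networkx on a 2x2 'image'. It only shows how source/sink data terms and neighbour smoothness terms form the flow network; the massively multithreaded XMT-2 implementation is not reproduced, and the capacities are made up.

```python
# Tiny graph-cut segmentation illustration via preflow-push on a toy 2x2 image.
# Edge capacities are hypothetical; this is not the XMT-2 implementation.
import networkx as nx
from networkx.algorithms.flow import preflow_push

pixels = {(0, 0): 0.9, (0, 1): 0.8, (1, 0): 0.2, (1, 1): 0.1}  # toy brightness
G = nx.DiGraph()

# Smoothness term: arcs in both directions between 4-connected neighbours.
for (r, c) in pixels:
    for (nr, nc) in ((r + 1, c), (r, c + 1)):
        if (nr, nc) in pixels:
            G.add_edge((r, c), (nr, nc), capacity=1.0)
            G.add_edge((nr, nc), (r, c), capacity=1.0)

# Data term: bright pixels attach strongly to the source, dark ones to the sink.
for p, b in pixels.items():
    G.add_edge("s", p, capacity=b)
    G.add_edge(p, "t", capacity=1.0 - b)

flow_value, _ = nx.maximum_flow(G, "s", "t", flow_func=preflow_push)
print("max-flow (= min-cut) value:", flow_value)
```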

  14. Techno-economic comparison of series hybrid, plug-in hybrid, fuel cell and regular cars

    NASA Astrophysics Data System (ADS)

    van Vliet, Oscar P. R.; Kruithof, Thomas; Turkenburg, Wim C.; Faaij, André P. C.

    We examine the competitiveness of series hybrid compared to fuel cell, parallel hybrid, and regular cars. We use public domain data to determine efficiency, fuel consumption, total costs of ownership and greenhouse gas emissions resulting from drivetrain choices. The series hybrid drivetrain can be seen both as an alternative to petrol, diesel and parallel hybrid cars, and as an intermediate stage towards fully electric or fuel cell cars. We calculate the fuel consumption and costs of four diesel-fuelled series hybrid, four plug-in hybrid and four fuel cell car configurations, and compare these to three reference cars. We find that series hybrid cars may reduce fuel consumption by 34-47%, but cost €5000-12,000 more. Well-to-wheel greenhouse gas emissions may be reduced to 89-103 g CO2/km compared to reference petrol (163 g/km) and diesel cars (156 g/km). Series hybrid cars with wheel motors have lower weight and 7-21% lower fuel consumption than those with central electric motors. The fuel cell car remains uncompetitive even if production costs of fuel cells come down by 90%. Plug-in hybrid cars are competitive when driving large distances on electricity, and/or if the cost of batteries comes down substantially. Well-to-wheel greenhouse gas emissions may be reduced to 60-69 g CO2/km.

  15. Photovoltaic power generation system free of bypass diodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lentine, Anthony L.; Okandan, Murat; Nielson, Gregory N.

    A photovoltaic power generation system that includes a solar panel that is free of bypass diodes is described herein. The solar panel includes a plurality of photovoltaic sub-modules, wherein at least two of the photovoltaic sub-modules in the plurality of photovoltaic sub-modules are electrically connected in parallel. A photovoltaic sub-module includes a plurality of groups of electrically connected photovoltaic cells, wherein at least two of the groups are electrically connected in series. A photovoltaic group includes a plurality of strings of photovoltaic cells, wherein a string of photovoltaic cells comprises a plurality of photovoltaic cells electrically connected in series. The strings of photovoltaic cells are electrically connected in parallel, and the photovoltaic cells are microsystem-enabled photovoltaic cells.

  16. Solid-state energy storage module employing integrated interconnect board

    DOEpatents

    Rouillard, Jean; Comte, Christophe; Daigle, Dominik; Hagen, Ronald A.; Knudson, Orlin B.; Morin, Andre; Ranger, Michel; Ross, Guy; Rouillard, Roger; St-Germain, Philippe; Sudano, Anthony; Turgeon, Thomas A.

    2004-09-28

    An electrochemical energy storage device includes a number of solid-state thin-film electrochemical cells which are selectively interconnected in series or parallel through use of an integrated interconnect board. The interconnect board is typically disposed within a sealed housing which also houses the electrochemical cells, and includes a first contact and a second contact respectively coupled to first and second power terminals of the energy storage device. The interconnect board advantageously provides for selective series or parallel connectivity with the electrochemical cells, irrespective of electrochemical cell position within the housing. Fuses and various electrical and electro-mechanical devices, such as bypass, equalization, and communication devices for example, may also be mounted to the interconnect board and selectively connected to the electrochemical cells.

  17. Supporting Building Portfolio Investment and Policy Decision Making through an Integrated Building Utility Data Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, Azizan; Lasternas, Bertrand; Alschuler, Elena

    The American Recovery and Reinvestment Act stimulus funding of 2009 for smart grid projects resulted in the tripling of smart meter deployment. In 2012, the Green Button initiative provided utility customers with access to their real-time energy usage. The availability of finely granular data provides an enormous potential for energy data analytics and energy benchmarking. The sheer volume of time-series utility data from a large number of buildings also poses challenges in data collection, quality control, and database management for rigorous and meaningful analyses. In this paper, we will describe a building portfolio-level data analytics tool for operational optimization, business investment and policy assessment using 15-minute to monthly interval utility data. The analytics tool is developed on top of the U.S. Department of Energy's Standard Energy Efficiency Data (SEED) platform, an open source software application that manages energy performance data of large groups of buildings. To support the significantly large volume of granular interval data, we integrated a parallel time-series database with the existing relational database. The time-series database improves on the current utility data input, focusing on real-time data collection, storage, analytics and data quality control. The fully integrated data platform supports APIs for utility app development by third party software developers. These apps will provide actionable intelligence for building owners and facilities managers. Unlike a commercial system, this platform is an open source platform funded by the U.S. Government, accessible to the public, researchers and other developers, to support initiatives in reducing building energy consumption.

  18. Accelerating finite-rate chemical kinetics with coprocessors: Comparing vectorization methods on GPUs, MICs, and CPUs

    NASA Astrophysics Data System (ADS)

    Stone, Christopher P.; Alferman, Andrew T.; Niemeyer, Kyle E.

    2018-05-01

    Accurate and efficient methods for solving stiff ordinary differential equations (ODEs) are a critical component of turbulent combustion simulations with finite-rate chemistry. The ODEs governing the chemical kinetics at each mesh point are decoupled by operator-splitting allowing each to be solved concurrently. An efficient ODE solver must then take into account the available thread and instruction-level parallelism of the underlying hardware, especially on many-core coprocessors, as well as the numerical efficiency. A stiff Rosenbrock and a nonstiff Runge-Kutta ODE solver are both implemented using the single instruction, multiple thread (SIMT) and single instruction, multiple data (SIMD) paradigms within OpenCL. Both methods solve multiple ODEs concurrently within the same instruction stream. The performance of these parallel implementations was measured on three chemical kinetic models of increasing size across several multicore and many-core platforms. Two separate benchmarks were conducted to clearly determine any performance advantage offered by either method. The first benchmark measured the run-time of evaluating the right-hand-side source terms in parallel and the second benchmark integrated a series of constant-pressure, homogeneous reactors using the Rosenbrock and Runge-Kutta solvers. The right-hand-side evaluations with SIMD parallelism on the host multicore Xeon CPU and many-core Xeon Phi co-processor performed approximately three times faster than the baseline multithreaded C++ code. The SIMT parallel model on the host and Phi was 13%-35% slower than the baseline while the SIMT model on the NVIDIA Kepler GPU provided approximately the same performance as the SIMD model on the Phi. The runtimes for both ODE solvers decreased significantly with the SIMD implementations on the host CPU (2.5-2.7 ×) and Xeon Phi coprocessor (4.7-4.9 ×) compared to the baseline parallel code. The SIMT implementations on the GPU ran 1.5-1.6 times faster than the baseline multithreaded CPU code; however, this was significantly slower than the SIMD versions on the host CPU or the Xeon Phi. The performance difference between the three platforms was attributed to thread divergence caused by the adaptive step-sizes within the ODE integrators. Analysis showed that the wider vector width of the GPU incurs a higher level of divergence than the narrower Sandy Bridge or Xeon Phi. The significant performance improvement provided by the SIMD parallel strategy motivates further research into more ODE solver methods that are both SIMD-friendly and computationally efficient.
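
    As a loose analogy to the data-parallel (SIMD) strategy discussed above, the sketch below evaluates a toy source term and advances an explicit RK4 step for many independent reactors at once using NumPy array operations. The kinetics (a single reversible reaction) and rate constants are invented, and unlike the paper's solvers there is no adaptive step-size control.

```python
# Data-parallel evaluation of a toy source term for many independent reactors
# at once, plus a batched explicit RK4 step. This is only an analogy to the
# OpenCL SIMD/SIMT implementations discussed above; the reversible reaction
# A <-> B and its rate constants are made up.
import numpy as np

def rhs(y, kf=5.0, kr=1.0):
    """Source terms for A <-> B for a batch of reactors, y shape (n, 2)."""
    a, b = y[:, 0], y[:, 1]
    r = kf * a - kr * b
    return np.stack([-r, r], axis=1)

def rk4_step(y, dt):
    """One explicit RK4 step applied to the whole batch simultaneously."""
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.column_stack([np.linspace(0.1, 1.0, 1024), np.zeros(1024)])  # 1024 reactors
for _ in range(100):
    y = rk4_step(y, dt=1e-3)
print("mean A after integration: %.4f" % y[:, 0].mean())
```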

  19. Electrical Circuits and Water Analogies

    ERIC Educational Resources Information Center

    Smith, Frederick A.; Wilson, Jerry D.

    1974-01-01

    Briefly describes water analogies for electrical circuits and presents plans for the construction of apparatus to demonstrate these analogies. Demonstrations include series circuits, parallel circuits, and capacitors. (GS)

  20. Parallel programming of industrial applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heroux, M; Koniges, A; Simon, H

    1998-07-21

    In the introductory material, we overview the typical MPP environment for real application computing and the special tools available, such as parallel debuggers and performance analyzers. Next, we draw from a series of real applications codes and discuss the specific challenges and problems that are encountered in parallelizing these individual applications. The application areas drawn from include biomedical sciences, materials processing and design, plasma and fluid dynamics, and others. We show how it was possible to get a particular application to run efficiently and what steps were necessary. Finally we end with a summary of the lessons learned from these applications and predictions for the future of industrial parallel computing. This tutorial is based on material from a forthcoming book entitled: "Industrial Strength Parallel Computing" to be published by Morgan Kaufmann Publishers (ISBN 1-55860-54).

  1. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
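
    A minimal sketch contrasting two of the named strategies, full locking and full replication, for a histogram-style reduction is shown below. It is not the authors' runtime system, and in CPython the global interpreter lock serializes the threads, so it illustrates the strategies rather than their performance.

```python
# Sketch of "full locking" (one lock per reduction element) versus
# "full replication" (one private copy per thread, merged at the end) for a
# histogram-style reduction. Illustrative only; not the authors' runtime system.
import threading
from collections import Counter

data = [i % 16 for i in range(100_000)]
n_threads = 4
chunks = [data[i::n_threads] for i in range(n_threads)]

# Full locking: all threads update one shared table under per-bin locks.
shared = Counter()
locks = {b: threading.Lock() for b in range(16)}
def work_locked(chunk):
    for x in chunk:
        with locks[x]:
            shared[x] += 1

# Full replication: each thread fills a private table; copies are merged once.
replicas = [Counter() for _ in range(n_threads)]
def work_replicated(chunk, local):
    for x in chunk:
        local[x] += 1

for target, args in [(work_locked, [(c,) for c in chunks]),
                     (work_replicated, list(zip(chunks, replicas)))]:
    threads = [threading.Thread(target=target, args=a) for a in args]
    for t in threads: t.start()
    for t in threads: t.join()

merged = sum(replicas, Counter())
print(shared == merged)   # both strategies produce the same reduction result
```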

  2. Quinine-induced tinnitus in rats.

    PubMed

    Jastreboff, P J; Brennan, J F; Sasaki, C T

    1991-10-01

    Quinine ingestion reportedly induces tinnitus in humans. To expand our salicylate-based animal model of tinnitus, a series of conditioned suppression experiments was performed on 54 male pigmented rats using quinine injections to induce tinnitus. Quinine induced changes in both the extent of suppression and the recovery of licking, which followed a pattern that paralleled those produced after salicylate injections, and which may be interpreted as the result of tinnitus perception in animals. These changes depended on the dose and time schedule of quinine administration. Additionally, the calcium channel blocker nimodipine abolished the quinine-induced effect in a dose-dependent manner.

  3. Parallel medicinal chemistry approaches to selective HDAC1/HDAC2 inhibitor (SHI-1:2) optimization.

    PubMed

    Kattar, Solomon D; Surdi, Laura M; Zabierek, Anna; Methot, Joey L; Middleton, Richard E; Hughes, Bethany; Szewczak, Alexander A; Dahlberg, William K; Kral, Astrid M; Ozerova, Nicole; Fleming, Judith C; Wang, Hongmei; Secrist, Paul; Harsch, Andreas; Hamill, Julie E; Cruz, Jonathan C; Kenific, Candia M; Chenard, Melissa; Miller, Thomas A; Berk, Scott C; Tempest, Paul

    2009-02-15

    The successful application of both solid and solution phase library synthesis, combined with tight integration into the medicinal chemistry effort, resulted in the efficient optimization of a novel structural series of selective HDAC1/HDAC2 inhibitors by the MRL-Boston Parallel Medicinal Chemistry group. An initial lead from a small parallel library was found to be potent and selective in biochemical assays. Advanced compounds were the culmination of iterative library design and possess excellent biochemical and cellular potency, as well as acceptable PK and efficacy in animal models.

  4. A generic concept to overcome bandgap limitations for designing highly efficient multi-junction photovoltaic cells

    PubMed Central

    Guo, Fei; Li, Ning; Fecher, Frank W.; Gasparini, Nicola; Quiroz, Cesar Omar Ramirez; Bronnbauer, Carina; Hou, Yi; Radmilović, Vuk V.; Radmilović, Velimir R.; Spiecker, Erdmann; Forberich, Karen; Brabec, Christoph J.

    2015-01-01

    The multi-junction concept is the most relevant approach to overcome the Shockley–Queisser limit for single-junction photovoltaic cells. The record efficiencies of several types of solar technologies are held by series-connected tandem configurations. However, the stringent current-matching criterion presents primarily a material challenge and permanently requires developing and processing novel semiconductors with desired bandgaps and thicknesses. Here we report a generic concept to alleviate this limitation. By integrating series- and parallel-interconnections into a triple-junction configuration, we find significantly relaxed material selection and current-matching constraints. To illustrate the versatile applicability of the proposed triple-junction concept, organic and organic-inorganic hybrid triple-junction solar cells are constructed by printing methods. High fill factors up to 68% without resistive losses are achieved for both organic and hybrid triple-junction devices. Series/parallel triple-junction cells with organic, as well as perovskite-based subcells may become a key technology to further advance the efficiency roadmap of the existing photovoltaic technologies. PMID:26177808
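
    A back-of-the-envelope sketch of why series connection imposes current matching while parallel connection imposes voltage matching is given below. The subcell operating points are hypothetical placeholders, and real cells would be combined through their full I-V curves rather than single points.

```python
# Back-of-the-envelope illustration of current matching (series connection)
# versus voltage matching (parallel connection) for stacked subcells.
# Subcell operating points are hypothetical placeholder values.
subcells = [
    {"name": "wide-gap",   "j_mp": 9.0,  "v_mp": 1.45},   # mA/cm2, V
    {"name": "mid-gap",    "j_mp": 12.0, "v_mp": 0.75},
    {"name": "narrow-gap", "j_mp": 14.0, "v_mp": 0.55},
]

# Two-terminal series stack: the smallest subcell current limits the string.
j_series = min(c["j_mp"] for c in subcells)
v_series = sum(c["v_mp"] for c in subcells)

# Idealized parallel connection: currents add, but all cells must share one
# voltage (crudely taken here as the lowest v_mp).
v_parallel = min(c["v_mp"] for c in subcells)
j_parallel = sum(c["j_mp"] for c in subcells)

print(f"series:   {j_series * v_series:.1f} mW/cm2 (current-matching limited)")
print(f"parallel: {j_parallel * v_parallel:.1f} mW/cm2 (voltage-matching limited)")
```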

  5. A generic concept to overcome bandgap limitations for designing highly efficient multi-junction photovoltaic cells.

    PubMed

    Guo, Fei; Li, Ning; Fecher, Frank W; Gasparini, Nicola; Ramirez Quiroz, Cesar Omar; Bronnbauer, Carina; Hou, Yi; Radmilović, Vuk V; Radmilović, Velimir R; Spiecker, Erdmann; Forberich, Karen; Brabec, Christoph J

    2015-07-16

    The multi-junction concept is the most relevant approach to overcome the Shockley-Queisser limit for single-junction photovoltaic cells. The record efficiencies of several types of solar technologies are held by series-connected tandem configurations. However, the stringent current-matching criterion presents primarily a material challenge and permanently requires developing and processing novel semiconductors with desired bandgaps and thicknesses. Here we report a generic concept to alleviate this limitation. By integrating series- and parallel-interconnections into a triple-junction configuration, we find significantly relaxed material selection and current-matching constraints. To illustrate the versatile applicability of the proposed triple-junction concept, organic and organic-inorganic hybrid triple-junction solar cells are constructed by printing methods. High fill factors up to 68% without resistive losses are achieved for both organic and hybrid triple-junction devices. Series/parallel triple-junction cells with organic, as well as perovskite-based subcells may become a key technology to further advance the efficiency roadmap of the existing photovoltaic technologies.

  6. Hazards Due to Overdischarge in Lithium-ion Cylindrical Cells in Multi-cell Configurations

    NASA Technical Reports Server (NTRS)

    Jeevarajan, Judith; Strangways, Brad; Nelson, Tim

    2010-01-01

    Lithium-ion cells in the cylindrical commercial-off-the-shelf 18650 design format were used to study the hazards associated with overdischarge. The cells, in series or in parallel configurations, were subjected to different conditions of overdischarge. The cells in parallel configurations were all overdischarged to 2.0 V for 75 cycles, with one cell removed at 25 cycles to study the health of the cell. The cells in series were designed to be in an unbalanced configuration by discharging one cell in each series configuration before the start of the test. The discharge consisted of removing a pre-determined capacity from the cell, ranging from 50 to 150 mAh. The cells were discharged down to a predetermined end-of-discharge voltage cutoff, which allowed the cell with lower capacity to go into an overdischarge mode. The cell modules that survived the 75 cycles were subjected to one overvoltage test to 4.4 V/cell.

  7. Solar array construction

    NASA Technical Reports Server (NTRS)

    Crouthamel, Marvin S. (Inventor); Coyle, Peter J. (Inventor)

    1982-01-01

    An interconnect tab on each cell of a first set of circular solar cells connects that cell in series with an adjacent cell in the set. This set of cells is arranged in alternate columns and rows of an array and a second set of similar cells is arranged in the remaining alternate columns and rows of the array. Three interconnect tabs on each solar cell of the said second set are employed to connect the cells of the second set to one another, in series and to connect the cells of the second set to those of the first set in parallel. Some tabs (making parallel connections) connect the same surface regions of adjacent cells to one another and others (making series connections) connect a surface region of one cell to the opposite surface region of an adjacent cell; however, the tabs are so positioned that the array may be easily assembled by depositing the cells in a certain sequence and in proper orientation.

  8. Preliminary Observing System Simulation Experiments for Doppler Wind Lidars Deployed on the International Space Station

    NASA Technical Reports Server (NTRS)

    Kemp, E.; Jacob, J.; Rosenberg, R.; Jusem, J. C.; Emmitt, G. D.; Wood, S.; Greco, L. P.; Riishojgaard, L. P.; Masutani, M.; Ma, Z.

    2013-01-01

    NASA Goddard Space Flight Center's Software Systems Support Office (SSSO) is participating in a multi-agency study of the impact of assimilating Doppler wind lidar observations on numerical weather prediction. Funded by NASA's Earth Science Technology Office, SSSO has worked with Simpson Weather Associates to produce time series of synthetic lidar observations mimicking the OAWL and WISSCR lidar instruments deployed on the International Space Station. In addition, SSSO has worked to assimilate a portion of these observations (those drawn from the NASA fvGCM Nature Run) into the NASA GEOS-DAS global weather prediction system in a series of Observing System Simulation Experiments (OSSEs). These OSSEs will complement parallel OSSEs prepared by the Joint Center for Satellite Data Assimilation and by NOAA's Atlantic Oceanographic and Meteorological Laboratory. In this talk, we will describe our procedure and provide available OSSE results.

  9. A hybrid Jaya algorithm for reliability-redundancy allocation problems

    NASA Astrophysics Data System (ADS)

    Ghavidel, Sahand; Azizivahed, Ali; Li, Li

    2018-04-01

    This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on the standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then tested on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence for its better and acceptable optimization performance compared to the original Jaya algorithm and other reported optimal results.
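
    For reference, the basic Jaya update rule (not the LJaya-TVAC variant proposed in the paper) is sketched below on the sphere test function: each candidate moves toward the current best solution and away from the current worst.

```python
# Minimal implementation of the basic Jaya update rule (not the paper's
# LJaya-TVAC variant) on the sphere test function.
import numpy as np

def jaya(obj, dim=10, pop=20, iters=500, lo=-100.0, hi=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(pop, dim))
    f = np.apply_along_axis(obj, 1, X)
    for _ in range(iters):
        best, worst = X[f.argmin()], X[f.argmax()]
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        # Jaya rule: move toward the best solution and away from the worst.
        Xnew = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)), lo, hi)
        fnew = np.apply_along_axis(obj, 1, Xnew)
        improved = fnew < f
        X[improved], f[improved] = Xnew[improved], fnew[improved]
    return X[f.argmin()], f.min()

x_best, f_best = jaya(lambda x: np.sum(x ** 2))   # sphere function
print("best objective found: %.3e" % f_best)
```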

  10. Evaluation of slice accelerations using multiband echo planar imaging at 3 Tesla

    PubMed Central

    Xu, Junqian; Moeller, Steen; Auerbach, Edward J.; Strupp, John; Smith, Stephen M.; Feinberg, David A.; Yacoub, Essa; Uğurbil, Kâmil

    2013-01-01

    We evaluate residual aliasing among simultaneously excited and acquired slices in slice accelerated multiband (MB) echo planar imaging (EPI). No in-plane accelerations were used in order to maximize and evaluate achievable slice acceleration factors at 3 Tesla. We propose a novel leakage (L-) factor to quantify the effects of signal leakage between simultaneously acquired slices. With a standard 32-channel receiver coil at 3 Tesla, we demonstrate that slice acceleration factors of up to eight (MB = 8) with blipped controlled aliasing in parallel imaging (CAIPI), in the absence of in-plane accelerations, can be used routinely with acceptable image quality and integrity for whole brain imaging. Spectral analyses of single-shot fMRI time series demonstrate that temporal fluctuations due to both neuronal and physiological sources were distinguishable and comparable up to slice-acceleration factors of nine (MB = 9). The increased temporal efficiency could be employed to achieve, within a given acquisition period, higher spatial resolution, increased fMRI statistical power, multiple TEs, faster sampling of temporal events in a resting state fMRI time series, increased sampling of q-space in diffusion imaging, or more quiet time during a scan. PMID:23899722

  11. Photovoltaic cell array

    NASA Technical Reports Server (NTRS)

    Eliason, J. T. (Inventor)

    1976-01-01

    A photovoltaic cell array consisting of parallel columns of silicon filaments is described. Each fiber is doped to produce an inner region of one polarity type and an outer region of an opposite polarity type, thereby forming a continuous radial semiconductor junction. Spaced rows of electrical contacts alternately connect to the inner and outer regions to provide a plurality of electrical outputs which may be combined in parallel or in series.

  12. Military Curricula for Vocational & Technical Education. Basic Electricity and Electronics Individualized Learning System. CANTRAC A-100-0010. Module Six: Parallel Circuits. Study Booklet.

    ERIC Educational Resources Information Center

    Chief of Naval Education and Training Support, Pensacola, FL.

    This individualized learning module on parallel circuits is one in a series of modules for a course in basic electricity and electronics. The course is one of a number of military-developed curriculum packages selected for adaptation to vocational instructional and curriculum development in a civilian setting. Four lessons are included in the…

  13. [Activities of Bay Area Research Corporation

    NASA Technical Reports Server (NTRS)

    2003-01-01

    During the final year of this effort the HALFSHEL code was converted to work on a fast single-processor workstation from its parallel configuration. This was done because the NASA Ames NAS facility stopped supporting space science and we no longer had access to parallel computer time. The single-processor version of HALFSHEL was upgraded to address low density cells by using a 3-D SOR solver to solve the equation ∇·E = 0. We then upgraded the ionospheric load packages to provide a multiple-species load of the ionosphere out to 1.4 Rm. With these new tools we began to perform a series of simulations to address the major topic of this research effort: determining the loss rate of O+ and O2+ from Mars. The simulations used the nominal Parker spiral field and in one case used a field perpendicular to the solar wind flow. The simulations were performed for three different solar EUV fluxes consistent with the different solar evolutionary states believed to exist before today. The 1 EUV case is the nominal flux of today. The 3 EUV flux is called Epoch 2 and has three times the flux of today. The 6 EUV case is Epoch 3 and has 6 times the EUV flux of today.
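
    A generic 3-D successive over-relaxation (SOR) sketch for a Laplace-type solve is given below (with E = -grad(phi), the condition ∇·E = 0 becomes Laplacian(phi) = 0). It only illustrates the numerical technique mentioned above; it is not the HALFSHEL code, and the grid size and boundary values are arbitrary.

```python
# Generic 3-D SOR sketch for a Laplace-type solve; not the HALFSHEL code.
# Boundary values are arbitrary Dirichlet conditions for illustration.
import numpy as np

def sor_laplace_3d(phi, omega=1.8, n_iter=200):
    """Relax interior points of phi toward a solution of Laplacian(phi) = 0,
    keeping the boundary values fixed."""
    for _ in range(n_iter):
        for i in range(1, phi.shape[0] - 1):
            for j in range(1, phi.shape[1] - 1):
                for k in range(1, phi.shape[2] - 1):
                    gs = (phi[i+1, j, k] + phi[i-1, j, k] +
                          phi[i, j+1, k] + phi[i, j-1, k] +
                          phi[i, j, k+1] + phi[i, j, k-1]) / 6.0
                    phi[i, j, k] += omega * (gs - phi[i, j, k])
    return phi

phi = np.zeros((16, 16, 16))
phi[0, :, :] = 1.0                      # hypothetical boundary condition
phi = sor_laplace_3d(phi)
print("centre value after relaxation: %.3f" % phi[8, 8, 8])
```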

  14. Role of poloidal flows on the particle confinement time in a simple toroidal device : an experimental study

    NASA Astrophysics Data System (ADS)

    Kumar, Umesh; Ganesh, R.; Saxena, Y. C.; Thatipamula, Shekar G.; Sathyanarayana, K.; Raju, Daniel

    2017-10-01

    Magnetized toroidal devices without rotational transform are also known as Simple Magnetized Tori (SMT). The device BETA at IPR is one such SMT, with a major radius of 45 cm, minor radius of 15 cm and a maximum toroidal field of 0.1 Tesla. Understanding confinement in such helical configurations is an important problem both for fundamental plasma physics and for Tokamak edge physics. In a recent series of experiments it was demonstrated that the mean plasma profiles, fluctuations, flows and turbulence depend crucially on the parallel connection length, which was controlled by an external vertical field. In the present work, we report our experimental findings, wherein we measure the particle confinement time for hot cathode and ECRH discharges as the parallel connection length is varied. ECRH plasmas do not have a mean electric field, and hence poloidal rotation of the plasma is absent. In hot cathode discharges, however, strong poloidal flows exist due to the mean electric field. An experimental comparison of these, along with a theoretical model, for varying connection length will be presented. We also present experimental measurements of the variation of plasma confinement time with mass as well as with the ratio of vertical to toroidal magnetic field.

  15. FX-87 performance measurements: data-flow implementation. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammel, R.T.; Gifford, D.K.

    1988-11-01

    This report documents a series of experiments performed to explore the thesis that the FX-87 effect system permits a compiler to schedule imperative programs (i.e., programs that may contain side-effects) for execution on a parallel computer. The authors analyze how much the FX-87 static effect system can improve the execution times of five benchmark programs on a parallel graph interpreter. Three of their benchmark programs do not use side-effects (factorial, fibonacci, and polynomial division) and thus did not have any effect-induced constraints. Their FX-87 performance was comparable to their performance in a purely functional language. Two of the benchmark programs use side effects (DNA sequence matching and Scheme interpretation) and the compiler was able to use effect information to reduce their execution times by factors of 1.7 to 5.4 when compared with sequential execution times. These results support the thesis that a static effect system is a powerful tool for compilation to multiprocessor computers. However, the graph interpreter we used was based on unrealistic assumptions, and thus our results may not accurately reflect the performance of a practical FX-87 implementation. The results also suggest that conventional loop analysis would complement the FX-87 effect system.

  16. Bookshelf faulting and transform motion between rift segments of the Northern Volcanic Zone, Iceland

    NASA Astrophysics Data System (ADS)

    Green, R. G.; White, R. S.; Greenfield, T. S.

    2013-12-01

    Plate spreading is segmented on length scales from 10 - 1,000 kilometres. Where spreading segments are offset, extensional motion has to transfer from one segment to another. In classical plate tectonics, mid-ocean ridge spreading centres are offset by transform faults, but smaller 'non-transform' offsets exist between slightly overlapping spreading centres which accommodate shear by a variety of geometries. In Iceland the mid-Atlantic Ridge is raised above sea level by the Iceland mantle plume, and is divided into a series of segments 20-150 km long. Using microseismicity recorded by a temporary array of 26 three-component seismometers during 2009-2012 we map bookshelf faulting between the offset Askja and Kverkfjöll rift segments in north Iceland. The micro-earthquakes delineate a series of sub-parallel strike-slip faults. Well constrained fault plane solutions show consistent left-lateral motion on fault planes aligned closely with epicentral trends. The shear couple across the transform zone causes left-lateral slip on the series of strike-slip faults sub-parallel to the rift fabric, causing clockwise rotations about a vertical axis of the intervening rigid crustal blocks. This accommodates the overall right-lateral transform motion in the relay zone between the two overlapping volcanic rift segments. The faults probably reactivated crustal weaknesses along the dyke intrusion fabric (parallel to the rift axis) and have since rotated ˜15° clockwise into their present orientation. The reactivation of pre-existing rift-parallel weaknesses is in contrast with mid-ocean ridge transform faults, and is an important illustration of a 'non-transform' offset accommodating shear between overlapping spreading segments.

  17. Analysis of Parallel Burn, No-Crossfeed TSTO RLV Architectures and Comparison to Parallel Burn with Crossfeed and Series Burn Architectures

    NASA Technical Reports Server (NTRS)

    Smith, Garrett; Philips, Alan

    2003-01-01

    Three dominant Two Stage To Orbit (TSTO) class architectures were studied: Series Burn (SB), Parallel Burn with crossfeed (PBw/cf), and Parallel Burn, no-crossfeed (PBncf). The study goal was to determine what factors uniquely affect PBncf architectures, how each of these factors interact, and to determine from a performance perspective whether a PBncf vehicle could be competitive with a PBw/cf or a SB vehicle using equivalent technology and assumptions. In all cases, performance was evaluated on a relative basis for a fixed payload and mission by comparing gross and dry vehicle masses of a closed vehicle. Propellant combinations studied were LOX: LH2 propelled booster and orbiter (HH) and LOX: Kerosene booster with LOX: LH2 orbiter (KH). The study observations were: 1) A PBncf orbiter should be throttled as deeply as possible after launch until the staging point. 2) A PBncf TSTO architecture is feasible for systems that stage at Mach 7. 2a) HH architectures can achieve a mass growth relative to PBw/cf of <20%. 2b) KH architectures can achieve a mass growth relative to Series Burn of <20%. 3) Center of gravity (CG) control will be a major issue for a PBncf vehicle, due to the low orbiter specific thrust to weight ratio and to the position of the orbiter required to align the nozzle heights at liftoff. 4) Thrust to weight ratios of 1.3 at liftoff and between 1.0 and 0.9 when staging at Mach 7 appear to be close to ideal for PBncf vehicles. 5) Performance for HH vehicles was better when staged at Mach 7 instead of Mach 5. The study suggests possible methods to maximize performance of PBncf vehicle architectures in order to meet mission design requirements.

  18. Nautical Charts: Another Dimension in Developing Map Skills. Instructional Activities Series IA/S-11.

    ERIC Educational Resources Information Center

    McCallum, W. F.; Botly, D. H.

    These activities are part of a series of 17 teacher-developed instructional activities for geography at the secondary-grade level described in SO 009 140. In the activities students develop map skills by learning about and using nautical charts. The first activity involves students in using parallel rulers and a compass rose to find their…

  19. Full Wave Analysis of RF Signal Attenuation in a Lossy Cave using a High Order Time Domain Vector Finite Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pingenot, J; Rieben, R; White, D

    2004-12-06

    We present a computational study of signal propagation and attenuation of a 200 MHz dipole antenna in a cave environment. The cave is modeled as a straight tunnel with lossy, randomly rough walls. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The simulation is performed for a series of random meshes in order to generate statistical data for the propagation and attenuation properties of the cave environment. Results for the power spectral density and phase of the electric field vector components are presented and discussed.

  20. Large Spatial Scale Ground Displacement Mapping through the P-SBAS Processing of Sentinel-1 Data on a Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Casu, F.; Bonano, M.; de Luca, C.; Lanari, R.; Manunta, M.; Manzo, M.; Zinno, I.

    2017-12-01

    Since its launch in 2014, the Sentinel-1 (S1) constellation has played a key role in SAR data availability and dissemination all over the World. Indeed, the free and open access data policy adopted by the European Copernicus program, together with the global coverage acquisition strategy, makes the Sentinel constellation a game changer in the Earth Observation scenario. As SAR data become ubiquitous, the technological and scientific challenge is focused on maximizing the exploitation of such a huge data flow. In this direction, the use of innovative processing algorithms and distributed computing infrastructures, such as Cloud Computing platforms, can play a crucial role. In this work we present a Cloud Computing solution for the advanced interferometric (DInSAR) processing chain based on the Parallel SBAS (P-SBAS) approach, aimed at processing S1 Interferometric Wide Swath (IWS) data for the generation of large spatial scale deformation time series in an efficient, automatic and systematic way. Such a DInSAR chain ingests Sentinel-1 SLC images and carries out several processing steps, to finally compute deformation time series and mean deformation velocity maps. Different parallel strategies have been designed ad hoc for each processing step of the P-SBAS S1 chain, encompassing both multi-core and multi-node programming techniques, in order to maximize the computational efficiency achieved within a Cloud Computing environment and cut down the relevant processing times. The presented P-SBAS S1 processing chain has been implemented on the Amazon Web Services platform and a thorough analysis of the attained parallel performances has been performed to identify and overcome the major bottlenecks to scalability. The presented approach is used to perform national-scale DInSAR analyses over Italy, involving the processing of more than 3000 S1 IWS images acquired from both ascending and descending orbits. Such an experiment confirms the big advantage of exploiting the large computational and storage resources of Cloud Computing platforms for large scale DInSAR analyses. The presented Cloud Computing P-SBAS processing chain can be a valuable tool for developing operational services, available to the EO scientific community, for hazard monitoring and risk prevention and mitigation.

  1. Crop classification and mapping based on Sentinel missions data in cloud environment

    NASA Astrophysics Data System (ADS)

    Lavreniuk, M. S.; Kussul, N.; Shelestov, A.; Vasiliev, V.

    2017-12-01

    The availability of high resolution satellite imagery (Sentinel-1/2/3, Landsat) over large territories opens new opportunities in agricultural monitoring. In particular, it becomes feasible to solve crop classification and crop mapping tasks at country and regional scale using time series of heterogeneous satellite imagery. But in this case we face the problem of Big Data. Dealing with time series of high resolution (10 m) multispectral imagery, we need to download huge volumes of data and then process them. The solution is to move the processing chain closer to the data itself to drastically shorten the time for data transfer. One more advantage of such an approach is the possibility to parallelize the data processing workflow and efficiently implement machine learning algorithms. This can be done with a cloud platform where the Sentinel imagery is stored. In this study, we investigate the usability and efficiency of two different cloud platforms, Amazon and Google, for crop classification and crop mapping problems. Two pilot areas were investigated - Ukraine and England. Google provides the user-friendly Google Earth Engine environment for Earth observation applications, with many data processing and machine learning tools already deployed. At the same time, with Amazon one gets much more flexibility in implementing one's own workflow. A detailed analysis of the pros and cons will be given in the presentation.

  2. Comparison of the analgesic efficacy of oral ketorolac versus intramuscular tramadol after third molar surgery: A parallel, double-blind, randomized, placebo-controlled clinical trial.

    PubMed

    Isiordia-Espinoza, M-A; Pozos-Guillen, A; Martinez-Rider, R; Perez-Urizar, J

    2016-09-01

    Preemptive analgesia is considered an alternative for treating the postsurgical pain of third molar removal. The aim of this study was to evaluate the preemptive analgesic efficacy of oral ketorolac versus intramuscular tramadol after mandibular third molar surgery. A parallel, double-blind, randomized, placebo-controlled clinical trial was carried out. Thirty patients were randomized into two treatment groups using a series of random numbers: Group A, oral ketorolac 10 mg plus intramuscular placebo (1 mL saline solution); or Group B, oral placebo (a tablet similar to oral ketorolac) plus intramuscular tramadol 50 mg diluted in 1 mL saline solution. These treatments were given 30 min before the surgery. We evaluated the time to first analgesic rescue medication, pain intensity, total analgesic consumption and adverse effects. Patients taking oral ketorolac had a longer period of analgesic coverage and less postoperative pain than patients receiving intramuscular tramadol. According to the VAS and UAC results, this study suggests that 10 mg of oral ketorolac had a superior analgesic effect to 50 mg of tramadol when administered before mandibular third molar surgery.

  3. Simplified and quick electrical modeling for dye sensitized solar cells: An experimental and theoretical investigation

    NASA Astrophysics Data System (ADS)

    de Andrade, Rocelito Lopes; de Oliveira, Matheus Costa; Kohlrausch, Emerson Cristofer; Santos, Marcos José Leite

    2018-05-01

    This work presents a new and simple method for determining IPH (the luminance-dependent current source), I0 (reverse saturation current), n (ideality factor), and RP and RS (parallel and series resistances) to build an electrical model for dye-sensitized solar cells (DSSCs). The electrical circuit parameters used in the simulation and to generate theoretical curves for the single-diode electrical model were extracted from I-V curves of assembled DSSCs. Model validation was performed by assembling five different types of DSSCs and evaluating the following parameters: the effect of a TiO2 blocking/adhesive layer, the thickness of the TiO2 layer and the presence of a light scattering layer. In addition, irradiance, temperature, series and parallel resistance, ideality factor and reverse saturation current were simulated.
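
    For orientation, the single-diode relation behind this record, I = IPH - I0[exp((V + I*RS)/(n*VT)) - 1] - (V + I*RS)/RP, can be evaluated numerically. The sketch below is a minimal illustration, assuming hypothetical parameter values (not taken from the paper) and using a generic root finder rather than the authors' extraction procedure:

        import numpy as np
        from scipy.optimize import brentq

        def cell_current(v, i_ph, i_0, n, r_s, r_p, t=298.15):
            """Solve I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rp for I."""
            vt = 1.380649e-23 * t / 1.602176634e-19      # thermal voltage, ~25.7 mV at 298 K
            f = lambda i: (i_ph - i_0 * (np.exp((v + i * r_s) / (n * vt)) - 1.0)
                           - (v + i * r_s) / r_p - i)
            return brentq(f, -1.0, i_ph + 0.5)           # bracket that safely contains the root

        # Hypothetical DSSC-like parameters, for illustration only
        voltages = np.linspace(0.0, 0.70, 36)
        iv_curve = [cell_current(v, i_ph=6e-3, i_0=1e-9, n=1.8, r_s=20.0, r_p=5e3)
                    for v in voltages]
        print(f"Short-circuit current ~ {iv_curve[0] * 1e3:.2f} mA")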

  4. Weddings, Electric Circuits, and the Corner Grocery Store

    NASA Astrophysics Data System (ADS)

    Fischer, Mark

    2001-10-01

    When discussing electric circuits in most physics and physical science courses, students often struggle with the rules for adding resistors wired in series and in parallel. Traditionally, these rules are motivated by analogies to water pumped through pipes, analogies that are at least as unfamiliar to most students as electricity itself. The activities presented here model the behavior of series and parallel electric circuits using wedding receiving lines and grocery store checkout lanes, respectively, two situations with which most students have had experience. The activity is easy to perform, can be done qualitatively or quantitatively, and can even be augmented to model more sophisticated circuits. Thus, the activity described is appropriate for basic physical science courses as well as majors courses and will engage students from middle school through college.
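
    The rules the activity models are simple enough to state alongside it: resistances add in series, while reciprocals add in parallel. A tiny sketch with arbitrary example values:

        def series_resistance(resistances):
            """Equivalent resistance of resistors in series: the plain sum."""
            return sum(resistances)

        def parallel_resistance(resistances):
            """Equivalent resistance in parallel: reciprocal of the sum of reciprocals."""
            return 1.0 / sum(1.0 / r for r in resistances)

        # Arbitrary values in ohms; in the checkout-lane analogy, each resistor is one
        # open lane, and adding parallel lanes lowers the overall "resistance" to flow.
        print(series_resistance([100, 220, 330]))    # 650.0
        print(parallel_resistance([100, 220, 330]))  # ~56.9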

  5. Solid-state energy storage module employing integrated interconnect board

    DOEpatents

    Rouillard, Jean; Comte, Christophe; Daigle, Dominik; Hagen, Ronald A.; Knudson, Orlin B.; Morin, Andre; Ranger, Michel; Ross, Guy; Rouillard, Roger; St-Germain, Philippe; Sudano, Anthony; Turgeon, Thomas A.

    2003-11-04

    The present invention is directed to an improved electrochemical energy storage device. The electrochemical energy storage device includes a number of solid-state, thin-film electrochemical cells which are selectively interconnected in series or parallel through use of an integrated interconnect board. The interconnect board is typically disposed within a sealed housing which also houses the electrochemical cells, and includes a first contact and a second contact respectively coupled to first and second power terminals of the energy storage device. The interconnect board advantageously provides for selective series or parallel connectivity with the electrochemical cells, irrespective of electrochemical cell position within the housing. Fuses and various electrical and electromechanical devices, such as bypass, equalization, and communication devices, may also be mounted to the interconnect board and selectively connected to the electrochemical cells.

  6. FEM analysis of a single stator dual PM rotors axial synchronous machine

    NASA Astrophysics Data System (ADS)

    Tutelea, L. N.; Deaconu, S. I.; Popa, G. N.

    2017-01-01

    The current e-continuously variable transmission (e-CVT) solution for the parallel Hybrid Electric Vehicle (HEV) requires two electric machines, two inverters, and a planetary gear. A distinct electric generator and a propulsion electric motor, both with full power converters, are typical for a series HEV. In an effort to simplify the planetary-geared e-CVT for the parallel HEV or the series HEV, we propose to replace the two electric machines and their two power converters with a single axial-air-gap electric machine having a central stator, fed from a single PWM converter with dual-frequency voltage output, and two independent PM rotors. The proposed topologies, the magneto-motive force analysis and the quasi-3D FEM analysis are the core of the paper.

  7. Comparing multi-module connections in membrane chromatography scale-up.

    PubMed

    Yu, Zhou; Karkaria, Tishtar; Espina, Marianela; Hunjun, Manjeet; Surendran, Abera; Luu, Tina; Telychko, Julia; Yang, Yan-Ping

    2015-07-20

    Membrane chromatography is increasingly used for protein purification in the biopharmaceutical industry. Membrane adsorbers are often pre-assembled by manufacturers as ready-to-use modules. In large-scale protein manufacturing settings, the use of multiple membrane modules for a single batch is often required due to the large quantity of feed material. The question as to how multiple modules can be connected to achieve optimum separation and productivity has been previously approached using model proteins and mass transport theories. In this study, we compare the performance of multiple membrane modules in series and in parallel in the production of a protein antigen. Series connection was shown to provide superior separation compared to parallel connection in the context of competitive adsorption. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Automated Solar Module Assembly Line

    NASA Technical Reports Server (NTRS)

    Bycer, M.

    1979-01-01

    The gathering of information that led to the design approach of the machine is discussed, along with a summary of the findings in the areas of study and a description of each station of the machine. The machine is a cell stringing and string applique machine which is flexible in design, capable of handling a variety of cells and assembling strings of cells which can then be placed in a matrix up to 4 ft x 2 ft in series or parallel arrangement. The target machine cycle is 5 seconds per cell. This machine is primarily adapted to 100 mm round cells with one or two tabs between cells. It places finished strings of up to twelve cells in a matrix of up to six such strings arranged in series or in parallel.

  9. Device for balancing parallel strings

    DOEpatents

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  10. Effects of Wii balance board exercises on balance after posterior cruciate ligament reconstruction.

    PubMed

    Puh, Urška; Majcen, Nia; Hlebš, Sonja; Rugelj, Darja

    2014-05-01

    To establish the effects of training on a Wii balance board (WBB) on balance after posterior cruciate ligament (PCL) reconstruction. The included patient had injured her posterior cruciate ligament 22 months prior to the study. Training on the WBB was performed for 4 weeks, 6 times per week, 30-45 min per day. Center of pressure (CoP) sway during parallel and one-leg stance, and body weight distribution in parallel stance, were measured. Additionally, measurements of joint range of motion and limb circumferences were taken before and after training. After training, the body weight was almost equally distributed on both legs. The decrease in CoP sway was most pronounced for one-leg stance on each leg on a compliant surface with eyes open and closed. The knee joint range of motion increased and limb circumferences decreased. According to the results of this single case report, we might recommend the use of the WBB for balance training after PCL reconstruction. Case series with no comparison group, Level IV.

  11. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    PubMed

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.

  12. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    PubMed Central

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes— neural connectivity maps of the brain—using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems—reads to parallel disk arrays and writes to solid-state storage—to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992

  13. Characterization of wastewater treatment by two microbial fuel cells in continuous flow operation.

    PubMed

    Kubota, Keiichi; Watanabe, Tomohide; Yamaguchi, Takashi; Syutsubo, Kazuaki

    2016-01-01

    Two serially connected single-chamber microbial fuel cells (MFCs) were applied to the treatment of diluted molasses wastewater in a continuous operation mode. In addition, the effect of series and parallel connection between the anodes and the cathode on power generation was investigated experimentally. The two serially connected MFC process achieved 79.8% chemical oxygen demand removal and 11.6% Coulombic efficiency when the hydraulic retention time of the whole process was 26 h. The power densities were 0.54, 0.34 and 0.40 W m(-3) when the electrodes were in individual, series and parallel connection modes, respectively. A high open circuit voltage was obtained in the series connection. Power density decreased at low organic loading rates (OLRs) due to the shortage of organic matter. Power generation efficiency tended to decrease as a result of the enhancement of methane fermentation at high OLRs. Therefore, high power density and efficiency can be achieved by using a suitable OLR range.

  14. A superconducting direct-current limiter with a power of up to 8 MVA

    NASA Astrophysics Data System (ADS)

    Fisher, L. M.; Alferov, D. F.; Akhmetgareev, M. R.; Budovskii, A. I.; Evsin, D. V.; Voloshin, I. F.; Kalinov, A. V.

    2016-12-01

    A resistive switching superconducting fault current limiter (SFCL) for DC networks with a nominal voltage of 3.5 kV and a nominal current of 2 kA was developed, produced, and tested. The SFCL has two main units—an assembly of superconducting modules and a high-speed vacuum circuit breaker. The assembly of superconducting modules consists of nine (3 × 3) parallel-series connected modules. Each module contains four parallel-connected 2G high-temperature superconducting (HTS) tapes. The results of SFCL tests in the short-circuit emulation mode with a maximum current rise rate of 1300 A/ms are presented. The SFCL is capable of limiting the current at a level of 7 kA and breaking it 8 ms after the current-limiting mode begins. The average temperature of HTS tapes during the current-limiting mode increases to 210 K. After the current is interrupted, the superconductivity recovery time does not exceed 1 s.

  15. Opus: A Coordination Language for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Haines, Matthew; Mehrotra, Piyush; Zima, Hans; vanRosendale, John

    1997-01-01

    Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.

  16. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Mattmann, C. A.; Waliser, D. E.; Kim, J.; Loikith, P.; Lee, H.; McGibbney, L. J.; Whitehall, K. D.

    2014-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive size locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache Spark. Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed; it therefore outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk, and makes iterative algorithms feasible. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 100 to 1000 compute nodes. This second-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning (ML) based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. The goals of SciSpark are to: (1) decrease the time to compute comparison statistics and plots from minutes to seconds; (2) allow for interactive exploration of time-series properties over seasons and years; (3) decrease the time for satellite data ingestion into RCMES to hours; (4) allow for Level-2 comparisons with higher-order statistics or PDFs in minutes to hours; and (5) move RCMES into a near-real-time decision-making platform. We will report on: the architecture and design of SciSpark, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning (sharding) of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDFs, and first metrics quantifying parallel speedups and memory and disk usage.
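
    The in-memory map-reduce style that SciSpark builds on can be illustrated with a toy PySpark job. This is a generic Spark sketch, not SciSpark's actual API; the synthetic grids, their 90 x 180 shape, and the spatial-mean metric are assumptions made purely for illustration:

        from pyspark import SparkContext
        import numpy as np

        sc = SparkContext(appName="toy-climate-metric")   # assumes a local or cluster Spark install

        # Hypothetical input: one (time index, 2-D temperature grid) pair per time step.
        time_steps = [(t, np.random.rand(90, 180).astype("float32")) for t in range(365)]
        rdd = sc.parallelize(time_steps, numSlices=8)

        # Map: reduce each grid to the statistic of interest (here, its spatial mean).
        # Reduce: combine the per-time-step statistics in memory, without touching disk.
        count = rdd.count()
        annual_mean = rdd.map(lambda kv: float(kv[1].mean())) \
                         .reduce(lambda a, b: a + b) / count

        print(f"Grid-average value over {count} steps: {annual_mean:.3f}")
        sc.stop()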

  17. Solution-processed parallel tandem polymer solar cells using silver nanowires as intermediate electrode.

    PubMed

    Guo, Fei; Kubis, Peter; Li, Ning; Przybilla, Thomas; Matt, Gebhard; Stubhan, Tobias; Ameri, Tayebeh; Butz, Benjamin; Spiecker, Erdmann; Forberich, Karen; Brabec, Christoph J

    2014-12-23

    Tandem architecture is the most relevant concept to overcome the efficiency limit of single-junction photovoltaic solar cells. Series-connected tandem polymer solar cells (PSCs) have advanced rapidly during the past decade. In contrast, the development of parallel-connected tandem cells is lagging far behind due to the big challenge in establishing an efficient interlayer with high transparency and high in-plane conductivity. Here, we report all-solution fabrication of parallel tandem PSCs using silver nanowires as intermediate charge collecting electrode. Through a rational interface design, a robust interlayer is established, enabling the efficient extraction and transport of electrons from subcells. The resulting parallel tandem cells exhibit high fill factors of ∼60% and enhanced current densities which are identical to the sum of the current densities of the subcells. These results suggest that solution-processed parallel tandem configuration provides an alternative avenue toward high performance photovoltaic devices.

  18. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from the unified frameworks of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, appropriate preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercial" and/or "public domain" software are also included, whenever possible.

  19. A multi-pixel InSAR time series analysis method: Simultaneous estimation of atmospheric noise, orbital errors and deformation

    NASA Astrophysics Data System (ADS)

    Jolivet, R.; Simons, M.

    2016-12-01

    InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of the interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between two SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume the noise is isotropic for any given InSAR pair, with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as the model covariance. Given the problem size, we avoid matrix multiplications of the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances. We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields of the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.
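
    The Fourier-domain step mentioned above, applying an isotropic exponential covariance to a gridded field without forming the dense covariance matrix, can be sketched as a zero-padded FFT convolution. The grid size, correlation length and variance below are illustrative assumptions, not values from the study, and this is only one way to realize the idea:

        import numpy as np

        def apply_exponential_covariance(field, sigma2, corr_len, pixel_size=1.0):
            """Compute C @ vec(field) for C(d) = sigma2 * exp(-d / corr_len)
            via FFT-based convolution instead of a dense matrix product."""
            ny, nx = field.shape
            # Covariance kernel on a doubled grid; zero padding avoids wrap-around.
            y = (np.arange(2 * ny) - ny) * pixel_size
            x = (np.arange(2 * nx) - nx) * pixel_size
            dist = np.hypot(*np.meshgrid(y, x, indexing="ij"))
            kernel = sigma2 * np.exp(-dist / corr_len)

            padded = np.zeros((2 * ny, 2 * nx))
            padded[:ny, :nx] = field
            conv = np.real(np.fft.ifft2(np.fft.fft2(padded) *
                                        np.fft.fft2(np.fft.ifftshift(kernel))))
            return conv[:ny, :nx]

        # Illustrative 256 x 256 residual field (placeholder values)
        rng = np.random.default_rng(0)
        residual = rng.standard_normal((256, 256))
        weighted = apply_exponential_covariance(residual, sigma2=1.0, corr_len=20.0)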

  20. Pre-conceptual design of the Z-LLE accelerator.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stygar, William A.

    We begin with a model of 20 LTD modules, connected in parallel. We assume each LTD module consists of 10 LTD cavities, connected in series. We assume each cavity includes 20 LTD bricks, in parallel. Each brick is assumed to have a 40-nF capacitance and a 160-nH inductance. We use for this calculation the RLC-circuit model of an LTD system that was developed by Mazarakis and colleagues.
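
    The nominal equivalent capacitance and inductance implied by that stacking (20 bricks in parallel per cavity, 10 cavities in series per module, 20 modules in parallel) follow from the usual series/parallel combination rules. The back-of-the-envelope sketch below uses only the component values quoted above and ignores brick resistance and the other details of the Mazarakis RLC model:

        def parallel_caps(c, n):  # capacitances add in parallel
            return n * c

        def series_caps(c, n):    # identical capacitances divide in series
            return c / n

        def parallel_inds(l, n):  # identical inductances divide in parallel
            return l / n

        def series_inds(l, n):    # inductances add in series
            return n * l

        c_brick, l_brick = 40e-9, 160e-9           # 40 nF and 160 nH per brick

        c_cavity = parallel_caps(c_brick, 20)      # 20 bricks in parallel per cavity
        l_cavity = parallel_inds(l_brick, 20)
        c_module = series_caps(c_cavity, 10)       # 10 cavities in series per module
        l_module = series_inds(l_cavity, 10)
        c_total  = parallel_caps(c_module, 20)     # 20 modules in parallel
        l_total  = parallel_inds(l_module, 20)

        print(f"Module: {c_module * 1e9:.0f} nF, {l_module * 1e9:.0f} nH")   # 80 nF, 80 nH
        print(f"Stack:  {c_total * 1e9:.0f} nF, {l_total * 1e9:.1f} nH")     # 1600 nF, 4.0 nH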

  1. Indoor air quality analysis based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tuo, Wang; Yunhua, Sun; Song, Tian; Liang, Yu; Weihong, Cui

    2014-03-01

    The air of an office environment is the object of this research. Data on temperature, humidity, and concentrations of carbon dioxide, carbon monoxide and ammonia are collected every one to eight seconds by the sensor monitoring system. All the data are stored in the HBase database of a Hadoop platform. With the help of HBase's column-oriented, versioned storage (the time column is added automatically), time-series data sets are built based on the primary Row-key and the timestamp. The parallel computing programming model MapReduce is used to process the millions of data points collected by the sensors. By analysing the trend of each parameter's value at different times of the same day and at the same time across different dates, the impact of human and other factors on the room microenvironment is assessed according to the movement of the office staff. Moreover, an effective way to improve indoor air quality is proposed at the end of this paper.
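
    The grouping-and-averaging analysis described above can be illustrated with a minimal map/reduce sketch in plain Python; HBase access and the actual Hadoop job setup are omitted, and the record format and example values are assumptions:

        from collections import defaultdict
        from datetime import datetime

        # Hypothetical sensor records: (ISO timestamp, parameter name, value)
        records = [
            ("2013-06-03T09:12:05", "co2_ppm", 415.0),
            ("2013-06-03T09:44:51", "co2_ppm", 483.0),
            ("2013-06-03T21:02:13", "co2_ppm", 401.0),
        ]

        def map_phase(record):
            """Emit (hour of day, parameter) as the key so readings group by time of day."""
            ts, name, value = record
            return ((datetime.fromisoformat(ts).hour, name), (value, 1))

        def reduce_phase(pairs):
            """Sum values and counts per key, then form the mean, as a reducer would."""
            sums = defaultdict(lambda: [0.0, 0])
            for key, (value, count) in pairs:
                sums[key][0] += value
                sums[key][1] += count
            return {key: total / count for key, (total, count) in sums.items()}

        hourly_means = reduce_phase(map(map_phase, records))
        print(hourly_means)   # {(9, 'co2_ppm'): 449.0, (21, 'co2_ppm'): 401.0}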

  2. Local sample thickness determination via scanning transmission electron microscopy defocus series.

    PubMed

    Beyer, A; Straubinger, R; Belz, J; Volz, K

    2016-05-01

    The usable aperture sizes in (scanning) transmission electron microscopy ((S)TEM) have significantly increased in the past decade due to the introduction of aberration correction. In parallel with the consequent increase of convergence angle, the depth of focus has decreased severely and optical sectioning in the STEM has become feasible. Here we apply STEM defocus series to derive the local sample thickness of a TEM sample. To this end, experimental as well as simulated defocus series of thin Si foils were acquired. The systematic blurring of high resolution high angle annular dark field images is quantified by evaluating the standard deviation of the image intensity for each image of a defocus series. The derived dependencies exhibit a pronounced maximum at the optimum defocus and drop to a background value for higher or lower values. The full width at half maximum (FWHM) of the curve is equal to the sample thickness above a minimum thickness given by the size of the aperture used and the chromatic aberration of the microscope. The thicknesses obtained from experimental defocus series using the proposed method are in good agreement with the values derived from other established methods. The key advantages of this method compared to others are its high spatial resolution and that it does not involve any time-consuming simulations. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
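
    The thickness measure used above, the full width at half maximum of the image-intensity standard deviation as a function of defocus, is straightforward to evaluate once the defocus series has been reduced to a std curve. In the sketch below the defocus values and the Gaussian-shaped curve are synthetic placeholders, not real STEM data:

        import numpy as np

        def thickness_from_defocus_series(defocus_nm, intensity_std):
            """Estimate sample thickness as the FWHM of the std-vs-defocus curve,
            measured above the background level reached far from optimum focus."""
            background = min(intensity_std[0], intensity_std[-1])
            half = background + 0.5 * (intensity_std.max() - background)
            above = defocus_nm[intensity_std >= half]
            return above.max() - above.min()        # FWHM, in the defocus units

        # Placeholder series: a Gaussian-shaped std curve on a flat background
        defocus = np.linspace(-100, 100, 201)                      # nm
        std_curve = 0.02 + 0.08 * np.exp(-(defocus / 30.0) ** 2)
        print(f"Estimated thickness ~ {thickness_from_defocus_series(defocus, std_curve):.0f} nm")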

  3. Electric Grid Expansion Planning with High Levels of Variable Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, Stanton W.; You, Shutang; Shankar, Mallikarjun

    2016-02-01

    Renewables account for a growing proportion of generation capacity in U.S. power grids. As their randomness has an increasing influence on power system operation, it is necessary to consider their impact on system expansion planning. To this end, this project studies the generation and transmission expansion co-optimization problem of the U.S. Eastern Interconnection (EI) power grid with a high wind power penetration rate. In this project, the generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. This study analyzed a time series creation method to capture the diversity of load and wind power across balancing regions in the EI system. The obtained time series can be easily introduced into the MIP co-optimization problem and then solved robustly through available MIP solvers. Simulation results show that the proposed time series generation method and the expansion co-optimization model can improve the expansion result significantly when the diversity of wind and load across EI regions is considered. The improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare. This study shows that modelling load and wind variations and diversities across balancing regions produces significantly different expansion results compared with former studies. For example, if wind is modeled in more detail (by increasing the number of wind output levels) so that more wind blocks are considered in expansion planning, transmission expansion will be larger and the expansion timing will be earlier. Regarding generation expansion, more wind scenarios slightly reduce wind generation expansion in the EI system and increase the expansion of other generation such as gas. Also, adopting detailed wind scenarios reveals that it may be uneconomic to expand transmission networks for transmitting a large amount of wind power over a long distance in the EI system. Incorporating more details of renewables in expansion planning will inevitably increase the computational burden. Therefore, high performance computing (HPC) techniques are urgently needed for power system operation and planning optimization. As a scoping study task, this project tested some preliminary parallel computation techniques, such as breaking down the simulation task into several sub-tasks based on chronology splitting or sample splitting, and then assigning these sub-tasks to different cores. Testing results show a significant time reduction when a simulation task is split into several sub-tasks for parallel execution.

  4. Micro hollow cathode discharge jets utilizing solid fuel

    NASA Astrophysics Data System (ADS)

    Nikic, Dejan

    2017-10-01

    Micro hollow cathode discharge devices with a solid fuel layer embedded between the electrodes have demonstrated an enhanced jetting process. Outlined is a series of experiments in various pressure and gas conditions as well as in vacuum. Examples of the use of these devices in series and parallel configurations are presented. Evidence of the utilization of the solid fuel is obtained through optical spectroscopy and analysis of the remaining fuel layer.

  5. Algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations with the use of parallel computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moryakov, A. V., E-mail: sailor@orc.ru

    2016-12-15

    An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code, with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by simplicity and by the possibility of solving nonlinear problems with a correction of the operator in accordance with the solution obtained in the previous iteration.

  6. Multi-objective problem of the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints

    NASA Astrophysics Data System (ADS)

    Amallynda, I.; Santosa, B.

    2017-11-01

    This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume that there is a set of non-identical factories or production lines, each with a set of unrelated parallel machines of different speeds, connected in series to a single assembly machine. A set of different products is manufactured through an assembly program from a set of components (jobs) according to the requested demand. Each product requires several kinds of jobs of different sizes. In addition, we consider the multi-objective problem (MOP) of minimizing the mean flow time and the number of tardy products simultaneously. This problem is known to be NP-hard and is important in practice, as these criteria reflect the customer's demand and the manufacturer's perspective. Since this is a realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. The various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms are tested in Matlab. Our computational experiments indicate that the proposed problem and the four proposed algorithms can be implemented and used to solve moderately sized instances, giving efficient solutions that are close to the optimum in most cases.

  7. Fast implementation for compressive recovery of highly accelerated cardiac cine MRI using the balanced sparse model.

    PubMed

    Ting, Samuel T; Ahmad, Rizwan; Jin, Ning; Craft, Jason; Serafim da Silveira, Juliana; Xue, Hui; Simonetti, Orlando P

    2017-04-01

    Sparsity-promoting regularizers can enable stable recovery of highly undersampled magnetic resonance imaging (MRI), promising to improve the clinical utility of challenging applications. However, lengthy computation time limits the clinical use of these methods, especially for dynamic MRI with its large corpus of spatiotemporal data. Here, we present a holistic framework that utilizes the balanced sparse model for compressive sensing and parallel computing to reduce the computation time of cardiac MRI recovery methods. We propose a fast, iterative soft-thresholding method to solve the resulting ℓ1-regularized least squares problem. In addition, our approach utilizes a parallel computing environment that is fully integrated with the MRI acquisition software. The methodology is applied to two formulations of the multichannel MRI problem: image-based recovery and k-space-based recovery. Using measured MRI data, we show that, for a 224 × 144 image series with 48 frames, the proposed k-space-based approach achieves a mean reconstruction time of 2.35 min, a 24-fold improvement compared with a reconstruction time of 55.5 min for the nonlinear conjugate gradient method, and the proposed image-based approach achieves a mean reconstruction time of 13.8 s. Our approach can be utilized to achieve fast reconstruction of large MRI datasets, thereby increasing the clinical utility of reconstruction techniques based on compressed sensing. Magn Reson Med 77:1505-1515, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  8. Software Design for Real-Time Systems on Parallel Computers: Formal Specifications.

    DTIC Science & Technology

    1996-04-01

    This research investigated the important issues related to the analysis and design of real-time systems targeted to parallel architectures. In particular, the software specification models for real-time systems on parallel architectures were evaluated. A survey of current formal methods for uniprocessor real-time system specifications was conducted to determine their extensibility in specifying real-time systems on parallel architectures.

  9. Multiscale asymmetric orthogonal wavelet kernel for linear programming support vector learning and nonlinear dynamic systems identification.

    PubMed

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2014-05-01

    Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.

  10. Effect of periodic changes of angle of attack on behavior of airfoils

    NASA Technical Reports Server (NTRS)

    Katzmayr, R

    1922-01-01

    This report presents the results of a series of experiments, which gave some quantitative results on the effect of periodic changes in the direction of the relative air flow against airfoils. The first series of experiments concerned how the angle of attack of the wing model was changed by causing the latter to oscillate about an axis parallel to the span and at right angles to the air flow. The second series embraced all the experiments in which the direction of the air flow itself was periodically changed.

  11. Morphological evidence for parallel processing of information in rat macula.

    PubMed

    Ross, M D

    1988-01-01

    Study of montages, tracings and reconstructions prepared from a series of 570 consecutive ultrathin sections shows that rat maculas are morphologically organized for parallel processing of linear acceleratory information. Type II cells of one terminal field distribute information to neighboring terminals as well. The findings are examined in light of physiological data which indicate that macular receptor fields have a preferred directional vector, and are interpreted by analogy to a computer technology known as an information network.

  12. Between a Map and a Data Rod

    NASA Technical Reports Server (NTRS)

    Teng, William; Rui, Hualan; Strub, Richard; Vollmer, Bruce

    2015-01-01

    A "Digital Divide" has long stood between how NASA and other satellite-derived data are typically archived (time-step arrays or "maps") and how hydrology and other point-time series oriented communities prefer to access those data. In essence, the desired method of data access is orthogonal to the way the data are archived. Our approach to bridging the Divide is part of a larger NASA-supported "data rods" project to enhance access to and use of NASA and other data by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) and the larger hydrology community. Our main objective was to determine a way to reorganize data that is optimal for these communities. Two related objectives were to optimally reorganize data in a way that (1) is operational and fits in and leverages the existing Goddard Earth Sciences Data and Information Services Center (GES DISC) operational environment and (2) addresses the scaling up of data sets available as time series from those archived at the GES DISC to potentially include those from other Earth Observing System Data and Information System (EOSDIS) data archives. Through several prototype efforts and lessons learned, we arrived at a non-database solution that satisfied our objectives/constraints. We describe, in this presentation, how we implemented the operational production of pre-generated data rods and, considering the tradeoffs between length of time series (or number of time steps), resources needed, and performance, how we implemented the operational production of on-the-fly ("virtual") data rods. For the virtual data rods, we leveraged a number of existing resources, including the NASA Giovanni Cache and NetCDF Operators (NCO) and used data cubes processed in parallel. Our current benchmark performance for virtual generation of data rods is about a year's worth of time series for hourly data (~9,000 time steps) in ~90 seconds. Our approach is a specific implementation of the general optimal strategy of reorganizing data to match the desired means of access. Results from our project have already significantly extended NASA data to the large and important hydrology user community that has been, heretofore, mostly unable to easily access and use NASA data.

  13. Between a Map and a Data Rod

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Rui, H.; Strub, R. F.; Vollmer, B.

    2015-12-01

    A "Digital Divide" has long stood between how NASA and other satellite-derived data are typically archived (time-step arrays or "maps") and how hydrology and other point-time series oriented communities prefer to access those data. In essence, the desired method of data access is orthogonal to the way the data are archived. Our approach to bridging the Divide is part of a larger NASA-supported "data rods" project to enhance access to and use of NASA and other data by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) and the larger hydrology community. Our main objective was to determine a way to reorganize data that is optimal for these communities. Two related objectives were to optimally reorganize data in a way that (1) is operational and fits in and leverages the existing Goddard Earth Sciences Data and Information Services Center (GES DISC) operational environment and (2) addresses the scaling up of data sets available as time series from those archived at the GES DISC to potentially include those from other Earth Observing System Data and Information System (EOSDIS) data archives. Through several prototype efforts and lessons learned, we arrived at a non-database solution that satisfied our objectives/constraints. We describe, in this presentation, how we implemented the operational production of pre-generated data rods and, considering the tradeoffs between length of time series (or number of time steps), resources needed, and performance, how we implemented the operational production of on-the-fly ("virtual") data rods. For the virtual data rods, we leveraged a number of existing resources, including the NASA Giovanni Cache and NetCDF Operators (NCO) and used data cubes processed in parallel. Our current benchmark performance for virtual generation of data rods is about a year's worth of time series for hourly data (~9,000 time steps) in ~90 seconds. Our approach is a specific implementation of the general optimal strategy of reorganizing data to match the desired means of access. Results from our project have already significantly extended NASA data to the large and important hydrology user community that has been, heretofore, mostly unable to easily access and use NASA data.

  14. Geoelectric monitoring at the Boulder magnetic observatory

    USGS Publications Warehouse

    Blum, Cletus; White, Tim; Sauter, Edward A.; Stewart, Duff; Bedrosian, Paul A.; Love, Jeffrey J.

    2017-01-01

    Despite its importance to a range of applied and fundamental studies, and obvious parallels to a robust network of magnetic-field observatories, long-term geoelectric field monitoring is rarely performed. The installation of a new geoelectric monitoring system at the Boulder magnetic observatory of the US Geological Survey is summarized. Data from the system are expected, among other things, to be used for testing and validating algorithms for mapping North American geoelectric fields. An example time series of recorded electric and magnetic fields during a modest magnetic storm is presented. Based on our experience, we additionally present operational aspects of a successful geoelectric field monitoring system.

  15. A Cloud Based Real-Time Collaborative Platform for eHealth.

    PubMed

    Ionescu, Bogdan; Gadea, Cristian; Solomon, Bogdan; Ionescu, Dan; Stoicu-Tivadar, Vasile; Trifan, Mircea

    2015-01-01

    For more than a decade, the eHealth initiative has been a government concern in many countries. In an Electronic Health Record (EHR) system, there is a need to share the data with a group of specialists simultaneously. Collaborative platforms alone are just a part of a solution; a collaborative platform with parallel editing capabilities and synchronized data streaming is urgently needed. In this paper, the design and implementation of a collaborative platform used in healthcare is introduced by describing the high level architecture and its implementation. A series of eHealth services are identified and usage examples in a healthcare environment are given.

  16. Comparison of converter topologies for charging capacitors used in pulsed load applications

    NASA Technical Reports Server (NTRS)

    Nelms, R. M.; Schatz, J. E.; Pollard, Barry

    1991-01-01

    The authors present a qualitative comparison of different power converter topologies which may be utilized for charging capacitors in pulsed power applications requiring voltages greater than 1 kV. The operation of the converters in capacitor charging applications is described, and relevant advantages are presented. All of the converters except one may be classified in the high-frequency switching category. One of the benefits of high-frequency operation is a reduction in size and weight. The other converter discussed is a member of the command resonant charging category. The authors first describe a boost circuit which functions as a command resonant charging circuit and utilizes a single pulse of current to charge the capacitor. The discussion of high-frequency converters begins with the flyback and Ward converters. Then, the series, parallel, and series/parallel resonant converters are examined.

  17. Body Fluids Monitor

    NASA Technical Reports Server (NTRS)

    Siconolfi, Steven F. (Inventor)

    2000-01-01

    Method and apparatus are described for determining volumes of body fluids in a subject using bioelectrical response spectroscopy. The human body is represented using an electrical circuit. Intra-cellular water is represented by a resistor in series with a capacitor; extra-cellular water is represented by a resistor in series with two parallel inductors. The parallel inductors represent the resistance due to vascular fluids. An alternating, low amperage, multifrequency signal is applied to determine a subject's impedance and resistance. From these data, statistical regression is used to determine a 1% impedance where the subject's impedance changes by no more than 1% over a 25 kHz interval. Circuit components of the human body circuit are determined based on the 1% impedance. Equations for calculating total body water, extra-cellular water, total blood volume, and plasma volume are developed based on the circuit components.

  18. Comparative Study of Hybrid Powertrains on Fuel Saving, Emissions, and Component Energy Loss in HD Trucks

    DOE PAGES

    Gao, Zhiming; Finney, Charles; Daw, Charles; ...

    2014-09-30

    We compared parallel and series hybrid powertrains on fuel economy, component energy loss, and emissions control in Class 8 trucks over both city and highway driving. A comprehensive set of component models describing battery energy, engine fuel efficiency, emissions control, and power demand interactions for heavy duty (HD) hybrids has been integrated with parallel and series hybrid Class 8 trucks in order to identify the technical barriers of these hybrid powertrain technologies. The results show that the series hybrid is clearly detrimental to the fuel economy of long-haul trucks due to an efficiency penalty associated with the dual-step conversion of energy (i.e., mechanical to electric to mechanical). The current parallel hybrid technology combined with a 50% auxiliary load reduction could raise the fuel economy of long-haul trucks by 5-7%, but a profound improvement of long-haul truck fuel economy requires additional innovative technologies for reducing aerodynamic drag and rolling resistance losses. The simulated emissions control indicates that hybrid trucks reduce CO and HC emissions more than conventional trucks. The simulated results further indicate that the catalyzed DPF plays an important role in CO oxidation. Limited NH3 emissions could slip from the urea SCR, but the average NH3 emissions are below 20 ppm. Meanwhile, our estimates show a 1.5-1.9% equivalent fuel-cost penalty due to urea consumption in the simulated SCR cases.

  19. Introduction of the ASGARD Code

    NASA Technical Reports Server (NTRS)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian

    2017-01-01

    ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).

  20. Study of phase clustering method for analyzing large volumes of meteorological observation data

    NASA Astrophysics Data System (ADS)

    Volkov, Yu. V.; Krutikov, V. A.; Botygin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.

    2017-11-01

    The article describes an iterative parallel phase grouping algorithm for temperature field classification. The algorithm is based on a modified method of structure formation using the analytic signal. The developed method allows tasks of climate classification as well as climatic zoning to be solved at any temporal or spatial scale. When applied to surface temperature measurement series, the developed algorithm allows climatic structures with correlated changes of the temperature field to be found, conclusions on climate uniformity in a given area to be drawn, and climate changes over time to be surveyed by analyzing shifts in the type groups. The information on climate type groups specific to selected geographical areas is expanded by a genetic scheme of class distribution depending on the change in the mutual correlation level between monthly average ground temperature series.
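
    The analytic-signal step at the core of the phase grouping method can be sketched with SciPy's Hilbert transform. The synthetic temperature series and the simple phase-locking similarity measure below are illustrative assumptions; the iterative grouping itself is the subject of the article and is not reproduced here:

        import numpy as np
        from scipy.signal import hilbert

        def instantaneous_phase(series):
            """Analytic signal z(t) = x(t) + i*H[x](t); its unwrapped angle is the phase."""
            detrended = series - series.mean()
            return np.unwrap(np.angle(hilbert(detrended)))

        def phase_similarity(phase_a, phase_b):
            """Mean phase locking between two series (1.0 = perfectly locked)."""
            return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

        # Two placeholder monthly temperature series (30 years of a seasonal cycle plus noise)
        t = np.arange(360)
        station_a = 10 + 12 * np.sin(2 * np.pi * t / 12) + np.random.randn(360)
        station_b = 8 + 11 * np.sin(2 * np.pi * t / 12 + 0.2) + np.random.randn(360)

        sim = phase_similarity(instantaneous_phase(station_a),
                               instantaneous_phase(station_b))
        print(f"Phase similarity: {sim:.2f}")   # close to 1.0 for correlated seasonal cycles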

  1. Dynamic electrical reconfiguration for improved capacitor charging in microbial fuel cell stacks

    NASA Astrophysics Data System (ADS)

    Papaharalabos, George; Greenman, John; Stinchcombe, Andrew; Horsfield, Ian; Melhuish, Chris; Ieropoulos, Ioannis

    2014-12-01

    A microbial fuel cell (MFC) is a bioelectrochemical device that uses anaerobic bacteria to convert chemical energy locked in biomass into small amounts of electricity. One viable way of increasing energy extraction is by stacking multiple MFC units and exploiting the available electrical configurations for increasing the current or stepping up the voltage. The present study illustrates how a real-time electrical reconfiguration of MFCs in a stack, halves the time required to charge a capacitor (load) and achieves 35% higher current generation compared to a fixed electrical configuration. This is accomplished by progressively switching in-parallel elements to in-series units in the stack, thus maintaining an optimum potential difference between the stack and the capacitor, which in turn allows for a higher energy transfer.

  2. High Frequency Analog LSI Development.

    DTIC Science & Technology

    1979-11-12

    ... made to incorporate more optimal values through appropriate series or parallel connection of the four capacitors embedded on chip. The use of SPICE-2 ... collector of Q4 to about 50 Ω with some inductive reactance at the output. An external capacitor in series with the chip output serves as a DC block ...

  3. Hemodynamics assessed via approximate entropy analysis of impedance cardiography time series: effect of metabolic syndrome.

    PubMed

    Guerra, Stefania; Boscari, Federico; Avogaro, Angelo; Di Camillo, Barbara; Sparacino, Giovanni; de Kreutzenberg, Saula Vigili

    2011-08-01

    The metabolic syndrome (MS), a predisposing condition for cardiovascular disease, presents disturbances in hemodynamics; impedance cardiography (ICG) can assess these alterations. In subjects with MS, the morphology of the pulses present in the ICG time series is more irregular/complex than in normal subjects. Therefore, the aim of the present study was to quantitatively assess the complexity of ICG time series in 53 patients, with or without MS, through a nonlinear analysis algorithm, the approximate entropy, a method employed in recent years for the study of several biological signals, which provides a scalar index, ApEn. We correlated ApEn computed from ICG time series data during the fasting and postprandial phases with the presence of alterations in the parameters defining MS [Adult Treatment Panel (ATP) III (Grundy SM, Brewer HB Jr, Cleeman JI, Smith SC Jr, Lenfant C; National Heart, Lung, and Blood Institute; American Heart Association. Circulation 109: 433-438, 2004) and the International Diabetes Federation (IDF) definition]. Results show that ApEn was significantly higher in subjects with MS compared with those without (1.81 ± 0.09 vs. 1.65 ± 0.13; means ± SD; P = 0.0013, with the ATP III definition; 1.82 ± 0.09 vs. 1.67 ± 0.12; P = 0.00006, with the IDF definition). We also demonstrated that the ApEn increase parallels the number of components of MS. ApEn was then correlated with each MS component: mean ApEn values of subjects belonging to the first and fourth quartiles of the distribution of MS parameters were statistically different for all parameters but HDL cholesterol. No difference was observed between ApEn values evaluated in the fasting and postprandial states. In conclusion, we found that MS is characterized by an increased complexity of ICG signals; this may have prognostic relevance in subjects with this condition.
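
    Approximate entropy itself can be computed directly from its definition. The compact reference sketch below uses the common parameter choices m = 2 and r = 0.2 x SD, which are typical in the literature but not necessarily those of this study, and it omits the ICG preprocessing; the naive pairwise computation is O(N^2), so it is only suitable for short series:

        import numpy as np

        def approximate_entropy(x, m=2, r_factor=0.2):
            """ApEn(m, r) of a 1-D series, with tolerance r = r_factor * std(x).
            Higher values indicate a more irregular (complex) signal."""
            x = np.asarray(x, dtype=float)
            r = r_factor * x.std()

            def phi(m):
                n = len(x) - m + 1
                templates = np.array([x[i:i + m] for i in range(n)])
                # Chebyshev distance between every pair of length-m templates
                dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
                counts = np.sum(dist <= r, axis=1) / n   # self-matches included, as in ApEn
                return np.mean(np.log(counts))

            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(1)
        regular = np.sin(np.linspace(0, 20 * np.pi, 500))
        irregular = regular + 0.5 * rng.standard_normal(500)
        print(approximate_entropy(regular), approximate_entropy(irregular))   # irregular > regular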

  4. Distributed multitasking ITS with PVM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, W.C.; Halbleib, J.A. Sr.

    1995-12-31

    Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of the MCNP code on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron/photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load-balancing schemes for homogeneous and heterogeneous networks.

  5. Distributed multitasking ITS with PVM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, W.C.; Halbleib, J.A. Sr.

    1995-02-01

    Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources, and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of MCNP on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron/photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load balancing schemes for homogeneous and heterogeneous networks.

  6. Impact of automatization in temperature series in Spain and comparison with the POST-AWS dataset

    NASA Astrophysics Data System (ADS)

    Aguilar, Enric; López-Díaz, José Antonio; Prohom Duran, Marc; Gilabert, Alba; Luna Rico, Yolanda; Venema, Victor; Auchmann, Renate; Stepanek, Petr; Brandsma, Theo

    2016-04-01

    Climate data records are most of the time affected by inhomogeneities. Inhomogeneities introducing network-wide biases, in particular, are sometimes related to changes happening almost simultaneously in an entire network. Relative homogenization is difficult in these cases, especially at the daily scale. A good example of this is the substitution of manual observations (MAN) by automatic weather stations (AWS). Parallel measurements (i.e. records taken at the same time with the old (MAN) and new (AWS) sensors) can provide an idea of the bias introduced and help to evaluate the suitability of different correction approaches. We present here a quality controlled dataset compiled under the DAAMEC Project, comprising 46 stations across Spain and over 85,000 parallel measurements (AWS-MAN) of daily maximum and minimum temperature. We study the differences between both sensors and compare them with the available metadata to account for internal inhomogeneities. The differences between both systems vary widely across stations, with patterns more related to their particular settings than to climatic/geographical reasons. The typical median biases (AWS-MAN) by station (within the interquartile range) oscillate between -0.2°C and 0.4°C in daily maximum temperature and between -0.4°C and 0.2°C in daily minimum temperature. These and other results are compared with a larger network, the dataset of the Parallel Observations Scientific Team, a working group of the International Surface Temperatures Initiative (ISTI-POST), which comprises our stations as well as others from different countries in America, Asia and Europe.

  7. Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations

    NASA Astrophysics Data System (ADS)

    Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.

    2016-07-01

    Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.

  8. Parallel CE/SE Computations via Domain Decomposition

    NASA Technical Reports Server (NTRS)

    Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung

    2000-01-01

    This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.

  9. Characterizing and Mitigating Work Time Inflation in Task Parallel Programs

    DOE PAGES

    Olivier, Stephen L.; de Supinski, Bronis R.; Schulz, Martin; ...

    2013-01-01

    Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.
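
    As a worked illustration of the decomposition described above, the sketch below computes parallel efficiency and the shares lost to work time inflation, idleness, and scheduling overheads from hypothetical per-thread timings. All numbers are invented for illustration and are not measurements from the paper.

      # Hypothetical per-thread timings (seconds) for the same total work.
      sequential_work = 100.0  # time to perform the work in a sequential computation
      threads = [
          {"busy": 30.0, "idle": 2.0, "overhead": 1.0},
          {"busy": 30.5, "idle": 1.0, "overhead": 1.5},
          {"busy": 31.0, "idle": 1.5, "overhead": 0.5},
          {"busy": 32.0, "idle": 0.5, "overhead": 0.5},
      ]

      wall_clock = max(t["busy"] + t["idle"] + t["overhead"] for t in threads)
      resources = wall_clock * len(threads)                # thread-seconds available
      inflation = sum(t["busy"] for t in threads) - sequential_work
      idle = sum(t["idle"] for t in threads)
      overhead = sum(t["overhead"] for t in threads)

      print(f"parallel efficiency : {sequential_work / resources:.2%}")
      print(f"lost to inflation   : {inflation / resources:.2%}")
      print(f"lost to idleness    : {idle / resources:.2%}")
      print(f"lost to overheads   : {overhead / resources:.2%}")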

  10. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brix, Lau, E-mail: lau.brix@stab.rm.dk; Ringgaard, Steffen; Sørensen, Thomas Sangild

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. Results: Axial, sagittal, and coronal 2D MRI series yielded 3D respiratory motion curves for all volunteers. The motion directionality and amplitude were very similar when measured directly as in-plane motion or estimated indirectly as through-plane motion. The mean peak-to-peak breathing amplitude was 1.6 mm (left-right), 11.0 mm (craniocaudal), and 2.5 mm (anterior-posterior). The position of the watermelon structure was estimated in 2D MRI images with a root-mean-square error of 0.52 mm (in-plane) and 0.87 mm (through-plane). Conclusions: A method for 3D tracking in 2D MRI series was developed and demonstrated for liver tracking in volunteers. The method would allow real-time 3D localization with integrated MR-Linac systems.
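
    A minimal NumPy sketch of step (3) above is given below: every template from the resliced library is correlated against the live 2D frame, and the best normalized cross correlation score selects both the in-plane shift and the template (and hence the through-plane offset). The arrays, template sizes, and 3D offsets are synthetic placeholders, and the exhaustive search is written for clarity rather than real-time speed.

      import numpy as np

      def ncc(patch, template):
          """Normalized cross-correlation coefficient of two equally sized patches."""
          p = patch - patch.mean()
          t = template - template.mean()
          denom = np.sqrt((p * p).sum() * (t * t).sum())
          return float((p * t).sum() / denom) if denom > 0 else 0.0

      def track(image, templates, offsets_3d):
          """Return (best score, estimated 3D position).

          templates  : 2D arrays resliced from the 3D scan (step 2 above)
          offsets_3d : known 3D offset of the tracked structure for each template
          """
          best_score, best_pos = -2.0, None
          th, tw = templates[0].shape
          for k, tmpl in enumerate(templates):
              for r in range(image.shape[0] - th + 1):
                  for c in range(image.shape[1] - tw + 1):
                      score = ncc(image[r:r + th, c:c + tw], tmpl)
                      if score > best_score:
                          # in-plane shift (r, c) plus the through-plane offset of template k
                          best_score = score
                          best_pos = offsets_3d[k] + np.array([0.0, r, c])
          return best_score, best_pos

      # Toy usage with synthetic data standing in for a real-time 2D MRI frame.
      rng = np.random.default_rng(0)
      frame = rng.normal(size=(64, 64))
      templates = [frame[20:36, 20:36].copy(), rng.normal(size=(16, 16))]
      offsets = [np.array([z, 0.0, 0.0]) for z in (-2.0, 2.0)]
      print(track(frame, templates, offsets))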

  11. An Expert System for Automating Nuclear Strike Aircraft Replacement, Aircraft Beddown, and Logistics Movement for the Theater Warfare Exercise.

    DTIC Science & Technology

    1989-12-01

    that can be easily understood. (9) Parallelism. Several system components may need to execute in parallel. For example, the processing of sensor data...knowledge base are not accessible for processing by the database. Also in the likely case that the expert system poses a series of related queries, the...knowledge base for the automation of logistics movement. The directory containing the strike aircraft replacement knowledge base

  12. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.

  13. A Parallel First-Order Linear Recurrence Solver.

    DTIC Science & Technology

    1986-09-01

    log2(M) steps, but did not discuss any specific parallel implementation. Gajski [GAJ81] improved upon this result by performing the SIMD computation...solves a series of reduced recurrences of size p^2. However, when N = p^2, our approach reduces to that of [GAJ81], except that Gajski presents the...existing SIMD algorithms to solve R<N,1>, the SIMD algorithm presented by Gajski [GAJ81] can be most efficiently mapped to a unidirectional ring

  14. Metrological characterization of X-ray diffraction methods at different acquisition geometries for determination of crystallite size in nano-scale materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uvarov, Vladimir, E-mail: vladimiru@savion.huji.ac.il; Popov, Inna

    2013-11-15

    Crystallite size values were determined by X-ray diffraction methods for 183 powder samples. The tested size range was from a few to about several hundred nanometers. Crystallite size was calculated with direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld procedure via the application of a series of commercial and free software. The results were statistically treated to estimate the significance of the difference in size resulting from these methods. We also estimated the effect of acquisition conditions (Bragg–Brentano, parallel-beam geometry, step size, counting time) and data processing on the calculated crystallite size values. On the basis of the obtained results it is possible to conclude that direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld refinement employed by a series of software (EVA, PCW and TOPAS respectively) yield very close results for crystallite sizes less than 60 nm for parallel beam geometry and less than 100 nm for Bragg–Brentano geometry. However, we found that despite the fact that the differences between the crystallite sizes, which were calculated by various methods, are small in absolute value, they are statistically significant in some cases. The values of crystallite size determined from XRD were compared with those obtained by imaging in transmission (TEM) and scanning (SEM) electron microscopes. It was found that there was a good correlation in size only for crystallites smaller than 50 – 60 nm. Highlights: • The crystallite sizes for 183 nanopowders were calculated using different XRD methods • Obtained results were subject to statistical treatment • Results obtained with Bragg-Brentano and parallel beam geometries were compared • Influence of conditions of XRD pattern acquisition on results was estimated • Crystallite sizes calculated by XRD were compared with those obtained by TEM and SEM.
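
    The direct Scherrer estimate mentioned above is a one-line calculation, sketched below; the shape factor, Cu K-alpha wavelength, and peak values are illustrative assumptions rather than data from the study.

      import math

      def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
          """Crystallite size D = K * wavelength / (beta * cos(theta)), beta = FWHM in radians."""
          beta = math.radians(fwhm_deg)             # peak broadening (FWHM) in radians
          theta = math.radians(two_theta_deg / 2)   # Bragg angle
          return K * wavelength_nm / (beta * math.cos(theta))

      # Example: Cu K-alpha radiation, a peak at 2-theta = 38.2 deg with 0.25 deg FWHM.
      print(f"D = {scherrer_size(0.25, 38.2):.1f} nm")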

  15. Automated high-throughput flow-through real-time diagnostic system

    DOEpatents

    Regan, John Frederick

    2012-10-30

    An automated real-time flow-through system capable of processing multiple samples in an asynchronous, simultaneous, and parallel fashion for nucleic acid extraction and purification, followed by assay assembly, genetic amplification, multiplex detection, analysis, and decontamination. The system is able to hold and access an unlimited number of fluorescent reagents that may be used to screen samples for the presence of specific sequences. The apparatus works by associating extracted and purified sample with a series of reagent plugs that have been formed in a flow channel and delivered to a flow-through real-time amplification detector that has a multiplicity of optical windows, to which the sample-reagent plugs are placed in an operative position. The diagnostic apparatus includes sample multi-position valves, a master sample multi-position valve, a master reagent multi-position valve, reagent multi-position valves, and an optical amplification/detection system.

  16. Parallel and series FED microstrip array with high efficiency and low cross polarization

    NASA Technical Reports Server (NTRS)

    Huang, John (Inventor)

    1995-01-01

    A microstrip array antenna for a vertically polarized fan beam (approximately 2 deg x 50 deg) for C-band SAR applications with a physical area of 1.7 m by 0.17 m comprises two rows of patch elements and employs a parallel feed to left- and right-half sections of the rows. Each section is divided into two segments that are fed in parallel, with the elements in each segment fed in series through matched transmission lines for high efficiency. The inboard section has half the number of patch elements of the outboard section, and the outboard sections, which have a tapered distribution with identical transmission line sections, are terminated with half-wavelength-long open-circuit stubs so that the remaining energy is reflected and radiated in phase. The elements of the two inboard segments of the two left- and right-half sections are provided with tapered transmission lines from element to element for uniform power distribution over the central third of the entire array antenna. The two rows of array elements are excited at opposite patch feed locations with opposite (180 deg difference) phases for reduced cross-polarization.

  17. Dioxythiophene-based polymer electrodes for supercapacitor modules.

    PubMed

    Liu, David Y; Reynolds, John R

    2010-12-01

    We report on the electrochemical and capacitive behaviors of poly(2,2-dimethyl-3,4-propylene-dioxythiophene) (PProDOT-Me2) films as polymeric electrodes in Type I electrochemical supercapacitors. The supercapacitor device displays robust capacitive charging/discharging behaviors with a specific capacitance of 55 F/g, based on 60 μg of PProDOT-Me2 per electrode, and retains over 85% of its storage capacity after 32 000 redox cycles at 78% depth of discharge. Moreover, an appreciable average energy density of 6 Wh/kg has been calculated for the device, along with well-behaved and rapid capacitive responses to 1.0 V between 5 and 500 mV s(-1). Tandem electrochemical supercapacitors were assembled in series, in parallel, and in combinations of the two to widen the operating voltage window and to increase the capacitive currents. Four supercapacitors coupled in series exhibited a 4.0 V charging/discharging window, whereas assembly in parallel displayed a 4-fold increase in capacitance. Combinations of both serial and parallel assembly with six supercapacitors resulted in the extension of voltage to 3 V and a 2-fold increase in capacitive currents. Utilization of bipolar electrodes facilitated the encapsulation of tandem supercapacitors as individual, flexible, and lightweight supercapacitor modules.
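
    The series/parallel arithmetic behind these module configurations can be checked with a few lines of code; the normalized single-cell values below follow the 1.0 V window quoted in the abstract, while the helper functions are generic illustrations.

      def series(caps):
          """Equivalent capacitance of capacitors in series: 1/C = sum(1/Ci)."""
          return 1.0 / sum(1.0 / c for c in caps)

      def parallel(caps):
          """Equivalent capacitance of capacitors in parallel: C = sum(Ci)."""
          return sum(caps)

      cell_C, cell_V = 1.0, 1.0   # normalized single-device capacitance and voltage window

      # Four devices in series: the voltage window adds, the capacitance drops 4x.
      print(series([cell_C] * 4), 4 * cell_V)                    # 0.25, 4.0 V

      # Four devices in parallel: the capacitance (and current) adds, the window is unchanged.
      print(parallel([cell_C] * 4), cell_V)                      # 4.0, 1.0 V

      # Six devices as two parallel strings of three in series: 3 V window, doubled current.
      print(parallel([series([cell_C] * 3)] * 2), 3 * cell_V)    # ~0.67, 3.0 V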

  18. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    NASA Astrophysics Data System (ADS)

    Lundgren, Paul; Nikkhoo, Mehdi; Samsonov, Sergey V.; Milillo, Pietro; Gil-Cruz, Fernando; Lazo, Jonathan

    2017-07-01

    Copahue volcano straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending and descending track time series for the 2013-2016 time period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 106 m3/yr. They consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano tectonic seismicity with the lower bounds of the seismicity parallel to the plunge of the deep source. InSAR time series also show normal fault offsets on the NE flank Copahue faults. Coulomb stress change calculations for right-lateral strike slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that northward trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.

  19. Contribution to the systematics of alkaline lavas: the lavas of the Central Africa rift (Zaïre-Uganda)

    NASA Astrophysics Data System (ADS)

    Pouclet, A.

    1980-09-01

    The lavas of the Central Africa rift (Western rift) are distributed in three groups with increasing alkalinity. The petrographical and chemical data give a classification of seven series: one series of alkaline-basalts in the first weakly alkaline group, two basanitic, sodic or potassic, series in the second fairly alkaline group, and four nephelinitic, melilitic, perpotassic or carbonatitic series in the third strongly alkaline group. The definitions of all these lavas are reviewed. We propose a simplified terminology with, in particular, a K-lavas’ nomenclature parallel to the Na-lavas’ one and a division using the DI of Thornton and Tuttle (1960).

  20. Anomalous Anticipatory Responses in Networked Random Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Roger D.; Bancel, Peter A.

    2006-10-16

    We examine an 8-year archive of synchronized, parallel time series of random data from a world spanning network of physical random event generators (REGs). The archive is a publicly accessible matrix of normally distributed 200-bit sums recorded at 1 Hz which extends from August 1998 to the present. The primary question is whether these data show non-random structure associated with major events such as natural or man-made disasters, terrible accidents, or grand celebrations. Secondarily, we examine the time course of apparently correlated responses. Statistical analyses of the data reveal consistent evidence that events which strongly affect people engender small but significant effects. These include suggestions of anticipatory responses in some cases, leading to a series of specialized analyses to assess possible non-random structure preceding precisely timed events. A focused examination of data collected around the time of earthquakes with Richter magnitude 6 and greater reveals non-random structure with a number of intriguing, potentially important features. Anomalous effects in the REG data are seen only when the corresponding earthquakes occur in populated areas. No structure is found if they occur in the oceans. We infer that an important contributor to the effect is the relevance of the earthquake to humans. Epoch averaging reveals evidence for changes in the data some hours prior to the main temblor, suggestive of reverse causation.
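
    A minimal sketch of the epoch-averaging step described above: the 1 Hz series is aligned on a list of event times and averaged across events to expose any structure before and after them. The data here are synthetic draws with the nominal mean and variance of 200-bit sums (mean 100, variance 50), and the event indices are invented.

      import numpy as np

      def epoch_average(series, event_indices, before=3600, after=3600):
          """Average the 1 Hz series in a window around each event time."""
          epochs = [series[i - before:i + after]
                    for i in event_indices
                    if i - before >= 0 and i + after <= len(series)]
          return np.mean(epochs, axis=0)   # one averaged trace, centered on the events

      rng = np.random.default_rng(1)
      data = rng.normal(100.0, 50 ** 0.5, size=200_000)   # stand-in for the REG sums
      events = [50_000, 120_000, 180_000]                 # hypothetical event times (s)
      avg = epoch_average(data, events)
      print(avg.shape, avg.mean())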

  1. Trends and Correlation Estimation in Climate Sciences: Effects of Timescale Errors

    NASA Astrophysics Data System (ADS)

    Mudelsee, M.; Bermejo, M. A.; Bickert, T.; Chirila, D.; Fohlmeister, J.; Köhler, P.; Lohmann, G.; Olafsdottir, K.; Scholz, D.

    2012-12-01

    Trend describes time-dependence in the first moment of a stochastic process, and correlation measures the linear relation between two random variables. Accurately estimating the trend and correlation, including uncertainties, from climate time series data in the uni- and bivariate domain, respectively, allows first-order insights into the geophysical process that generated the data. Timescale errors, ubiquitous in paleoclimatology, where archives are sampled for proxy measurements and dated, pose a problem for the estimation. Statistical science and the various applied research fields, including geophysics, have almost completely ignored this problem due to its theoretical near-intractability. However, computational adaptations or replacements of traditional error formulas have become technically feasible. This contribution gives a short overview of such an adaptation package, bootstrap resampling combined with parametric timescale simulation. We study linear regression, parametric change-point models and nonparametric smoothing for trend estimation. We introduce pairwise-moving block bootstrap resampling for correlation estimation. Both methods share robustness against autocorrelation and non-Gaussian distributional shape. We briefly touch on computing-intensive calibration of bootstrap confidence intervals and consider options to parallelize the related computer code. The following examples serve not only to illustrate the methods but also to tell their own climate stories: (1) the search for climate drivers of the Agulhas Current on recent timescales, (2) the comparison of three stalagmite-based proxy series of regional, western German climate over the later part of the Holocene, and (3) trends and transitions in benthic oxygen isotope time series from the Cenozoic. Financial support by Deutsche Forschungsgemeinschaft (FOR 668, FOR 1070, MU 1595/4-1) and the European Commission (MC ITN 238512, MC ITN 289447) is acknowledged.
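
    The pairwise moving-block bootstrap mentioned above can be sketched in a few lines: blocks of paired observations are resampled jointly so that autocorrelation within each series is preserved. The block length, series, and confidence level below are illustrative, and the parametric timescale-simulation step of the full method is omitted.

      import numpy as np

      def block_bootstrap_corr(x, y, block_len=25, n_boot=2000, rng=None):
          """Pairwise moving-block bootstrap of Pearson's correlation between x and y."""
          if rng is None:
              rng = np.random.default_rng()
          n = len(x)
          starts_all = np.arange(n - block_len + 1)
          n_blocks = int(np.ceil(n / block_len))
          reps = np.empty(n_boot)
          for b in range(n_boot):
              starts = rng.choice(starts_all, size=n_blocks)
              idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
              reps[b] = np.corrcoef(x[idx], y[idx])[0, 1]   # pairs resampled jointly
          return np.corrcoef(x, y)[0, 1], np.percentile(reps, [2.5, 97.5])

      # Two synthetic, autocorrelated series sharing a common signal (illustration only).
      rng = np.random.default_rng(2)
      t = np.arange(1000)
      signal = np.sin(2 * np.pi * t / 200)
      x = signal + rng.normal(0, 0.5, t.size)
      y = 0.8 * signal + rng.normal(0, 0.5, t.size)
      r, ci = block_bootstrap_corr(x, y, rng=rng)
      print(f"r = {r:.2f}, 95% bootstrap CI = [{ci[0]:.2f}, {ci[1]:.2f}]")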

  2. Use of interrupted time series analysis in evaluating health care quality improvements.

    PubMed

    Penfold, Robert B; Zhang, Fang

    2013-01-01

    Interrupted time series (ITS) analysis is arguably the strongest quasi-experimental research design. ITS is particularly useful when a randomized trial is infeasible or unethical. The approach usually involves constructing a time series of population-level rates for a particular quality improvement focus (eg, rates of attention-deficit/hyperactivity disorder [ADHD] medication initiation) and testing statistically for a change in the outcome rate in the time periods before and time periods after implementation of a policy/program designed to change the outcome. In parallel, investigators often analyze rates of negative outcomes that might be (unintentionally) affected by the policy/program. We discuss why ITS is a useful tool for quality improvement. Strengths of ITS include the ability to control for secular trends in the data (unlike a 2-period before-and-after t test), ability to evaluate outcomes using population-level data, clear graphical presentation of results, ease of conducting stratified analyses, and ability to evaluate both intended and unintended consequences of interventions. Limitations of ITS include the need for a minimum of 8 time periods before and 8 after an intervention to evaluate changes statistically, difficulty in analyzing the independent impact of separate components of a program that are implemented close together in time, and existence of a suitable control population. Investigators must also be careful not to make individual-level inferences when population-level rates are used to evaluate interventions (though ITS can be used with individual-level data). A brief description of ITS is provided, including a fully implemented (but hypothetical) study of the impact of a program to reduce ADHD medication initiation in children younger than 5 years old and insured by Medicaid in Washington State. An example of the database needed to conduct an ITS is provided, as well as SAS code to implement a difference-in-differences model using preschool-age children in California as a comparison group. Copyright © 2013 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
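
    The core ITS model is a segmented regression with a baseline trend, a level change, and a slope change at the intervention; the sketch below illustrates it with statsmodels on made-up monthly rates, and is not the SAS difference-in-differences code referred to in the abstract.

      import numpy as np
      import statsmodels.api as sm

      # Hypothetical monthly outcome rates: 24 periods before and 24 after a policy change.
      rng = np.random.default_rng(3)
      n_pre, n_post = 24, 24
      t = np.arange(n_pre + n_post)
      post = (t >= n_pre).astype(float)                 # indicator: after the intervention
      time_after = np.where(t >= n_pre, t - n_pre, 0)   # months since the intervention
      rate = 10 - 0.05 * t - 2.0 * post - 0.10 * time_after + rng.normal(0, 0.4, t.size)

      # Segmented regression: intercept, pre-trend, level change, and slope change.
      X = sm.add_constant(np.column_stack([t, post, time_after]))
      fit = sm.OLS(rate, X).fit()
      print(fit.params)   # [intercept, baseline trend, level change, slope change]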

  3. Towards Exascale Seismic Imaging and Inversion

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Bozdag, E.; Lefebvre, M. P.; Smith, J. A.; Lei, W.; Ruan, Y.

    2015-12-01

    Post-petascale supercomputers are now available to solve complex scientific problems that were thought unreachable a few decades ago. They also bring a cohort of concerns tied to obtaining optimum performance. Several issues are currently being investigated by the HPC community. These include energy consumption, fault resilience, scalability of the current parallel paradigms, workflow management, I/O performance and feature extraction with large datasets. In this presentation, we focus on the last three issues. In the context of seismic imaging and inversion, in particular for simulations based on adjoint methods, workflows are well defined. They consist of a few collective steps (e.g., mesh generation or model updates) and of a large number of independent steps (e.g., forward and adjoint simulations of each seismic event, pre- and postprocessing of seismic traces). The greater goal is to reduce the time to solution, that is, obtaining a more precise representation of the subsurface as fast as possible. This brings us to consider both the workflow in its entirety and the parts comprising it. The usual approach is to speed up the purely computational parts based on code optimization in order to reach higher FLOPS and better memory management. This still remains an important concern, but larger scale experiments show that the imaging workflow suffers from severe I/O bottlenecks. Such limitations occur both for purely computational data and seismic time series. The latter are dealt with by the introduction of a new Adaptable Seismic Data Format (ASDF). Parallel I/O libraries, namely HDF5 and ADIOS, are used to drastically reduce the cost of disk access. Parallel visualization tools, such as VisIt, are able to take advantage of ADIOS metadata to extract features and display massive datasets. Because large parts of the workflow are embarrassingly parallel, we are investigating the possibility of automating the imaging process with the integration of scientific workflow management tools, specifically Pegasus.

  4. Algebraic multigrid preconditioning within parallel finite-element solvers for 3-D electromagnetic modelling problems in geophysics

    NASA Astrophysics Data System (ADS)

    Koldan, Jelena; Puzyrev, Vladimir; de la Puente, Josep; Houzeaux, Guillaume; Cela, José María

    2014-06-01

    We present an elaborate preconditioning scheme for Krylov subspace methods which has been developed to improve the performance and reduce the execution time of parallel node-based finite-element (FE) solvers for 3-D electromagnetic (EM) numerical modelling in exploration geophysics. This new preconditioner is based on algebraic multigrid (AMG) that uses different basic relaxation methods, such as Jacobi, symmetric successive over-relaxation (SSOR) and Gauss-Seidel, as smoothers and the wave front algorithm to create groups, which are used for a coarse-level generation. We have implemented and tested this new preconditioner within our parallel nodal FE solver for 3-D forward problems in EM induction geophysics. We have performed a series of experiments for several models with different conductivity structures and characteristics to test the performance of our AMG preconditioning technique when combined with the biconjugate gradient stabilized method. The results have shown that the more challenging the problem is in terms of conductivity contrasts, ratio between the sizes of grid elements and/or frequency, the more benefit is obtained by using this preconditioner. Compared to other preconditioning schemes, such as diagonal, SSOR and truncated approximate inverse, the AMG preconditioner greatly improves the convergence of the iterative solver for all tested models. Also, when it comes to cases in which other preconditioners succeed in converging to a desired precision, AMG is able to considerably reduce the total execution time of the forward-problem code, by up to an order of magnitude. Furthermore, the tests have confirmed that our AMG scheme ensures a grid-independent rate of convergence, as well as improvement in convergence regardless of how big local mesh refinements are. In addition, AMG is designed to be a black-box preconditioner, which makes it easy to use and combine with different iterative methods. Finally, it has proved to be very practical and efficient in the parallel context.

  5. Extending substructure based iterative solvers to multiple load and repeated analyses

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel

    1993-01-01

    Direct solvers currently dominate commercial finite element structural software, but do not scale well in the fine granularity regime targeted by emerging parallel processors. Substructure based iterative solvers--often called also domain decomposition algorithms--lend themselves better to parallel processing, but must overcome several obstacles before earning their place in general purpose structural analysis programs. One such obstacle is the solution of systems with many or repeated right hand sides. Such systems arise, for example, in multiple load static analyses and in implicit linear dynamics computations. Direct solvers are well-suited for these problems because after the system matrix has been factored, the multiple or repeated solutions can be obtained through relatively inexpensive forward and backward substitutions. On the other hand, iterative solvers in general are ill-suited for these problems because they often must restart from scratch for every different right hand side. In this paper, we present a methodology for extending the range of applications of domain decomposition methods to problems with multiple or repeated right hand sides. Basically, we formulate the overall problem as a series of minimization problems over K-orthogonal and supplementary subspaces, and tailor the preconditioned conjugate gradient algorithm to solve them efficiently. The resulting solution method is scalable, whereas direct factorization schemes and forward and backward substitution algorithms are not. We illustrate the proposed methodology with the solution of static and dynamic structural problems, and highlight its potential to outperform forward and backward substitutions on parallel computers. As an example, we show that for a linear structural dynamics problem with 11640 degrees of freedom, every time-step beyond time-step 15 is solved in a single iteration and consumes 1.0 second on a 32 processor iPSC-860 system; for the same problem and the same parallel processor, a pair of forward/backward substitutions at each step consumes 15.0 seconds.

  6. Detrending Algorithms in Large Time Series: Application to TFRM-PSES Data

    NASA Astrophysics Data System (ADS)

    del Ser, D.; Fors, O.; Núñez, J.; Voss, H.; Rosich, A.; Kouprianov, V.

    2015-07-01

    Certain instrumental effects and data reduction anomalies introduce systematic errors in photometric time series. Detrending algorithms such as the Trend Filtering Algorithm (TFA; Kovács et al. 2004) have played a key role in minimizing the effects caused by these systematics. Here we present the results obtained after applying the TFA, Savitzky & Golay (1964) detrending algorithms, and the Box Least Square phase-folding algorithm (Kovács et al. 2002) to the TFRM-PSES data (Fors et al. 2013). Tests performed on these data show that by applying these two filtering methods together the photometric RMS is on average improved by a factor of 3-4, with better efficiency towards brighter magnitudes, while applying TFA alone yields an improvement of a factor 1-2. As a result of this improvement, we are able to detect and analyze a large number of stars per TFRM-PSES field which present some kind of variability. Also, after porting these algorithms to Python and parallelizing them, we have improved, even for large data samples, the computational performance of the overall detrending+BLS algorithm by a factor of ˜10 with respect to Kovács et al. (2004).
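
    As a small illustration of the Savitzky-Golay step of the detrending described above (TFA and BLS are omitted), the sketch below removes a slow systematic drift from a synthetic light curve with scipy; the window length, polynomial order, and noise levels are arbitrary choices, not the TFRM-PSES settings.

      import numpy as np
      from scipy.signal import savgol_filter

      rng = np.random.default_rng(4)
      t = np.linspace(0, 10, 2000)
      trend = 0.02 * (t - 5) ** 2                     # slow systematic drift to remove
      signal = 0.01 * np.sin(2 * np.pi * t / 0.3)     # short-period variability to keep
      mag = trend + signal + rng.normal(0, 0.005, t.size)

      # Fit the slow trend with a long, low-order Savitzky-Golay window and subtract it.
      smooth = savgol_filter(mag, window_length=301, polyorder=2)
      detrended = mag - smooth

      print(f"RMS before: {np.std(mag):.4f}   after: {np.std(detrended):.4f}")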

  7. New trends in Taylor series based applications

    NASA Astrophysics Data System (ADS)

    Kocina, Filip; Šátek, Václav; Veigend, Petr; Nečasová, Gabriela; Valenta, Václav; Kunovský, Jiří

    2016-06-01

    The paper deals with the solution of large systems of linear ODEs when minimal communication among parallel processors is required. The Modern Taylor Series Method (MTSM) is used. The MTSM allows using a higher order during the computation, which means a larger integration step size while keeping the desired accuracy. As an example of complex systems we can take the Telegraph Equation Model. Symbolic and numeric solutions are compared when a harmonic input signal is used.
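
    For a linear system y' = A y, the higher-order Taylor terms can be generated with the recurrence term_k = (h/k) A term_(k-1), which is the essence of high-order Taylor integrators; the sketch below is a generic illustration with an arbitrary 2x2 oscillator, not the MTSM implementation itself.

      import numpy as np

      def taylor_step(A, y, h, order=20):
          """One step of y' = A y using the Taylor recurrence term_k = (h / k) * A @ term_{k-1}."""
          term = y.copy()
          y_next = y.copy()
          for k in range(1, order + 1):
              term = (h / k) * (A @ term)
              y_next = y_next + term
          return y_next

      # Illustrative 2x2 oscillator; a high order allows a large step at high accuracy.
      A = np.array([[0.0, 1.0], [-1.0, 0.0]])
      y = np.array([1.0, 0.0])
      h, steps = 0.5, 20
      for _ in range(steps):
          y = taylor_step(A, y, h)
      print(y, "exact:", np.array([np.cos(h * steps), -np.sin(h * steps)]))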

  8. Dynamics of transit times and StorAge Selection functions in four forested catchments from stable isotope data

    NASA Astrophysics Data System (ADS)

    Rodriguez, Nicolas B.; McGuire, Kevin J.; Klaus, Julian

    2017-04-01

    Transit time distributions, residence time distributions and StorAge Selection functions are fundamental integrated descriptors of water storage, mixing, and release in catchments. In this contribution, we determined these time-variant functions in four neighboring forested catchments in the H.J. Andrews Experimental Forest, Oregon, USA by employing a two year time series of 18O in precipitation and discharge. Previous studies in these catchments assumed stationary, exponentially distributed transit times, and complete mixing/random sampling to explore the influence of various catchment properties on the mean transit time. Here we relaxed such assumptions to relate transit time dynamics and the variability of StorAge Selection functions to catchment characteristics, catchment storage, and meteorological forcing seasonality. Conceptual models of the catchments, consisting of two reservoirs combined in series-parallel, were calibrated to discharge and stable isotope tracer data. We assumed randomly sampled/fully mixed conditions for each reservoir, which resulted in an incompletely mixed system overall. Based on the results we solved the Master Equation, which describes the dynamics of water ages in storage and in catchment outflows. Consistently across all catchments, we found that transit times were generally shorter during wet periods, indicating the contribution of shallow storage (soil, saprolite) to discharge. During extended dry periods, transit times increased significantly, indicating the contribution of deeper storage (bedrock) to discharge. Our work indicated that the strong seasonality of precipitation impacted transit times by leading to a dynamic selection of stored water ages, whereas catchment size was not a control on transit times. In general this work showed the usefulness of using time-variant transit times with conceptual models and confirmed the existence of the catchment age mixing behaviors emerging from other similar studies.

  9. Streamlined approach to high-quality purification and identification of compound series using high-resolution MS and NMR.

    PubMed

    Mühlebach, Anneke; Adam, Joachim; Schön, Uwe

    2011-11-01

    Automated medicinal chemistry (parallel chemistry) has become an integral part of the drug-discovery process in almost every large pharmaceutical company. Parallel array synthesis of individual organic compounds has been used extensively to generate diverse structural libraries to support different phases of the drug-discovery process, such as hit-to-lead, lead finding, or lead optimization. In order to guarantee effective project support, efficiency in the production of compound libraries has been maximized. As a consequence, throughput in chromatographic purification and analysis has also been adapted. As a recent trend, more laboratories are preparing smaller, yet more focused libraries with ever increasing demands on quality, i.e. optimal purity and unambiguous confirmation of identity. This paper presents an automated approach to combining effective purification and structural confirmation of a lead optimization library created by microwave-assisted organic synthesis. The results of complementary analytical techniques such as UHPLC-HRMS and NMR are not only considered individually but also merged for fast and easy decision making, providing optimal quality of the compound stock. In comparison with the previous procedures, throughput times are at least four times faster, while compound consumption could be decreased more than threefold. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Comparison of the analgesic efficacy of oral ketorolac versus intramuscular tramadol after third molar surgery: A parallel, double-blind, randomized, placebo-controlled clinical trial

    PubMed Central

    Isiordia-Espinoza, Mario-Alberto; Martinez-Rider, Ricardo; Perez-Urizar, Jose

    2016-01-01

    Background: Preemptive analgesia is considered an alternative for treating the postsurgical pain of third molar removal. The aim of this study was to evaluate the preemptive analgesic efficacy of oral ketorolac versus intramuscular tramadol after mandibular third molar surgery. Material and Methods: A parallel, double-blind, randomized, placebo-controlled clinical trial was carried out. Thirty patients were randomized into two treatment groups using a series of random numbers: Group A, oral ketorolac 10 mg plus intramuscular placebo (1 mL saline solution); or Group B, oral placebo (a tablet similar to oral ketorolac) plus intramuscular tramadol 50 mg diluted in 1 mL saline solution. These treatments were given 30 min before the surgery. We evaluated the time to first analgesic rescue medication, pain intensity, total analgesic consumption and adverse effects. Results: Patients taking oral ketorolac had a longer period of analgesic coverage and less postoperative pain than patients receiving intramuscular tramadol. Conclusions: According to the VAS and AUC results, this study suggests that 10 mg of oral ketorolac had a superior analgesic effect to 50 mg of tramadol when administered before mandibular third molar surgery. Key words: Ketorolac, tramadol, third molar surgery, pain, preemptive analgesia. PMID:27475688

  11. Comparative evaluation of the liver in dogs with a splenic mass by using ultrasonography and contrast-enhanced computed tomography

    PubMed Central

    Irausquin, Roelof A.; Scavelli, Thomas D.; Corti, Lisa; Stefanacci, Joseph D.; DeMarco, Joann; Flood, Shannon; Rohrbach, Barton W.

    2008-01-01

    Evaluation of dogs with splenic masses to better educate owners as to the extent of the disease is a goal of many research studies. We compared the use of ultrasonography (US) and contrast-enhanced computed tomography (CT) to evaluate the accuracy of detecting hepatic neoplasia in dogs with splenic masses, independently, in series, or in parallel. No significant difference was found between US and CT. If the presence or absence of ascites, as detected with US, was used as a pretest probability of disease in our population, the positive predictive value increased to 94% if the tests were run in series, and the negative predictive value increased to 95% if the tests were run in parallel. The study showed that CT combined with US could be a valuable tool in evaluation of dogs with splenic masses. PMID:18320977

  12. Modified current follower-based immittance function simulators

    NASA Astrophysics Data System (ADS)

    Alpaslan, Halil; Yuce, Erkan

    2017-12-01

    In this paper, four immittance function simulators consisting of a single modified current follower with a single Z- terminal and a minimum number of passive components are proposed. The first proposed circuit can provide +L in parallel with +R and the second proposed one can realise -L in parallel with -R. The third proposed structure can provide +L in series with +R and the fourth proposed one can realise -L in series with -R. However, all the proposed immittance function simulators need a single resistive matching constraint. Parasitic impedance effects on all the proposed immittance function simulators are investigated. A second-order current-mode (CM) high-pass filter derived from the first proposed immittance function simulator is given as an application example. Also, a second-order CM low-pass filter derived from the third proposed immittance function simulator is given as an application example. A number of simulation results based on the SPICE program and an experimental test result are given to verify the theory.

  13. Parallel Aircraft Trajectory Optimization with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Gray, Justin S.; Naylor, Bret

    2016-01-01

    Trajectory optimization is an integral component of the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.

  14. Studies on complex π-π and T-stacking features of imidazole and phenyl/p-halophenyl units in series of 5-amino-1-(phenyl/p-halophenyl)imidazole-4-carboxamides and their carbonitrile derivatives: Role of halogens in tuning of conformation

    NASA Astrophysics Data System (ADS)

    Das, Aniruddha

    2017-11-01

    5-amino-1-(phenyl/p-halophenyl)imidazole-4-carboxamides (N-phenyl AICA) (2a-e) and 5-amino-1-(phenyl/p-halophenyl)imidazole-4-carbonitriles (N-phenyl AICN) (3a-e) had been synthesized. X-ray crystallographic studies of 2a-e and 3a-e had been performed to identify any distinct change in stacking patterns in their crystal lattice. Single crystal X-ray diffraction studies of 2a-e revealed π-π stack formations with both imidazole and phenyl/p-halophenyl units in anti and syn parallel-displaced (PD)-type dispositions. No π-π stacking of imidazole occurred when the halogen substituent is bromo or iodo; π-π stacking in these cases occurred involving phenyl rings only. The presence of an additional T-stacking had been observed in crystal lattices of 3a-e. Vertical π-π stacking distances in anti-parallel PD-type arrangements as well as T-stacking distances had shown stacking distances short enough to impart stabilization whereas syn-parallel stacking arrangements had got much larger π-π stacking distances to belie any syn-parallel stacking stabilization. DFT studies had been pursued for quantifying the π-π stacking and T-stacking stabilization. The plotted curves for anti-parallel and T-stacked moieties had similarities to the 'Morse potential energy curve for diatomic molecule'. The minima of the curves corresponded to the most stable stacking distances and related energy values indicated stacking stabilization. Similar DFT studies on syn-parallel systems of 2b corresponded to no π-π stacking stabilization at all. Halogen-halogen interactions had also been observed to stabilize the compounds 2d, 2e and 3d. Nano-structural behaviour of the series of compounds 2a-e and 3a-e were thoroughly investigated.

  15. Parallel-In-Time For Moving Meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Manteuffel, T. A.; Southworth, B.

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  16. Discontinuous Galerkin Finite Element Method for Parabolic Problems

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.

    2004-01-01

    In this paper, we develop a time and its corresponding spatial discretization scheme, based upon the assumption of a certain weak singularity of ||u_t(t)||_{L2(Ω)} = ||u_t||_2, for the discontinuous Galerkin finite element method for one-dimensional parabolic problems. Optimal convergence rates in both time and spatial variables are obtained. A discussion of an automatic time-step control method is also included.

  17. Enhancing PC Cluster-Based Parallel Branch-and-Bound Algorithms for the Graph Coloring Problem

    NASA Astrophysics Data System (ADS)

    Taoka, Satoshi; Takafuji, Daisuke; Watanabe, Toshimasa

    A branch-and-bound algorithm (BB for short) is the most general technique to deal with various combinatorial optimization problems. Even if it is used, computation time is likely to increase exponentially. So we consider its parallelization to reduce it. It has been reported that the computation time of a parallel BB heavily depends upon node-variable selection strategies. And, in case of a parallel BB, it is also necessary to prevent increase in communication time. So, it is important to pay attention to how many and what kind of nodes are to be transferred (called sending-node selection strategy). In this paper, for the graph coloring problem, we propose some sending-node selection strategies for a parallel BB algorithm by adopting MPI for parallelization and experimentally evaluate how these strategies affect computation time of a parallel BB on a PC cluster network.

  18. Embedded Implementation of VHR Satellite Image Segmentation

    PubMed Central

    Li, Chao; Balla-Arabé, Souleymane; Ginhac, Dominique; Yang, Fan

    2016-01-01

    Processing and analysis of Very High Resolution (VHR) satellite images provide a mass of crucial information, which can be used for urban planning, security issues or environmental monitoring. However, they are computationally expensive and, thus, time consuming, while some of the applications, such as natural disaster monitoring and prevention, require high efficiency performance. Fortunately, parallel computing techniques and embedded systems have made great progress in recent years, and a series of massively parallel image processing devices, such as digital signal processors or Field Programmable Gate Arrays (FPGAs), have been made available to engineers at a very convenient price and demonstrate significant advantages in terms of running-cost, embeddability, power consumption, flexibility, etc. In this work, we designed a texture region segmentation method for very high resolution satellite images by using the level set algorithm and the multi-kernel theory in a high-abstraction C environment and realized its register-transfer level implementation with the help of a new proposed high-level synthesis-based design flow. The evaluation experiments demonstrate that the proposed design can produce high quality image segmentation with a significant running-cost advantage. PMID:27240370

  19. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  20. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  1. Pinhole diffraction filter

    NASA Technical Reports Server (NTRS)

    Woodgate, B. E.

    1977-01-01

    A multistage diffraction filter consisting of a co-aligned series of pinholes on parallel sheets can be used as a nondegradable UV filter. The beam is attenuated as each pinhole diffracts radiation in a controlled manner into a divergent beam, and the following pinhole accepts only a small part of that beam.

  2. Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit

    NASA Astrophysics Data System (ADS)

    Tan, Jianbin

    2018-02-01

    For the engineering design of large-scale grid-connected photovoltaic power stations and the development of simulation and analysis systems, it is necessary to draw the operating characteristic curves of photovoltaic array elements by computer and to propose a suitable piecewise non-linear interpolation algorithm. In the calculation method, component performance parameters serve as the main design basis, from which the computer can derive five PV module performance quantities. Combined with the series and parallel connection of the PV array, the computer drawing of the performance curve of the PV array unit can then be realized, the specific data for the PV development software module can be calculated, and the practical operation of the PV array unit can be improved.
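
    The series/parallel scaling that turns a module curve into an array-unit curve can be sketched with simple interpolation; the module I-V points and the Ns x Np counts below are illustrative placeholders, not values from the paper.

      import numpy as np

      # Illustrative single-module I-V points (voltage in V, current in A).
      v_mod = np.array([0.0, 5.0, 10.0, 15.0, 18.0, 20.0, 21.5, 22.0])
      i_mod = np.array([8.0, 7.98, 7.95, 7.8, 7.2, 5.5, 2.0, 0.0])

      def array_curve(v_mod, i_mod, n_series, n_parallel, n_points=200):
          """Scale a module I-V curve to an array of n_series x n_parallel identical modules."""
          v = np.linspace(0.0, v_mod[-1] * n_series, n_points)    # array voltage axis
          i = np.interp(v / n_series, v_mod, i_mod) * n_parallel  # per-string voltage, summed currents
          return v, i

      v_arr, i_arr = array_curve(v_mod, i_mod, n_series=20, n_parallel=8)
      p_arr = v_arr * i_arr
      k = int(np.argmax(p_arr))
      print(f"array Voc ~ {v_arr[-1]:.0f} V, Isc ~ {i_arr[0]:.0f} A, Pmax ~ {p_arr[k] / 1000:.1f} kW")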

  3. Italy between drinking culture and control policies for alcoholic beverages.

    PubMed

    Allamani, Allaman; Voller, Fabio; Pepe, Pasquale; Baccini, Michela; Massini, Giulia; Cipriani, Francesco

    2014-10-01

    This paper examines which parallel sociodemographic and economic changes, and which alcohol control policies, were associated with the on-going dramatic decrease in alcohol consumption in Italy, especially of wine, during 1961-2008. The study, using both time series (TS) and artificial neural network (ANN)-based analyses, documents that the selected sociodemographic and economic factors, particularly urbanization, had a definite connection with the decrease in wine consumption, the decrease in spirits consumption, and the increase in beer consumption over time. On the other hand, control policies showed no effect on the decline in alcohol consumption, since no alcohol control policy existed in Italy between 1960 and 1987. A few policies introduced since 1988 (BAC limits and sale restrictions during mass events) may have contributed to reducing consumption or to maintaining the on-going reduction. Study limitations are noted and future needed research is suggested.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Godwin, Aaron

    The scope will be limited to analyzing the effect of the EFC within the system and how one improperly installed coupling affects the rest of the HPFL system. The discussion will include normal operations, impaired flow, and service interruptions. Normal operations are defined as two-way flow to buildings. Impaired operations are defined as a building having only one-way flow provided to it. Service interruptions occur when a building does not have water available to it. The project will look at the following aspects of the reliability of the HPFL system: mean time to failure (MTTF) of EFCs, mean time between failures (MTBF), series system models, and parallel system models. These calculations will then be used to discuss the reliability of the system when one of the couplings fails, and to compare the reliability of two-way feeds versus one-way feeds.
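
    The series and parallel system models named above reduce to simple reliability arithmetic, sketched below; the exponential failure model, mission time, MTBF, and coupling counts are illustrative assumptions, not HPFL data.

      import math

      def reliability(mtbf_hours, mission_hours):
          """Exponential failure model: R(t) = exp(-t / MTBF)."""
          return math.exp(-mission_hours / mtbf_hours)

      def series_system(rel_list):
          """Series model: every component must work, R = product(Ri)."""
          r = 1.0
          for ri in rel_list:
              r *= ri
          return r

      def parallel_system(rel_list):
          """Parallel (redundant) model: fails only if all paths fail, R = 1 - product(1 - Ri)."""
          q = 1.0
          for ri in rel_list:
              q *= (1.0 - ri)
          return 1.0 - q

      # Illustrative numbers: 10 couplings per feed, a 1-year mission, MTBF = 200,000 h per coupling.
      r_coupling = reliability(200_000, 8_760)
      one_way = series_system([r_coupling] * 10)    # single feed: couplings in series
      two_way = parallel_system([one_way] * 2)      # two redundant feeds in parallel
      print(f"coupling R = {r_coupling:.4f}, one-way feed R = {one_way:.4f}, two-way R = {two_way:.6f}")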

  5. Bimanual coordination of bowing and fingering in violinists--effects of position changes and string changes.

    PubMed

    Kazennikov, Oleg; Wiesendanger, Mario

    2009-07-01

    Music performance is based on demanding motor control, with much practice from a young age onward. We have chosen to investigate basic bimanual movements played by violin amateurs and professionals. We posed the question of whether position changes and string changes, two frequent mechanisms, may influence the time interval of the bowing (right hand)-fingering (left hand) coordination. The objective was to measure bimanual coordination with or without position changes and string changes. The tendency was that the bimanual coordination interval was statistically only slightly increased or even unchanged, and not perceptibly so. We conclude that the coordination interval is limited to about 100 ms, without any erroneous perception. Although the mentioned position changes and string changes are movements with their own timing, they are executed in parallel rather than in series with the bow-fingering coordination.

  6. Internal combustion engine control for series hybrid electric vehicles by parallel and distributed genetic programming/multiobjective genetic algorithms

    NASA Astrophysics Data System (ADS)

    Gladwin, D.; Stewart, P.; Stewart, J.

    2011-02-01

    This article addresses the problem of maintaining a stable rectified DC output from the three-phase AC generator in a series-hybrid vehicle powertrain. The series-hybrid prime power source generally comprises an internal combustion (IC) engine driving a three-phase permanent magnet generator whose output is rectified to DC. A recent development has been to control the engine/generator combination by an electronically actuated throttle. This system can be represented as a nonlinear system with significant time delay. Previously, voltage control of the generator output has been achieved by model predictive methods such as the Smith Predictor. These methods rely on the incorporation of an accurate system model and time delay into the control algorithm, with a consequent increase in computational complexity in the real-time controller, and of necessity rely to some extent on the accuracy of the models. Two complementary performance objectives exist for the control system. Firstly, to maintain the IC engine at its optimal operating point, and secondly, to supply a stable DC supply to the traction drive inverters. Achievement of these goals minimises the transient energy storage requirements at the DC link, with a consequent reduction in both weight and cost. These objectives imply constant velocity operation of the IC engine under external load disturbances and changes in both operating conditions and vehicle speed set-points. In order to achieve these objectives, and reduce the complexity of implementation, in this article a controller is designed by the use of Genetic Programming methods in the Simulink modelling environment, with the aim of obtaining a relatively simple controller for the time-delay system which does not rely on the implementation of real time system models or time delay approximations in the controller. A methodology is presented to utilise the myriad of existing control blocks in the Simulink libraries to automatically evolve optimal control structures.

  7. Low frequency noise as a control test for space solar panels

    NASA Astrophysics Data System (ADS)

    Orsal, B.; Alabedra, R.; Ruas, R.

    1986-07-01

    The present study of low frequency noise in a forward-biased dark solar cell, in order to develop an NDE test method for solar panels, notes that a single cell with a given defect is thus detectable under dark conditions. The test subject was a space solar panel consisting of five cells in parallel and five in series; these cells are of the n(+)-p monocrystalline Si junction type. It is demonstrated that the noise associated with the defective cell is 10-15 times higher than that of a good cell. Replacement of a good cell by a defective one leads to a 30-percent increase in the noise level of the panel as a whole.

  8. Parallel Computation and Visualization of Three-dimensional, Time-dependent, Thermal Convective Flows

    NASA Technical Reports Server (NTRS)

    Wang, P.; Li, P.

    1998-01-01

    A high-resolution numerical study on parallel systems is reported on three-dimensional, time-dependent, thermal convective flows. A parallel implementation of the finite volume method with a multigrid scheme is discussed, and a parallel visualization system is developed on distributed systems for visualizing the flow.

  9. Scheme for rapid adjustment of network impedance

    DOEpatents

    Vithayathil, John J.

    1991-01-01

    A static controlled reactance device is inserted in series with an AC electric power transmission line to adjust its transfer impedance. An inductor (reactor) is serially connected with two back-to-back connected thyristors which control the conduction period and hence the effective reactance of the inductor. Additional reactive elements are provided in parallel with the thyristor controlled reactor to filter harmonics and to obtain the required range of variable reactance. Alternatively, the static controlled reactance device discussed above may be connected to the secondary winding of a series transformer having its primary winding connected in series to the transmission line. In a three phase transmission system, the controlled reactance device may be connected in delta configuration on the secondary side of the series transformer to eliminate triplen harmonics.

  10. Superconducting coil system and methods of assembling the same

    DOEpatents

    Rajput-Ghoshal, Renuka; Rochford, James H.; Ghoshal, Probir K.

    2016-01-19

    A superconducting magnet apparatus is provided. The superconducting magnet apparatus includes a power source configured to generate a current; a first switch coupled in parallel to the power source; a second switch coupled in series to the power source; a coil coupled in parallel to the first switch and the second switch; and a passive quench protection device coupled to the coil and configured to by-pass the current around the coil and to decouple the coil from the power source when the coil experiences a quench.

  11. ASDTIC control and standardized interface circuits applied to buck, parallel and buck-boost dc to dc power converters

    NASA Technical Reports Server (NTRS)

    Schoenfeld, A. D.; Yu, Y.

    1973-01-01

    Versatile standardized pulse modulation nondissipatively regulated control signal processing circuits were applied to the three most commonly used dc to dc power converter configurations: (1) the series switching buck-regulator, (2) the pulse modulated parallel inverter, and (3) the buck-boost converter. The unique control concept and the commonality of control functions for all switching regulators have resulted in improved static and dynamic performance and control circuit standardization. New power-circuit technology was also applied to enhance reliability and to achieve optimum weight and efficiency.

  12. Studies on the π-π stacking features of imidazole units present in a series of 5-amino-1-alkylimidazole-4-carboxamides

    NASA Astrophysics Data System (ADS)

    Ray, Sibdas; Das, Aniruddha

    2015-06-01

    Reaction of 2-ethoxymethyleneamino-2-cyanoacetamide with primary alkyl amines in acetonitrile affords 1-substituted-5-aminoimidazole-4-carboxamides. Single-crystal X-ray diffraction studies of these imidazole compounds show both anti-parallel and syn-parallel π-π stacking between two imidazole units in parallel-displaced (PD) conformations; the distance between two π-π stacked imidazole units depends mainly on the anti-/syn-parallel nature and to some extent on the alkyl group attached to N-1 of the imidazole. Molecules with anti-parallel PD-stacking arrangements of the imidazole units have vertical π-π stacking distances short enough to impart stabilization, whereas imidazole units with syn-parallel stacking arrangements have much larger π-π stacking distances. DFT studies on a pair of anti-parallel imidazole units of such an AICA yield curves of π-π stacking stabilization energy versus π-π stacking distance that resemble the Morse potential energy diagram for a diatomic molecule, which makes it possible to find a minimum π-π stacking distance corresponding to the maximum stacking stabilization energy between the pair of imidazole units. In contrast, the DFT-based curve of stabilization energy versus stacking distance for a pair of syn-parallel imidazole units is shown to have an exponential nature.

  13. Drop pattern resulting from the breakup of a bidimensional grid of liquid filaments

    NASA Astrophysics Data System (ADS)

    Cuellar, Ingrith; Ravazzoli, Pablo D.; Diez, Javier A.; González, Alejandro G.

    2017-10-01

    A rectangular grid formed by liquid filaments on a partially wetting substrate evolves in a series of breakups leading to arrays of drops with different shapes distributed in a rather regular bidimensional pattern. Our study is focused on the configuration produced when two long parallel filaments of silicone oil, which are placed upon a glass substrate previously coated with a fluorinated solution, are crossed perpendicularly by another pair of long parallel filaments. A remarkable feature of this kind of grids is that there are two qualitatively different types of drops. While one set is formed at the crossing points, the rest are consequence of the breakup of shorter filaments formed between the crossings. Here, we analyze the main geometric features of all types of drops, such as shape of the footprint and contact angle distribution along the drop periphery. The formation of a series of short filaments with similar geometric and physical properties allows us to have simultaneously quasi identical experiments to study the subsequent breakups. We develop a simple hydrodynamic model to predict the number of drops that results from a filament of given initial length and width. This model is able to yield the length intervals corresponding to a small number of drops, and its predictions are successfully compared with the experimental data as well as with numerical simulations of the full Navier-Stokes equation that provide a detailed time evolution of the dewetting motion of the filament till the breakup into drops. Finally, the prediction for finite filaments is contrasted with the existing theories for infinite ones.

  14. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, for 3D data migration, the resource requirements of the algorithm grow with the data size. This challenges its practical implementation even on current-generation high performance computing systems. Therefore a smart parallelization approach is essential to handle 3D data for migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements such as memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for post- and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations, which depends upon the available node memory and the size of the data to be migrated at runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM series supercomputer. Optimization, performance and scalability experiment results along with the migration outcome show the effectiveness of the parallel algorithm.
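
    The flexi-depth idea, as described, amounts to splitting the imaging volume along depth into as many iterations as are needed to fit the working set into node memory. The small Python sketch below shows one way to plan such a split; the function name, the safety factor and the simple bytes-per-depth-slab memory model are assumptions for illustration, not the authors' implementation.

        def plan_depth_iterations(total_depth_samples, bytes_per_depth_slab,
                                  node_memory_bytes, safety=0.8):
            """Choose how many depth iterations are needed so that each
            iteration's slab of the imaging volume fits in node memory."""
            usable = safety * node_memory_bytes
            slabs_per_iter = max(1, int(usable // bytes_per_depth_slab))
            n_iters = -(-total_depth_samples // slabs_per_iter)   # ceiling division
            return n_iters, slabs_per_iter

        # Hypothetical volume: 2000 depth samples, 0.5 GB per depth slab, 64 GB nodes
        print(plan_depth_iterations(2000, 0.5 * 2**30, 64 * 2**30))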

  15. [CMACPAR: a modified parallel neuro-controller for control processes].

    PubMed

    Ramos, E; Surós, R

    1999-01-01

    CMACPAR is a parallel neurocontroller oriented to real-time systems such as control processes. Its main characteristics are a fast learning algorithm, a reduced number of calculations, great generalization capacity, local learning and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the Cerebellar Model CMAC for the n-dimensional space projection using a medium-granularity parallel neurocontroller. The proposed memory management allows for a significant reduction in training time and required memory size.

  16. A real-time MPEG software decoder using a portable message-passing library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan

    1995-12-31

    We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4 and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easily ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment under which it runs. Several technical issues are discussed, including balancing of decoding speed, memory limitations, I/O capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible on a general-purpose parallel machine.

  17. A GPU-accelerated implicit meshless method for compressible flows

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng

    2018-05-01

    This paper develops a recently proposed GPU based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions destabilizing numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions to apply boundary conditions, calculate time steps, evaluate residuals as well as advance and update the solution in the temporal space. A series of two- and three-dimensional test cases including compressible flows over single- and multi-element airfoils and a M6 wing are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis on the performance of the developed code reveals that the developed CPU based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
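
    The rainbow coloring step described above is essentially a graph coloring of the point cloud: points that share a color have no neighbors in common and can therefore be updated concurrently without thread races. A minimal greedy coloring sketch in Python is shown below; it illustrates the idea only and is not the generic coloring method used in the paper.

        def greedy_coloring(adjacency):
            """Assign each point the smallest color not used by an already-colored
            neighbor; points of one color can then be processed in parallel,
            color by color."""
            colors = {}
            for node in adjacency:
                used = {colors[n] for n in adjacency[node] if n in colors}
                c = 0
                while c in used:
                    c += 1
                colors[node] = c
            return colors

        # Toy 1D chain of 4 points where consecutive points are neighbors
        adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        print(greedy_coloring(adjacency))   # e.g. {0: 0, 1: 1, 2: 0, 3: 1}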

  18. Deferred discrimination algorithm (nibbling) for target filter management

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John; Johnson, John L.

    1999-07-01

    A new method of classifying objects is presented. Rather than trying to form the classifier in one step or in one training algorithm, it is done in a series of small steps, or nibbles. This leads to an efficient and versatile system that is trained in series with single one-shot examples but applied in parallel, is implemented with single layer perceptrons, yet maintains its fully sequential hierarchical structure. Based on the nibbling algorithm, a basic new method of target reference filter management is described.

  19. Active energy recovery clamping circuit to improve the performance of power converters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, Bret; Barkley, Adam

    2017-05-09

    A regenerative clamping circuit for a power converter using clamping diodes to transfer charge to a clamping capacitor and a regenerative converter to transfer charge out of the clamping capacitor back to the power supply input connection. The regenerative converter uses a switch connected to the midpoint of a series connected inductor and capacitor. The ends of the inductor and capacitor series are connected across the terminals of the power supply to be in parallel with the power supply.

  20. Workshop on Solid State Switches for Pulsed Power, held January 12-14, 1983 at Tamarron, Colorado

    DTIC Science & Technology

    1983-05-31

    of its anticipated scalability. However, the projected performance of other types of discrete switches made their continued exploration and... linking of asynchronous AC power grids. Some present installations and projected increases are shown in Table 2. A new commercial power application... Average Power 62.5 kW, 160 kW; Device RBDT (RSR), T60R SCR, 2N3873; Array 6 Series, 10 Parallel-20 Series. Table 18. Applications of solid state pulse

  1. LightForce Photon-Pressure Collision Avoidance: Updated Efficiency Analysis Utilizing a Highly Parallel Simulation Approach

    NASA Technical Reports Server (NTRS)

    Stupl, Jan; Faber, Nicolas; Foster, Cyrus; Yang, Fan Yang; Nelson, Bron; Aziz, Jonathan; Nuttall, Andrew; Henze, Chris; Levit, Creon

    2014-01-01

    This paper provides an updated efficiency analysis of the LightForce space debris collision avoidance scheme. LightForce aims to prevent collisions on warning by utilizing photon pressure from ground based, commercial off the shelf lasers. Past research has shown that a few ground-based systems consisting of 10 kilowatt class lasers directed by 1.5 meter telescopes with adaptive optics could lower the expected number of collisions in Low Earth Orbit (LEO) by an order of magnitude. Our simulation approach utilizes the entire Two Line Element (TLE) catalogue in LEO for a given day as initial input. Least-squares fitting of a TLE time series is used for an improved orbit estimate. We then calculate the probability of collision for all LEO objects in the catalogue for a time step of the simulation. The conjunctions that exceed a threshold probability of collision are then engaged by a simulated network of laser ground stations. After those engagements, the perturbed orbits are used to re-assess the probability of collision and evaluate the efficiency of the system. This paper describes new simulations with three updated aspects: 1) By utilizing a highly parallel simulation approach employing hundreds of processors, we have extended our analysis to a much broader dataset. The simulation time is extended to one year. 2) We analyze not only the efficiency of LightForce on conjunctions that naturally occur, but also take into account conjunctions caused by orbit perturbations due to LightForce engagements. 3) We use a new simulation approach that is regularly updating the LightForce engagement strategy, as it would be during actual operations. In this paper we present our simulation approach to parallelize the efficiency analysis, its computational performance and the resulting expected efficiency of the LightForce collision avoidance system. Results indicate that utilizing a network of four LightForce stations with 20 kilowatt lasers, 85% of all conjunctions with a probability of collision Pc > 10^-6 can be mitigated.

  2. Detecting leaf pulvinar movements on NDVI time series of desert trees: a new approach for water stress detection.

    PubMed

    Chávez, Roberto O; Clevers, Jan G P W; Verbesselt, Jan; Naulin, Paulette I; Herold, Martin

    2014-01-01

    Heliotropic leaf movement or leaf 'solar tracking' occurs for a wide variety of plants, including many desert species and some crops. This has an important effect on the canopy spectral reflectance as measured from satellites. For this reason, monitoring systems based on spectral vegetation indices, such as the normalized difference vegetation index (NDVI), should account for heliotropic movements when evaluating the health condition of such species. In the hyper-arid Atacama Desert, Northern Chile, we studied seasonal and diurnal variations of MODIS and Landsat NDVI time series of plantation stands of the endemic species Prosopis tamarugo Phil., subject to different levels of groundwater depletion. As solar irradiation increased during the day and also during the summer, the paraheliotropic leaves of Tamarugo moved to an erectophile position (parallel to the sun rays) making the NDVI signal to drop. This way, Tamarugo stands with no water stress showed a positive NDVI difference between morning and midday (ΔNDVI mo-mi) and between winter and summer (ΔNDVI W-S). In this paper, we showed that the ΔNDVI mo-mi of Tamarugo stands can be detected using MODIS Terra and Aqua images, and the ΔNDVI W-S using Landsat or MODIS Terra images. Because pulvinar movement is triggered by changes in cell turgor, the effects of water stress caused by groundwater depletion can be assessed and monitored using ΔNDVI mo-mi and ΔNDVI W-S. For an 11-year time series without rainfall events, Landsat ΔNDVI W-S of Tamarugo stands showed a positive linear relationship with cumulative groundwater depletion. We conclude that both ΔNDVI mo-mi and ΔNDVI W-S have potential to detect early water stress of paraheliotropic vegetation.

  3. Detecting Leaf Pulvinar Movements on NDVI Time Series of Desert Trees: A New Approach for Water Stress Detection

    PubMed Central

    Chávez, Roberto O.; Clevers, Jan G. P. W.; Verbesselt, Jan; Naulin, Paulette I.; Herold, Martin

    2014-01-01

    Heliotropic leaf movement or leaf ‘solar tracking’ occurs for a wide variety of plants, including many desert species and some crops. This has an important effect on the canopy spectral reflectance as measured from satellites. For this reason, monitoring systems based on spectral vegetation indices, such as the normalized difference vegetation index (NDVI), should account for heliotropic movements when evaluating the health condition of such species. In the hyper-arid Atacama Desert, Northern Chile, we studied seasonal and diurnal variations of MODIS and Landsat NDVI time series of plantation stands of the endemic species Prosopis tamarugo Phil., subject to different levels of groundwater depletion. As solar irradiation increased during the day and also during the summer, the paraheliotropic leaves of Tamarugo moved to an erectophile position (parallel to the sun rays) making the NDVI signal to drop. This way, Tamarugo stands with no water stress showed a positive NDVI difference between morning and midday (ΔNDVImo-mi) and between winter and summer (ΔNDVIW-S). In this paper, we showed that the ΔNDVImo-mi of Tamarugo stands can be detected using MODIS Terra and Aqua images, and the ΔNDVIW-S using Landsat or MODIS Terra images. Because pulvinar movement is triggered by changes in cell turgor, the effects of water stress caused by groundwater depletion can be assessed and monitored using ΔNDVImo-mi and ΔNDVIW-S. For an 11-year time series without rainfall events, Landsat ΔNDVIW-S of Tamarugo stands showed a positive linear relationship with cumulative groundwater depletion. We conclude that both ΔNDVImo-mi and ΔNDVIW-S have potential to detect early water stress of paraheliotropic vegetation. PMID:25188305
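
    The two stress indicators used in these studies are simple NDVI differences between a reference acquisition and one in which pulvinar movement depresses the signal (morning versus midday, or winter versus summer). A minimal Python sketch of that calculation is given below with made-up reflectance values; the numbers and overpass assignments are illustrative only.

        def ndvi(nir, red):
            """Normalized difference vegetation index."""
            return (nir - red) / (nir + red)

        # Hypothetical surface reflectances for one Tamarugo stand
        ndvi_morning = ndvi(nir=0.42, red=0.08)   # e.g. MODIS Terra overpass
        ndvi_midday = ndvi(nir=0.38, red=0.09)    # e.g. MODIS Aqua overpass

        delta_ndvi_mo_mi = ndvi_morning - ndvi_midday
        print(delta_ndvi_mo_mi)   # positive for unstressed, sun-tracking canopies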

  4. Fuel cell generator

    DOEpatents

    Isenberg, Arnold O.

    1983-01-01

    High temperature solid oxide electrolyte fuel cell generators which allow controlled leakage among plural chambers in a sealed housing. Depleted oxidant and fuel are directly reacted in one chamber to combust remaining fuel and preheat incoming reactants. The cells are preferably electrically arranged in a series-parallel configuration.

  5. Get the LED Out.

    ERIC Educational Resources Information Center

    Jewett, John W., Jr.

    1991-01-01

    Describes science demonstrations with light-emitting diodes that include electrical concepts of resistance, direct and alternating current, sine wave versus square wave, series and parallel circuits, and Faraday's Law; optics concepts of real and virtual images, photoresistance, and optical communication; and modern physics concepts of spectral…

  6. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated in the form of a constrained optimization problem. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process where one starts with an initial design; a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous, linear equations plays a key role since it is needed for either static, or eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both parallel and vector capabilities offered by modern, high performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest development in the parallel-vector equation solver, PVSOLVE, into a widely popular finite-element production code, such as the SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.

  7. SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeff S.

    1992-01-01

    Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.

  8. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
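
    At its core, the inverse differential kinematic problem is a linear solve: given the manipulator Jacobian J(q) and a desired end-effector twist x_dot, find joint rates q_dot with J q_dot = x_dot. The Python sketch below shows that solve for an arbitrary 6x6 Jacobian; the matrix values are random placeholders, and this serial code does not reflect the systolic-array implementation described in the paper.

        import numpy as np

        def joint_rates(jacobian, end_effector_velocity):
            """Solve J(q) q_dot = x_dot; a least-squares solve also copes with
            nearly singular configurations."""
            q_dot, *_ = np.linalg.lstsq(jacobian, end_effector_velocity, rcond=None)
            return q_dot

        # Hypothetical 6x6 Jacobian for some arm configuration
        rng = np.random.default_rng(0)
        J = np.eye(6) + 0.1 * rng.standard_normal((6, 6))
        x_dot = np.array([0.05, 0.0, -0.02, 0.0, 0.0, 0.1])   # m/s and rad/s
        print(joint_rates(J, x_dot))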

  9. A model for optimizing file access patterns using spatio-temporal parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boonthanome, Nouanesengsy; Patchett, John; Geveci, Berk

    2013-01-01

    For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.

  10. Time-dependent density-functional theory in massively parallel computer architectures: the octopus project

    NASA Astrophysics Data System (ADS)

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A.; Oliveira, Micael J. T.; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G.; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A. L.

    2012-06-01

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  11. Time-dependent density-functional theory in massively parallel computer architectures: the OCTOPUS project.

    PubMed

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A L

    2012-06-13

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  12. Full Wave Analysis of RF Signal Attenuation in a Lossy Rough Surface Cave using a High Order Time Domain Vector Finite Element Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pingenot, J; Rieben, R; White, D

    2005-10-31

    We present a computational study of signal propagation and attenuation of a 200 MHz planar loop antenna in a cave environment. The cave is modeled as a straight and lossy random rough wall. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The numerical technique is first verified against theoretical results for a planar loop antenna in a smooth lossy cave. The simulation is then performed for a series of random rough surface meshes in order to generate statistical data for the propagation and attenuation properties of the antenna in a cave environment. Results for the mean and variance of the power spectral density of the electric field are presented and discussed.

  13. SparkClouds: visualizing trends in tag clouds.

    PubMed

    Lee, Bongshin; Riche, Nathalie Henry; Karlson, Amy K; Carpendale, Sheelagh

    2010-01-01

    Tag clouds have proliferated over the web over the last decade. They provide a visual summary of a collection of texts by depicting tag frequency with font size. In use, tag clouds can evolve as the associated data source changes over time. Interesting discussions around tag clouds often include a series of tag clouds and consider how they evolve over time. However, since tag clouds do not explicitly represent trends or support comparisons, the cognitive demands placed on the viewer for perceiving trends across multiple tag clouds are high. In this paper, we introduce SparkClouds, which integrate sparklines into a tag cloud to convey trends between multiple tag clouds. We present results from a controlled study that compares SparkClouds with two traditional trend visualizations (multiple line graphs and stacked bar charts) as well as Parallel Tag Clouds. Results show that SparkClouds' ability to show trends compares favourably to the alternative visualizations.

  14. Steinberg ``AUDIOMAPS'' Music Appreciation-Via-Understanding: Special-Relativity + Expectations ``Quantum-Theory'': a Quantum-ACOUSTO/MUSICO-Dynamics (QA/MD)

    NASA Astrophysics Data System (ADS)

    Fender, Lee; Steinberg, Russell; Siegel, Edward Carl-Ludwig

    2011-03-01

    Steinberg wildly popular "AUDIOMAPS" music enjoyment/appreciation-via-understanding methodology, versus art, music-dynamics evolves, telling a story in (3+1)-dimensions: trails, frames, timbres, + dynamics amplitude vs. music-score time-series (formal-inverse power-spectrum) surprisingly closely parallels (3+1)-dimensional Einstein(1905) special-relativity "+" (with its enjoyment-expectations) a manifestation of quantum-theory expectation-values, together a music quantum-ACOUSTO/MUSICO-dynamics(QA/MD). Analysis via Derrida deconstruction enabled Siegel-Baez "Category-Semantics" "FUZZYICS"="CATEGORYICS ('TRIZ") Aristotle SoO DEduction , irrespective of Boon-Klimontovich vs. Voss-Clark[PRL(77)] music power-spectrum analysis sampling-time/duration controversy: part versus whole, shows QA/MD reigns supreme as THE music appreciation-via-analysis tool for the listener in musicology!!! Connection to Deutsch-Hartmann-Levitin[This is Your Brain on Music, (06)] brain/mind-barrier brain/mind-music connection is subtle/compelling/immediate!!!

  15. Electromigration model for the prediction of lifetime based on the failure unit statistics in aluminum metallization

    NASA Astrophysics Data System (ADS)

    Park, Jong Ho; Ahn, Byung Tae

    2003-01-01

    A failure model for electromigration based on the "failure unit model" is presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, which earlier treatments could describe only qualitatively. In our model, the probability functions of the failure unit in both single-grain segments and polygrain segments are considered, instead of in polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
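
    One common way to read a series-parallel failure-unit description is as a weakest-link chain of segments (series) in which each segment survives until all of its redundant units have failed (parallel). The Monte Carlo sketch below illustrates that reading with lognormal unit lifetimes; the combination rule, the lognormal parameters and all numbers are assumptions for illustration, not the statistics derived in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def line_ttf(n_segments, n_parallel, median=100.0, sigma=0.5, samples=10000):
            """Sample line lifetimes: units within a segment fail in parallel (the
            segment lives until its last unit fails); segments are in series (the
            line fails with its first segment)."""
            unit_ttf = rng.lognormal(np.log(median), sigma,
                                     size=(samples, n_segments, n_parallel))
            segment_ttf = unit_ttf.max(axis=2)
            return segment_ttf.min(axis=1)

        ttf = line_ttf(n_segments=50, n_parallel=3)
        print(np.median(ttf), ttf.std())   # MTTF estimate and spread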

  16. GIAnT - Generic InSAR Analysis Toolbox

    NASA Astrophysics Data System (ADS)

    Agram, P.; Jolivet, R.; Riel, B. V.; Simons, M.; Doin, M.; Lasserre, C.; Hetland, E. A.

    2012-12-01

    We present a computing framework for studying the spatio-temporal evolution of ground deformation from interferometric synthetic aperture radar (InSAR) data. Several open-source tools including Repeat Orbit Interferometry PACkage (ROI-PAC) and InSAR Scientific Computing Environment (ISCE) from NASA-JPL, and Delft Object-oriented Repeat Interferometric Software (DORIS), have enabled scientists to generate individual interferograms from raw radar data with relative ease. Numerous computational techniques and algorithms that reduce phase information from multiple interferograms to a deformation time-series have been developed and verified over the past decade. However, the sharing and direct comparison of products from multiple processing approaches has been hindered by - 1) absence of simple standards for sharing of estimated time-series products, 2) use of proprietary software tools with license restrictions and 3) the closed source nature of the exact implementation of many of these algorithms. We have developed this computing framework to address all of the above issues. We attempt to take the first steps towards creating a community software repository for InSAR time-series analysis. To date, we have implemented the short baseline subset algorithm (SBAS), NSBAS and multi-scale interferometric time-series (MInTS) in this framework and the associated source code is included in the GIAnT distribution. A number of the associated routines have been optimized for performance and scalability with large data sets. Some of the new features in our processing framework are - 1) the use of daily solutions from continuous GPS stations to correct for orbit errors, 2) the use of meteorological data sets to estimate the tropospheric delay screen and 3) a data-driven bootstrapping approach to estimate the uncertainties associated with estimated time-series products. We are currently working on incorporating tidal load corrections for individual interferograms and propagation of noise covariance models through the processing chain for robust estimation of uncertainties in the deformation estimates. We will demonstrate the ease of use of our framework with results ranging from regional scale analysis around Long Valley, CA and Parkfield, CA to continental scale analysis in Western South America. We will also present preliminary results from a new time-series approach that simultaneously estimates deformation over the complete spatial domain at all time epochs on a distributed computing platform. GIAnT has been developed entirely using open source tools and uses Python as the underlying platform. We build on the extensive numerical (NumPy) and scientific (SciPy) computing Python libraries to develop an object-oriented, flexible and modular framework for time-series InSAR applications. The toolbox is currently configured to work with outputs from ROI-PAC, ISCE and DORIS, but can easily be extended to support products from other SAR/InSAR processors. The toolbox libraries include support for hierarchical data format (HDF5) memory mapped files, parallel processing with Python's multi-processing module and support for many convex optimization solvers like CSDP, CVXOPT etc. An extensive set of routines to deal with ASCII and XML files has also been included for controlling the processing parameters.

  17. Fifty Views of Cooperative Education.

    ERIC Educational Resources Information Center

    Hunt, Donald C.

    A series of opinions on many facets of the administration of cooperative education programming is presented. Part One reviews the philosophy of cooperative education including Lawrence Canjar's "convert" speech, a comparison of experiential and cooperative education, and discussions of parallel programs. In Part Two employers discuss cooperative…

  18. Effect of the depth base along the vertical on the electrical parameters of a vertical parallel silicon solar cell in open and short circuit

    NASA Astrophysics Data System (ADS)

    Sahin, Gokhan; Kerimli, Genber

    2018-03-01

    This article presents a modeling study of the effect of the base depth on the photovoltaic conversion efficiency of a vertical parallel silicon solar cell. After solving the continuity equation for excess minority carriers, we calculated electrical parameters such as the photocurrent density, the photovoltage, the series and shunt resistances, the diffusion capacitance, the electric power, the fill factor and the photovoltaic conversion efficiency. We determined the maximum electric power, the operating point of the solar cell and the photovoltaic conversion efficiency as functions of the depth z in the base. We showed that the photocurrent density decreases with the depth z, and that the photovoltage decreases as the base depth increases. The series and shunt resistances were deduced from the electrical model and are influenced by the base depth. The diffusion capacitance decreases with the depth z of the base. We thus studied the influence of variations of the depth z on the electrical parameters of the base.

  19. On the Takayanagi principle for the shape memory effect and thermomechanical behaviors in polymers with multi-phases

    NASA Astrophysics Data System (ADS)

    Lu, Haibao; Yu, Kai; Huang, Wei Min; Leng, Jinsong

    2016-12-01

    We present an explicit model to study the mechanics and physics of the shape memory effect (SME) in polymers based on the Takayanagi principle. The molecular structural characteristics and elastic behavior of shape memory polymers (SMPs) with multi-phases are investigated in terms of the thermomechanical properties of the individual components, of which the contributions are combined by using Takayanagi’s series-parallel model and parallel-series model, respectively. After that, Boltzmann superposition principle is employed to couple the multi-SME, elastic modulus parameter (E) and temperature parameter (T) in SMPs. Furthermore, the extended Takayanagi model is proposed to separate the plasticizing effect and physical swelling effect on the thermo-/chemo-responsive SME in polymers and then compared with the available experimental data reported in the literature. This study is expected to provide a powerful simulation tool for modeling and experimental substantiation of the mechanics and working mechanism of SME in polymers.

  20. Magnetic-Flux-Compensated Voltage Divider

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.

    2005-01-01

    A magnetic-flux-compensated voltage-divider circuit has been proposed for use in measuring the true potential across a component that is exposed to large, rapidly varying electric currents like those produced by lightning strikes. An example of such a component is a lightning arrester, which is typically exposed to currents of the order of tens of kiloamperes, having rise times of the order of hundreds of nanoseconds. Traditional voltage-divider circuits are not designed for magnetic-flux-compensation: They contain uncompensated loops having areas large enough that the transient magnetic fluxes associated with large transient currents induce spurious voltages large enough to distort voltage-divider outputs significantly. A drawing of the proposed circuit was not available at the time of receipt of information for this article. What is known from a summary textual description is that the proposed circuit would contain a total of four voltage dividers: There would be two mixed dividers in parallel with each other and with the component of interest (e.g., a lightning arrester), plus two mixed dividers in parallel with each other and in series with the component of interest in the same plane. The electrical and geometric configuration would provide compensation for induced voltages, including those attributable to asymmetry in the volumetric density of the lightning or other transient current, canceling out the spurious voltages and measuring the true voltage across the component.

  1. Fast 2D flood modelling using GPU technology - recent applications and new developments

    NASA Astrophysics Data System (ADS)

    Crossley, Amanda; Lamb, Rob; Waller, Simon; Dunning, Paul

    2010-05-01

    In recent years there has been considerable interest amongst scientists and engineers in exploiting the potential of commodity graphics hardware for desktop parallel computing. The Graphics Processing Units (GPUs) that are used in PC graphics cards have now evolved into powerful parallel co-processors that can be used to accelerate the numerical codes used for floodplain inundation modelling. We report in this paper on experience over the past two years in developing and applying two dimensional (2D) flood inundation models using GPUs to achieve significant practical performance benefits. Starting with a solution scheme for the 2D diffusion wave approximation to the 2D Shallow Water Equations (SWEs), we have demonstrated the capability to reduce model run times in ‘real-world' applications using GPU hardware and programming techniques. We then present results from a GPU-based 2D finite volume SWE solver. A series of numerical test cases demonstrate that the model produces outputs that are accurate and consistent with reference results published elsewhere. In comparisons conducted for a real world test case, the GPU-based SWE model was over 100 times faster than the CPU version. We conclude with some discussion of practical experience in using the GPU technology for flood mapping applications, and for research projects investigating use of Monte Carlo simulation methods for the analysis of uncertainty in 2D flood modelling.

  2. Parallel Fortran-MPI software for numerical inversion of the Laplace transform and its application to oscillatory water levels in groundwater environments

    USGS Publications Warehouse

    Zhan, X.

    2005-01-01

    A parallel Fortran-MPI (Message Passing Interface) software package for numerical inversion of the Laplace transform based on a Fourier series method is developed to meet the need of solving computationally intensive problems involving oscillatory water-level responses to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementing the MPI techniques on a distributed-memory architecture speeds up the processing and improves efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to solve complicated environmental problems involving differential and integral equations. The package is free and is easy to use for people with little or no previous experience in using MPI but who wish to get off to a quick start in parallel computing. © 2004 Elsevier Ltd. All rights reserved.
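
    The Fourier series approach referred to here approximates the Bromwich inversion integral by a trapezoidal sum along a vertical line in the complex plane. The short, serial Python sketch below implements that basic unaccelerated sum; the damping parameter choice and term count are ad-hoc assumptions, and the actual TOMS Algorithm 796 adds convergence acceleration and, in this package, MPI parallelism.

        import numpy as np

        def invert_laplace_fourier(F, t, n_terms=2000, scale=2.0):
            """Approximate f(t) from its Laplace transform F(s) with the
            Fourier-series (trapezoidal Bromwich) method."""
            T = scale * t                      # half-period; must exceed t
            a = 8.0 / (2.0 * T)                # crude damping choice, exp(-8) aliasing error
            total = 0.5 * F(complex(a, 0.0)).real
            for k in range(1, n_terms + 1):
                s = complex(a, k * np.pi / T)
                total += (F(s) * np.exp(1j * k * np.pi * t / T)).real
            return np.exp(a * t) / T * total

        # Check with F(s) = 1/(s + 1), whose inverse is exp(-t)
        print(invert_laplace_fourier(lambda s: 1.0 / (s + 1.0), t=1.0))   # ~0.368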

  3. Modelling and experimental evaluation of parallel connected lithium ion cells for an electric vehicle battery system

    NASA Astrophysics Data System (ADS)

    Bruen, Thomas; Marco, James

    2016-04-01

    Variations in cell properties are unavoidable and can be caused by manufacturing tolerances and usage conditions. As a result of this, cells connected in series may have different voltages and states of charge that limit the energy and power capability of the complete battery pack. Methods of removing this energy imbalance have been extensively reported within literature. However, there has been little discussion around the effect that such variation has when cells are connected electrically in parallel. This work aims to explore the impact of connecting cells, with varied properties, in parallel and the issues regarding energy imbalance and battery management that may arise. This has been achieved through analysing experimental data and a validated model. The main results from this study highlight that significant differences in current flow can occur between cells within a parallel stack that will affect how the cells age and the temperature distribution within the battery assembly.
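
    A simple way to see why parallel-connected cells with mismatched properties carry different currents is to model each cell as an open-circuit voltage behind an internal resistance and enforce a common terminal voltage. The Python sketch below solves that instantaneous current split; it is a deliberately simplified stand-in for the validated equivalent-circuit model used in the paper, and all numbers are hypothetical.

        import numpy as np

        def parallel_cell_currents(ocv, r_int, i_total):
            """Current carried by each parallel cell for a total pack current
            i_total, with every cell at the same terminal voltage."""
            ocv = np.asarray(ocv, dtype=float)
            g = 1.0 / np.asarray(r_int, dtype=float)          # conductances
            v_terminal = (np.sum(g * ocv) - i_total) / np.sum(g)
            return (ocv - v_terminal) * g                     # sums to i_total

        # Four nominally identical cells, one with higher resistance, 10 A discharge
        print(parallel_cell_currents([4.00, 4.00, 4.00, 3.98],
                                     [0.020, 0.020, 0.030, 0.020], 10.0))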

  4. INVITED TOPICAL REVIEW: Parallel magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Larkman, David J.; Nunes, Rita G.

    2007-04-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. We show how to recognize potential failure modes and their associated artefacts. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed.
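
    In image-domain methods such as SENSE, undersampling by a factor R folds R image locations onto each aliased pixel, and the coil sensitivities provide the extra equations needed to separate them: for each aliased pixel one solves S x = a, where S holds the coil sensitivities at the folded locations. The Python sketch below shows that per-pixel unfolding on toy numbers; the sensitivities and pixel values are invented for illustration.

        import numpy as np

        def sense_unfold(aliased, sensitivities):
            """Unfold one aliased pixel: solve sensitivities @ x = aliased in the
            least-squares sense, returning the R true pixel values."""
            x, *_ = np.linalg.lstsq(sensitivities, aliased, rcond=None)
            return x

        # Toy case: 4 coils, reduction factor R = 2
        S = np.array([[1.0, 0.2], [0.8, 0.4], [0.3, 0.9], [0.1, 1.0]], dtype=complex)
        true_pixels = np.array([2.0, 0.5], dtype=complex)
        aliased = S @ true_pixels                 # what the undersampled scan measures
        print(sense_unfold(aliased, S))           # recovers ~[2.0, 0.5]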

  5. Efficiency Analysis of the Parallel Implementation of the SIMPLE Algorithm on Multiprocessor Computers

    NASA Astrophysics Data System (ADS)

    Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.

    2017-12-01

    This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.

  6. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.

  7. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi

    2015-09-02

    This paper presents a nonlinear analytical model of a novel double sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM as compared to finite element solvers, which are numerically intensive and require more computation time. A single phase, 1 kW, 400 rpm machine is analytically modeled and its resulting flux distribution, no-load EMF and torque verified with Finite Element Analysis (FEA). The results are found to be in agreement with less than 5% error, while reducing the computation time by 25 times.
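
    The series-parallel combination of flux tubes in a magnetic equivalent circuit follows the same rules as resistors: reluctances add along a series flux path, and permeances (inverse reluctances) add for parallel paths. The Python sketch below illustrates those rules for a made-up path of an air gap in series with two parallel steel tubes; the dimensions and permeabilities are arbitrary examples, not the machine in the paper.

        import math

        MU0 = 4e-7 * math.pi

        def tube_reluctance(length, area, mu_r):
            """Reluctance of a uniform flux tube, R = l / (mu0 * mu_r * A)."""
            return length / (MU0 * mu_r * area)

        def series_reluctance(reluctances):
            # Series flux tubes share the flux, so reluctances add.
            return sum(reluctances)

        def parallel_reluctance(reluctances):
            # Parallel flux tubes share the MMF, so permeances (1/R) add.
            return 1.0 / sum(1.0 / r for r in reluctances)

        r_gap = tube_reluctance(1e-3, 4e-4, 1.0)        # 1 mm air gap, 4 cm^2
        r_tooth = tube_reluctance(30e-3, 2e-4, 2000.0)  # laminated steel tube
        r_total = series_reluctance([r_gap, parallel_reluctance([r_tooth, r_tooth])])
        print(r_total)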

  8. Parallels among the ``music scores'' of solar cycles, space weather and Earth's climate

    NASA Astrophysics Data System (ADS)

    Kolláth, Zoltán; Oláh, Katalin; van Driel-Gesztelyi, Lidia

    2012-07-01

    Solar variability and its effects on the physical variability of our (space) environment produces complex signals. In the indicators of solar activity at least four independent cyclic components can be identified, all of them with temporal variations in their timescales. Time-frequency distributions (see Kolláth & Oláh 2009) are perfect tools to disclose the ``music scores'' in these complex time series. Special features in the time-frequency distributions, like frequency splitting, or modulations on different timescales provide clues, which can reveal similar trends among different indices like sunspot numbers, interplanetary magnetic field strength in the Earth's neighborhood and climate data. On the pseudo-Wigner Distribution (PWD) the frequency splitting of all the three main components (the Gleissberg and Schwabe cycles, and an ~5.5 year signal originating from cycle asymmetry, i.e. the Waldmeier effect) can be identified as a ``bubble'' shaped structure after 1950. The same frequency splitting feature can also be found in the heliospheric magnetic field data and the microwave radio flux.

  9. Exploiting Symmetry on Parallel Architectures.

    NASA Astrophysics Data System (ADS)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  10. Scalable Track Detection in SAR CCD Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, James G; Quach, Tu-Thach

    Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
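
    The abstract describes a network built from a series of 3-by-3 convolutions trained end-to-end for per-pixel track labeling. The sketch below shows one plausible shape for such a stack in PyTorch; the depth, channel widths, and sigmoid output are assumptions for illustration, not the authors' published architecture.

    ```python
    # Sketch of a small fully convolutional pixel classifier built only from
    # 3x3 convolutions, in the spirit of the abstract; depth and widths are guesses.
    import torch
    import torch.nn as nn

    class TrackNet(nn.Module):
        def __init__(self, channels=(1, 16, 16, 16, 1)):
            super().__init__()
            layers = []
            for c_in, c_out in zip(channels[:-1], channels[1:]):
                layers.append(nn.Conv2d(c_in, c_out, kernel_size=3, padding=1))
                layers.append(nn.ReLU(inplace=True))
            layers.pop()                         # no ReLU after the final 1-channel map
            self.body = nn.Sequential(*layers)

        def forward(self, x):
            return torch.sigmoid(self.body(x))   # per-pixel track probability

    if __name__ == "__main__":
        net = TrackNet()
        ccd_tile = torch.randn(1, 1, 128, 128)   # fake single-channel image tile
        print(net(ccd_tile).shape)               # torch.Size([1, 1, 128, 128])
    ```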

  11. Highly Efficient Cooperative Catalysis by CoIII(Porphyrin) Pairs in Interpenetrating Metal-Organic Frameworks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Zekai; Zhang, Zhi-Ming; Chen, Yu-Sheng

    2016-12-02

    A series of porous twofold interpenetrated In-CoIII(porphyrin) metal–organic frameworks (MOFs) were constructed by in situ metalation of porphyrin bridging ligands and used as efficient cooperative catalysts for the hydration of terminal alkynes. The twofold interpenetrating structure brings adjacent CoIII(porphyrins) in the two networks parallel to each other with a distance of about 8.8 Å, an ideal distance for the simultaneous activation of both substrates in alkyne hydration reactions. As a result, the In-CoIII(porphyrin) MOFs exhibit much higher (up to 38 times) catalytic activity than either homogeneous catalysts or MOF controls with isolated CoIII(porphyrin) centers, thus highlighting the potential application of MOFs in cooperative catalysis.

  12. OpenCL Implementation of NeuroIsing

    NASA Astrophysics Data System (ADS)

    Zapart, C. A.

    Recent advances in graphics card hardware combined with an introduction of the OpenCL standard promise to accelerate numerical simulations across diverse scientific disciplines. One such field benefiting from new hardware/software paradigms is econophysics. The paper describes an OpenCL implementation of a selected econophysics model: NeuroIsing, which has been designed to execute in parallel on a vendor-independent graphics card. Originally introduced in the paper [C. A. Zapart, ``Econophysics in Financial Time Series Prediction'', PhD thesis, Graduate University for Advanced Studies, Japan (2009)], at first it was implemented on a CELL processor running inside a SONY PS3 games console. The NeuroIsing framework can be applied to predicting and trading foreign exchange as well as stock market index futures.

  13. Method and system for gathering a library of response patterns for sensor arrays

    DOEpatents

    Zaromb, Solomon

    1992-01-01

    A method of gathering a library of response patterns for one or more sensor arrays used in the detection and identification of chemical components in a fluid includes the steps of feeding samples of fluid with time-spaced separation of known components to the sensor arrays arranged in parallel or series configurations. Modifying elements such as heating filaments of differing materials operated at differing temperatures are included in the configurations to duplicate operational modes designed into the portable detection systems with which the calibrated sensor arrays are to be used. The response patterns from the known components are collected into a library held in the memory of a microprocessor for comparison with the response patterns of unknown components.

  14. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics.

    PubMed

    Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2012-09-25

    Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from series computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which does not only lead to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
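
    Of the two approaches described above, the multiple-chain strategy is the simpler to illustrate. The sketch below runs a few independent random-walk Metropolis chains in separate processes for a toy one-parameter target; the target density, proposal width, and chain length are illustrative assumptions and unrelated to the genomic models in the paper.

    ```python
    # Toy illustration of the "multiple chains" style of parallel MCMC: several
    # independent random-walk Metropolis chains sample a standard normal target,
    # one chain per worker process.
    import math
    import random
    from multiprocessing import Pool

    def log_target(theta):
        return -0.5 * theta * theta          # unnormalised log density of N(0, 1)

    def run_chain(args):
        seed, n_iter = args
        rng = random.Random(seed)
        theta, samples = 0.0, []
        for _ in range(n_iter):
            proposal = theta + rng.gauss(0.0, 0.5)
            if math.log(rng.random()) < log_target(proposal) - log_target(theta):
                theta = proposal
            samples.append(theta)
        return samples

    if __name__ == "__main__":
        with Pool(4) as pool:
            chains = pool.map(run_chain, [(seed, 5000) for seed in range(4)])
        pooled = [x for chain in chains for x in chain[1000:]]   # drop burn-in
        mean = sum(pooled) / len(pooled)
        print(f"posterior mean estimate from 4 parallel chains: {mean:.3f}")
    ```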

  15. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics

    PubMed Central

    2012-01-01

    Background Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from series computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Results Parallel Monte Carlo Markov chain algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of the parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Conclusions Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which does not only lead to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs. PMID:23009363

  16. Infrared technology for satellite power conversion. [antenna arrays and bolometers

    NASA Technical Reports Server (NTRS)

    Campbell, D. P.; Gouker, M. A.; Gallagher, J. J.

    1984-01-01

    Successful fabrication of bismuth bolometers led to the observation of antenna action from array elements. Fabrication of the best antenna arrays was made easier by the findings that increased argon flow during dc sputtering produced more uniform bismuth films and that bonding to the antennas must be done with the substrate temperature below 100 C, since higher temperatures damaged the bolometers. During testing of the antennas, it was found that the use of a quasi-optical system provided a uniform radiation field. Groups of antennas were bonded in series and in parallel, with the parallel configuration showing the greater response.

  17. Suppressing Thermal Energy Drift in the LLNL Flash X-Ray Accelerator Using Linear Disk Resistor Stacks

    DTIC Science & Technology

    2011-06-01

    induction accelerator with an output of 18 MeV at a current of 3 kA. The electron beam is focused onto a tantalum target to produce X-rays. The ... capacitors in each bank, half of which are charged in parallel positively, and the other half are negatively charged in parallel. The charge voltage can ... be varied from ±30 kV to ±40 kV. The Marx capacitors are fired in series into the Blumleins with up to 400 kV, 2 µs output. (Figure 1: FXR pulsed power.)

  18. Advanced propulsion system concept for hybrid vehicles

    NASA Technical Reports Server (NTRS)

    Bhate, S.; Chen, H.; Dochat, G.

    1980-01-01

    A series hybrid system, utilizing a free-piston Stirling engine with a linear alternator, and a parallel hybrid system, incorporating a kinematic Stirling engine, are analyzed for various specified reference missions/vehicles ranging from a small two-passenger commuter vehicle to a van. Parametric studies for each configuration, detailed tradeoff studies to determine engine, battery and system definition, short-term energy storage evaluation, and detailed life-cycle cost studies were performed. Results indicate that selecting a parallel Stirling-engine/electric hybrid propulsion system can reduce petroleum consumption by 70 percent relative to present conventional vehicles.

  19. Telecommunication service markets through the year 2000 in relation to millimeter wave satellite systems

    NASA Technical Reports Server (NTRS)

    Stevenson, S. M.

    1979-01-01

    NASA is currently conducting a series of millimeter wave satellite system market studies to develop 30/20 GHz satellite system concepts that have commercial potential. Four contractual efforts were undertaken: two parallel and independent system studies and two parallel and independent market studies. The marketing efforts are focused on forecasting the total domestic demand for long haul telecommunications services for the 1980-2000 period. Work completed to date and reported in this paper includes projections of: the geographical distribution of traffic; traffic volume as a function of urban area size; and user identification and forecasted demand.

  20. Wave Number Selection for Incompressible Parallel Jet Flows Periodic in Space

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    1997-01-01

    The temporal instability of a spatially periodic parallel flow of an incompressible inviscid fluid for various jet velocity profiles is studied numerically using Floquet Analysis. The transition matrix at the end of a period is evaluated by direct numerical integration. For verification, a method based on approximating a continuous function by a series of step functions was used. Unstable solutions were found only over a limited range of wave numbers and have a band type structure. The results obtained are analogous to the behavior observed in systems exhibiting complexity at the edge of order and chaos.
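
    The core computation in such a Floquet analysis is the transition (monodromy) matrix obtained by integrating the linearised system over one period and inspecting its eigenvalues. The sketch below carries this out for the Mathieu equation as a generic stand-in; the equation and parameter values are placeholders, not the jet velocity profiles studied in the paper.

    ```python
    # Generic Floquet stability check: integrate the fundamental matrix of a linear
    # ODE with periodic coefficients over one period and inspect the eigenvalues of
    # the resulting transition (monodromy) matrix. The Mathieu equation
    # x'' + (a + 2 q cos 2t) x = 0 is used here as a stand-in example.
    import numpy as np
    from scipy.integrate import solve_ivp

    def mathieu_rhs(t, y, a, q):
        x, v = y
        return [v, -(a + 2.0 * q * np.cos(2.0 * t)) * x]

    def monodromy(a, q, period=np.pi):
        cols = []
        for y0 in ([1.0, 0.0], [0.0, 1.0]):       # integrate each unit initial condition
            sol = solve_ivp(mathieu_rhs, (0.0, period), y0, args=(a, q),
                            rtol=1e-10, atol=1e-12)
            cols.append(sol.y[:, -1])
        return np.column_stack(cols)

    a, q = 1.0, 0.2                                # placeholder parameters
    multipliers = np.linalg.eigvals(monodromy(a, q))
    print("Floquet multipliers:", multipliers)
    print("unstable" if np.any(np.abs(multipliers) > 1.0 + 1e-8) else "stable")
    ```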

  1. Space shuttle system program definition. Volume 2: Technical report

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Phase B Extension of the Space Shuttle System Program Definition study was redirected to apply primary effort to consideration of space shuttle systems utilizing either recoverable pressure fed liquids or expendable solid rocket motor boosters. Two orbiter configurations were to be considered, one with a 15x60 foot payload bay with a 65,000 lb due East up-payload capability and the other with a 14x45 foot payload bay with 45,000 lb of due East up-payload. Both were to use three SSME engines with 472,000 lb of vacuum thrust each. Parallel and series burn ascent modes were to be considered for the launch configurations of primary interest. A recoverable pump-fed booster is included in the study in a series burn configuration with the 15x60 orbiter. To explore the potential of the swing engine orbiter configuration in the pad abort case, it is included in the study matrix in two launch configurations, a series burn pressure fed BRB and a parallel burn SRM. The resulting matrix of configuration options is shown. The principal objectives of this study are to evaluate the cost and technical differences between the liquid and solid propellant booster systems and to assess the development and operational cost savings available with a smaller orbiter.

  2. Filtering versus parallel processing in RSVP tasks.

    PubMed

    Botella, J; Eriksen, C W

    1992-04-01

    An experiment of McLean, D. E. Broadbent, and M. H. P. Broadbent (1983) using rapid serial visual presentation (RSVP) was replicated. A series of letters in one of 5 colors was presented, and the subject was asked to identify the letter that appeared in a designated color. There were several innovations in our procedure, the most important of which was the use of a response menu. After each trial, the subject was presented with 7 candidate letters from which to choose his/her response. In three experimental conditions, the target, the letter following the target, and all letters other than the target were, respectively, eliminated from the menu. In other conditions, the stimulus list was manipulated by repeating items in the series, repeating the color of successive items, or even eliminating the target color. By means of these manipulations, we were able to determine more precisely the information that subjects had obtained from the presentation of the stimulus series. Although we replicated the results of McLean et al. (1983), the more extensive information that our procedure produced was incompatible with the serial filter model that McLean et al. had used to describe their data. Overall, our results were more compatible with a parallel-processing account. Furthermore, intrusion errors are apparently not only a perceptual phenomenon but a memory problem as well.

  3. A derivation and scalable implementation of the synchronous parallel kinetic Monte Carlo method for simulating long-time dynamics

    NASA Astrophysics Data System (ADS)

    Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2017-10-01

    Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.

  4. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

  5. Multi-MA reflex triode research.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swanekamp, Stephen Brian; Commisso, Robert J.; Weber, Bruce V.

    The Reflex Triode can efficiently produce and transmit medium energy (10-100 keV) x-rays. Perfect reflexing through thin converter can increase transmission of 10-100 keV x-rays. Gamble II experiment at 1 MV, 1 MA, 60 ns - maximum dose with 25 micron tantalum. Electron orbits depend on the foil thickness. Electron orbits from LSP used to calculate path length inside tantalum. A simple formula predicts the optimum foil thickness for reflexing converters. The I(V) characteristics of the diode can be understood using simple models. Critical current dominates high voltage triodes, bipolar current is more important at low voltage. Higher current (2.5 MA), lower voltage (250 kV) triodes are being tested on Saturn at Sandia. Small, precise, anode-cathode gaps enable low impedance operation. Sample Saturn results at 2.5 MA, 250 kV. Saturn dose rate could be about two times greater. Cylindrical triode may improve x-ray transmission. Cylindrical triode design will be tested at 1/2 scale on Gamble II. For higher current on Saturn, could use two cylindrical triodes in parallel. 3 triodes in parallel require positive polarity operation. 'Triodes in series' would improve matching low impedance triodes to generator. Conclusions of this presentation are: (1) Physics of reflex triodes from Gamble II experiments (1 MA, 1 MV) - (a) Converter thickness 1/20 of CSDA range optimizes x-ray dose; (b) Simple model based on electron orbits predicts optimum thickness from LSP/ITS calculations and experiment; (c) I(V) analysis: beam dynamics different between 1 MV and 250 kV; (2) Multi-MA triode experiments on Saturn (2.5 MA, 250 kV) - (a) Polarity inversion in vacuum, (b) No-convolute configuration, accurate gap settings, (c) About half of current produces useful x-rays, (d) Cylindrical triode one option to increase x-ray transmission; and (3) Potential to increase Saturn current toward 10 MA, maintaining voltage and outer diameter - (a) 2 (or 3) cylindrical triodes in parallel, (b) Triodes in series to improve matching, (c) These concepts will be tested first on Gamble II.

  6. A Short-Circuit Method for Networks.

    ERIC Educational Resources Information Center

    Ong, P. P.

    1983-01-01

    Describes a method of network analysis that allows avoidance of Kirchhoff's Laws (providing the network is symmetrical) by reduction to simple series/parallel resistances. The method can be extended to symmetrical alternating current, capacitance or inductance if corresponding theorems are used. Symmetric cubic network serves as an example. (JM)
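
    A minimal sketch of the final series/parallel reduction for the symmetric cubic network mentioned above: measured across a body diagonal of a cube of equal resistors, symmetry makes two triples of corner nodes equipotential, after which only series and parallel combinations remain. The edge resistance value below is illustrative.

    ```python
    # The classic symmetric cubic network: twelve equal resistors on the edges of a
    # cube, measured across a body diagonal. Symmetry makes two sets of three corner
    # nodes equipotential, so the network collapses into three series groups of
    # parallel resistors and Kirchhoff's laws are never needed explicitly.
    def parallel(values):
        return 1.0 / sum(1.0 / v for v in values)

    def series(values):
        return sum(values)

    edge = 1.0   # ohms per edge (illustrative value)
    r_diagonal = series([
        parallel([edge] * 3),   # three edges leaving the input corner
        parallel([edge] * 6),   # six edges between the two equipotential planes
        parallel([edge] * 3),   # three edges entering the output corner
    ])
    print(r_diagonal)           # 0.8333... = 5/6 ohm
    ```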

  7. Becoming and Disappearing: Between Art, Architecture and Research

    ERIC Educational Resources Information Center

    Beinart, Katy

    2014-01-01

    This paper examines some parallels and differences in pursuing practice-based research in art or architecture. Using a series of different headlines and examples, I examine the potential of working "between" art and architecture, which I argue could generate new, hybridised methodologies of practice through interrogating the…

  8. A Topological Model for Parallel Algorithm Design

    DTIC Science & Technology

    1991-09-01

    by Charles Babbage in 1842 [250]: "When a long series of identical computations is to be performed, ... the machine can ... give several results at ..." [250] P. Morrison and E. Morrison, Charles Babbage and His Calculating Engines. Dover, New York, 1961.

  9. Computerized Investigations of Battery Characteristics.

    ERIC Educational Resources Information Center

    Hinrichsen, P. F.

    2001-01-01

    Uses a computer interface to measure the terminal voltage versus current characteristic of a variety of batteries, their series and parallel combinations, and the variation with discharge. The concept of an internal resistance demonstrates that the current flowing through the battery determines the efficiency, and serves to introduce Thevenin's theorem.…
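
    One way such an exercise can extract the internal resistance is to fit the Thevenin model V = EMF - r*I to the measured terminal voltage at several load currents. The sketch below does this with simulated data; the EMF, resistance, and noise level are made-up values, not results from the article.

    ```python
    # Fit the Thevenin battery model V = EMF - r * I to terminal-voltage readings
    # taken at several load currents. The data points below are simulated.
    import numpy as np

    current = np.array([0.1, 0.5, 1.0, 1.5, 2.0])                             # A
    voltage = 9.0 - 0.6 * current + np.random.normal(0, 0.01, current.size)   # V

    slope, intercept = np.polyfit(current, voltage, 1)   # V ~ slope*I + intercept
    emf, r_internal = intercept, -slope
    print(f"EMF ~ {emf:.2f} V, internal resistance ~ {r_internal:.2f} ohm")
    ```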

  10. Apparatus for Teaching Physics.

    ERIC Educational Resources Information Center

    Gottlieb, Herbert H., Ed.

    1981-01-01

    Describes: (1) a seven-segment LED display successfully used as an "illuminated" object for introductory optics experiments and advantages for its use; (2) a series/parallel circuit demonstration especially useful in introductory courses for nonmajors; and (3) a method for igniting a sodium arc lamp with an incandescent lamp. (JN)

  11. Two-Stage Series-Resonant Inverter

    NASA Technical Reports Server (NTRS)

    Stuart, Thomas A.

    1994-01-01

    Two-stage inverter includes variable-frequency, voltage-regulating first stage and fixed-frequency second stage. Lightweight circuit provides regulated power and is invulnerable to output short circuits. Unlike parallel resonant designs, it does not require a large capacitor across the ac bus. Particularly suitable for use in ac-power-distribution system of aircraft.

  12. A Parallel Framework with Block Matrices of a Discrete Fourier Transform for Vector-Valued Discrete-Time Signals.

    PubMed

    Soto-Quiros, Pablo

    2015-01-01

    This paper presents a parallel implementation of a kind of discrete Fourier transform (DFT): the vector-valued DFT. The vector-valued DFT is a novel tool to analyze the spectra of vector-valued discrete-time signals. This parallel implementation is developed in terms of a mathematical framework with a set of block matrix operations. These block matrix operations contribute to analysis, design, and implementation of parallel algorithms in multicore processors. In this work, an implementation and experimental investigation of the mathematical framework are performed using MATLAB with the Parallel Computing Toolbox. We found that there is an advantage to using multicore processors and a parallel computing environment to reduce the high execution time. Additionally, speedup increases as the number of logical processors and the length of the signal increase.
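
    As a loose analogue of evaluating a vector-valued transform channel-by-channel in parallel (not the block-matrix framework of the paper, and in Python rather than MATLAB), the sketch below transforms each component channel of a vector-valued signal in its own worker process and checks the result against a serial FFT.

    ```python
    # Loose analogue of parallel evaluation of a vector-valued DFT: each component
    # channel of a vector-valued discrete-time signal is transformed in its own
    # worker process. This is not the block-matrix framework of the paper itself.
    import numpy as np
    from multiprocessing import Pool

    def channel_dft(channel):
        return np.fft.fft(channel)

    if __name__ == "__main__":
        n_channels, n_samples = 8, 4096
        signal = np.random.randn(n_channels, n_samples)    # vector-valued signal

        with Pool() as pool:
            spectra = pool.map(channel_dft, list(signal))  # one channel per task

        spectra = np.vstack(spectra)
        # Sanity check against the serial computation.
        print(np.allclose(spectra, np.fft.fft(signal, axis=1)))
    ```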

  13. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
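
    A minimal sketch of the asynchronous idea: each particle is updated as soon as its own objective evaluation returns, rather than after the whole swarm has been evaluated. The objective function, swarm size, and PSO constants below are assumptions for illustration, and evaluations are still submitted in rounds, so this is a simplification of a fully asynchronous scheme rather than the paper's algorithm.

    ```python
    # Sketch of asynchronous particle evaluation with a toy sphere objective.
    import random
    from concurrent.futures import ProcessPoolExecutor, as_completed

    def objective(x):
        return sum(xi * xi for xi in x)          # toy objective (minimise)

    def pso_async(dim=5, n_particles=8, iters=20, w=0.7, c1=1.5, c2=1.5):
        rng = random.Random(0)
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest, pbest_val = [p[:] for p in pos], [float("inf")] * n_particles
        gbest, gbest_val = pos[0][:], float("inf")

        with ProcessPoolExecutor() as pool:
            for _ in range(iters):
                futures = {pool.submit(objective, pos[i]): i for i in range(n_particles)}
                for fut in as_completed(futures):        # handle whichever finishes first
                    i, val = futures[fut], fut.result()
                    if val < pbest_val[i]:
                        pbest_val[i], pbest[i] = val, pos[i][:]
                    if val < gbest_val:
                        gbest_val, gbest = val, pos[i][:]
                    # update this particle immediately using the current global best
                    for d in range(dim):
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                     + c2 * rng.random() * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
        return gbest_val

    if __name__ == "__main__":
        print(f"best value found: {pso_async():.4f}")
    ```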

  14. Non-Cartesian Parallel Imaging Reconstruction

    PubMed Central

    Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole

    2014-01-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499

  15. Externally Calibrated Parallel Imaging for 3D Multispectral Imaging Near Metallic Implants Using Broadband Ultrashort Echo Time Imaging

    PubMed Central

    Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.

    2017-01-01

    Purpose To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613

  16. Real-time implementations of image segmentation algorithms on shared memory multicore architecture: a survey (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed

    2017-05-01

    Real-time processing is getting more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis. As a consequence, many different approaches for image segmentation have been proposed. The watershed transform is a well-known image segmentation tool. The watershed transform is a very data intensive task. To achieve acceleration and obtain real-time processing of watershed algorithms, parallel architectures and programming models for multicore computing have been developed. This paper focuses on the survey of the approaches for parallel implementation of sequential watershed algorithms on multicore general purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we give a comparison of various parallelizations of sequential watershed algorithms on shared-memory multicore architecture. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on the performance of the parallel implementations. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models. Thus, we compare OpenMP (an application programming interface for multi-processing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.

  17. Porting Gravitational Wave Signal Extraction to Parallel Virtual Machine (PVM)

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar; Thompson, David E.; Redmon, Jeffery

    2009-01-01

    Laser Interferometer Space Antenna (LISA) is a planned NASA-ESA mission to be launched around 2012. The Gravitational Wave detection is fundamentally the determination of frequency, source parameters, and waveform amplitude derived in a specific order from the interferometric time-series of the rotating LISA spacecrafts. The LISA Science Team has developed a Mock LISA Data Challenge intended to promote the testing of complicated nested search algorithms to detect the 100-1 millihertz frequency signals at amplitudes of 10E-21. However, it has become clear that, sequential search of the parameters is very time consuming and ultra-sensitive; hence, a new strategy has been developed. Parallelization of existing sequential search algorithms of Gravitational Wave signal identification consists of decomposing sequential search loops, beginning with outermost loops and working inward. In this process, the main challenge is to detect interdependencies among loops and partitioning the loops so as to preserve concurrency. Existing parallel programs are based upon either shared memory or distributed memory paradigms. In PVM, master and node programs are used to execute parallelization and process spawning. The PVM can handle process management and process addressing schemes using a virtual machine configuration. The task scheduling and the messaging and signaling can be implemented efficiently for the LISA Gravitational Wave search process using a master and 6 nodes. This approach is accomplished using a server that is available at NASA Ames Research Center, and has been dedicated to the LISA Data Challenge Competition. Historically, gravitational wave and source identification parameters have taken around 7 days in this dedicated single thread Linux based server. Using PVM approach, the parameter extraction problem can be reduced to within a day. The low frequency computation and a proxy signal-to-noise ratio are calculated in separate nodes that are controlled by the master using message and vector of data passing. The message passing among nodes follows a pattern of synchronous and asynchronous send-and-receive protocols. The communication model and the message buffers are allocated dynamically to address rapid search of gravitational wave source information in the Mock LISA data sets.

  18. Viriato: a Fourier-Hermite spectral code for strongly magnetised fluid-kinetic plasma dynamics

    NASA Astrophysics Data System (ADS)

    Loureiro, Nuno; Dorland, William; Fazendeiro, Luis; Kanekar, Anjor; Mallet, Alfred; Zocco, Alessandro

    2015-11-01

    We report on the algorithms and numerical methods used in Viriato, a novel fluid-kinetic code that solves two distinct sets of equations: (i) the Kinetic Reduced Electron Heating Model equations [Zocco & Schekochihin, 2011] and (ii) the kinetic reduced MHD (KRMHD) equations [Schekochihin et al., 2009]. Two main applications of these equations are magnetised (Alfvénic) plasma turbulence and magnetic reconnection. Viriato uses operator splitting to separate the dynamics parallel and perpendicular to the ambient magnetic field (assumed strong). Along the magnetic field, Viriato allows for either a second-order accurate MacCormack method or, for higher accuracy, a spectral-like scheme. Perpendicular to the field Viriato is pseudo-spectral, and the time integration is performed by means of an iterative predictor-corrector scheme. In addition, a distinctive feature of Viriato is its spectral representation of the parallel velocity-space dependence, achieved by means of a Hermite representation of the perturbed distribution function. A series of linear and nonlinear benchmarks and tests are presented, with focus on 3D decaying kinetic turbulence. Work partially supported by Fundação para a Ciência e Tecnologia via Grants UID/FIS/50010/2013 and IF/00530/2013.

  19. A model for cytoplasmic rheology consistent with magnetic twisting cytometry.

    PubMed

    Butler, J P; Kelly, S M

    1998-01-01

    Magnetic twisting cytometry is gaining wide applicability as a tool for the investigation of the rheological properties of cells and the mechanical properties of receptor-cytoskeletal interactions. Current technology involves the application and release of magnetically induced torques on small magnetic particles bound to or inside cells, with measurements of the resulting angular rotation of the particles. The properties of purely elastic or purely viscous materials can be determined by the angular strain and strain rate, respectively. However, the cytoskeleton and its linkage to cell surface receptors display elastic, viscous, and even plastic deformation, and the simultaneous characterization of these properties using only elastic or viscous models is internally inconsistent. Data interpretation is complicated by the fact that in current technology, the applied torques are not constant in time, but decrease as the particles rotate. This paper describes an internally consistent model consisting of a parallel viscoelastic element in series with a parallel viscoelastic element, and one approach to quantitative parameter evaluation. The unified model reproduces all essential features seen in data obtained from a wide variety of cell populations, and contains the pure elastic, viscoelastic, and viscous cases as subsets.

  20. Multiple-image encryption based on double random phase encoding and compressive sensing by using a measurement array preprocessed with orthogonal-basis matrices

    NASA Astrophysics Data System (ADS)

    Zhang, Luozhi; Zhou, Yuanyuan; Huo, Dongming; Li, Jinxi; Zhou, Xin

    2018-09-01

    A method is presented for multiple-image encryption that combines orthogonal encoding and compressive sensing based on double random phase encoding. The approach is demonstrated theoretically and implemented by using orthogonal-basis matrices to build a modified measurement array, which is projected onto the images. In this method, all the images can be compressed in parallel into a stochastic signal and diffused into stationary white noise. Each single image can then be separately recovered by applying the proper decryption key combination through block-wise reconstruction rather than reconstruction of the entire data set, which greatly reduces the data cost and decryption time and makes the method promising both for multi-user multiplexing and for encrypting and decrypting very large images. In addition, the security of the method is characterized in terms of the key bit-length, and the parallelism is investigated as well. Simulations and discussions also examine the decryption quality and the correlation coefficient for a series of sampling rates, occlusion attacks, and keys with various error rates.

  1. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    PubMed

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework named the "compute unified device architecture." A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced by a factor of 38.9 with a GTX 580 graphics card using the improved method.

  2. The characteristics and limitations of the MPS/MMS battery charging system

    NASA Technical Reports Server (NTRS)

    Ford, F. E.; Palandati, C. F.; Davis, J. F.; Tasevoli, C. M.

    1980-01-01

    A series of tests was conducted on two 12 ampere hour nickel cadmium batteries under a simulated cycle regime using the multiple voltage versus temperature levels designed into the modular power system (MPS). These tests included: battery recharge as a function of voltage control level; temperature imbalance between two parallel batteries; a shorted or partially shorted cell in one of the two parallel batteries; impedance imbalance of one of the parallel battery circuits; and disabling and enabling one of the batteries from the bus at various charge and discharge states. The results demonstrate that the eight commandable voltage versus temperature levels designed into the MPS provide a very flexible system that not only can accommodate a wide range of normal power system operation, but also provides a high degree of flexibility in responding to abnormal operating conditions.

  3. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    NASA Astrophysics Data System (ADS)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data is moderately randomly undersampled at the center k-space navigator locations, but highly undersampled in the outer k-space for each temporal frame. In reconstruction, the navigator data is reconstructed from undersampled data using structured low-rank matrix completion. After all the unacquired navigator data is estimated, the partially separable model is used to obtain partial k-t data. Then the parallel imaging method is used to recover the entire dynamic image series from highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, in regimes where the conventional PS method fails.
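
    As a generic stand-in for the structured completion step described above (not the paper's k-t formulation), the sketch below completes a synthetic low-rank matrix from randomly observed entries by alternating a rank projection with re-imposition of the measured entries.

    ```python
    # Generic low-rank matrix completion by alternating projections: project onto
    # rank-r matrices via a truncated SVD, then re-impose the observed entries.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, r = 60, 80, 3
    truth = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    mask = rng.random((m, n)) < 0.4              # observe ~40% of the entries

    def complete(observed, mask, rank, n_iter=200):
        x = observed.copy()
        for _ in range(n_iter):
            u, s, vt = np.linalg.svd(x, full_matrices=False)
            x = (u[:, :rank] * s[:rank]) @ vt[:rank]   # project onto rank-r matrices
            x[mask] = observed[mask]                   # keep the measured entries
        return x

    estimate = complete(truth * mask, mask, r)
    err = np.linalg.norm(estimate - truth) / np.linalg.norm(truth)
    print(f"relative reconstruction error: {err:.3f}")
    ```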

  4. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of solving the two-dimensional time fractional diffusion equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the parallel algorithm agrees well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed memory cluster system. We expect that parallel computing technology will become a basic method for computationally intensive fractional applications in the near future.

  5. Single phase four pole/six pole motor

    DOEpatents

    Kirschbaum, Herbert S.

    1984-01-01

    A single phase alternating current electric motor is provided with a main stator winding having two coil groups each including the series connection of three coils. These coil groups can be connected in series for six pole operation and in parallel for four pole operation. The coils are approximately equally spaced around the periphery of the machine but are not of equal numbers of turns. The two coil groups are identically wound and spaced 180 mechanical degrees apart. One coil of each group has more turns and a greater span than the other two coils.

  6. Optimum allocation of redundancy among subsystems connected in series. Ph.D. Thesis - Case Western Reserve Univ., Sep. 1970

    NASA Technical Reports Server (NTRS)

    Bien, D. D.

    1973-01-01

    This analysis considers the optimum allocation of redundancy in a system of serially connected subsystems in which each subsystem is of the k-out-of-n type. Redundancy is optimally allocated when: (1) reliability is maximized for given costs; or (2) costs are minimized for given reliability. Several techniques are presented for achieving optimum allocation and their relative merits are discussed. Approximate solutions in closed form were attainable only for the special case of series-parallel systems and the efficacy of these approximations is discussed.
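
    For the series-parallel style of system discussed above, reliability has a simple closed form: a k-out-of-n subsystem survives with a binomial tail probability, and a series connection of independent subsystems survives with the product of those probabilities. The sketch below evaluates this for an illustrative configuration; the component reliabilities and (k, n) values are made up, not taken from the thesis.

    ```python
    # Reliability of a series connection of k-out-of-n subsystems. Each subsystem
    # works if at least k of its n identical, independent components work; the whole
    # system works only if every subsystem works.
    from functools import reduce
    from math import comb
    from operator import mul

    def k_out_of_n(p, k, n):
        """P(at least k of n independent components, each with reliability p, survive)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Three subsystems in series: (k, n, component reliability) for each.
    subsystems = [(2, 3, 0.90), (1, 2, 0.85), (3, 4, 0.95)]
    r_system = reduce(mul, (k_out_of_n(p, k, n) for k, n, p in subsystems), 1.0)
    print(f"system reliability = {r_system:.4f}")
    ```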

  7. VisIO: enabling interactive visualization of ultra-scale, time-series data via high-bandwidth distributed I/O systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Christopher J; Ahrens, James P; Wang, Jun

    2010-10-15

    Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network often are a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel-enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for compatibility with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to provide linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing using VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. Also tested was a dataset representing a global ocean salinity simulation that showed a 51.4% improvement in read performance over Lustre when using our VisIO system. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.

  8. Patterns of hydrogen bonding involving thiourea in the series of thioureaṡtrans-1,2-bispyridyl ethylene cocrystals - A comparative study

    NASA Astrophysics Data System (ADS)

    Kole, Goutam Kumar; Kumar, Mukesh

    2018-07-01

    Thiourea is known to act as a template to preorganise a series of trans-1,2-bispyridyl ethylenes (bpe), where the thiourea molecules are present in an infinite zigzag chain with the R22(8) graph set (the β-tape), which offers three different types of hydrogen bonding [J. Am. Chem. Soc. 132 (2010) 13434]. This article reports a new cocrystal of thiourea with 3,4′-bpe, which acts as a 'missing link' in the series. In this cocrystal, thiourea is present in an infinite corrugated chain with the R21(6) graph set, a rarely observed thiourea synthon, i.e., the α-tape. A comparative study is presented that demonstrates the various types of hydrogen bonding that exist in the series and their impact on the parallel stacking of the pyridyl-based olefins.

  9. Putting corannulene in its place. Reactivity studies comparing corannulene with other aromatic hydrocarbons.

    PubMed

    George, Stephen R D; Frith, Thomas D H; Thomas, Donald S; Harper, Jason B

    2015-09-14

    A series of aromatic hydrocarbons were investigated so as to compare the reactivity of corannulene with planar aromatic hydrocarbons. Corannulene was found to be more reactive than benzene, naphthalene and triphenylene to Friedel-Crafts acylation whilst electrophilic aromatic bromination was also used to confirm that triphenylene was less reactive than corannulene and that pyrene, perylene and acenaphthene were more so. The stabilisation of a neighbouring carbocation by the various aromatic systems was investigated through consideration of the rates of methanolysis of a series of benzylic alcohols. The reactivity series was found to parallel that observed for the electrophilic aromatic substitutions and both series are supported by computational studies. As such, a reactivity scale was devised that showed that corannulene was less reactive than would be expected for an aromatic planar species of similar pi electron count.

  10. MULTI-SPACECRAFT OBSERVATIONS AND TRANSPORT MODELING OF ENERGETIC ELECTRONS FOR A SERIES OF SOLAR PARTICLE EVENTS IN AUGUST 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dröge, W.; Kartavykh, Y. Y.; Dresing, N.

    During 2010 August a series of solar particle events was observed by the two STEREO spacecraft as well as near-Earth spacecraft. The events, occurring on August 7, 14, and 18, originated from active regions 11093 and 11099. We combine in situ and remote-sensing observations with predictions from our model of three-dimensional anisotropic particle propagation in order to investigate the physical processes that caused the large angular spreads of energetic electrons during these events. In particular, we address the effects of the lateral transport of the electrons in the solar corona that is due to diffusion perpendicular to the average magnetic field in the interplanetary medium. We also study the influence of two coronal mass ejections and associated shock waves on the electron propagation, and a possible time variation of the transport conditions during the above period. For the August 18 event we also utilize electron observations from the MESSENGER spacecraft at a distance of 0.31 au from the Sun for an attempt to separate between radial and longitudinal dependencies in the transport process. Our modelings show that the parallel and perpendicular diffusion mean free paths of electrons can vary significantly not only as a function of the radial distance, but also of the heliospheric longitude. Normalized to a distance of 1 au, we derive values of λ ∥ in the range of 0.15–0.6 au, and values of λ ⊥ in the range of 0.005–0.01 au. We discuss how our results relate to various theoretical models for perpendicular diffusion, and whether there might be a functional relationship between the perpendicular and the parallel mean free path.

  11. Multi-spacecraft Observations and Transport Modeling of Energetic Electrons for a Series of Solar Particle Events in August 2010

    NASA Astrophysics Data System (ADS)

    Dröge, W.; Kartavykh, Y. Y.; Dresing, N.; Klassen, A.

    2016-08-01

    During 2010 August a series of solar particle events was observed by the two STEREO spacecraft as well as near-Earth spacecraft. The events, occurring on August 7, 14, and 18, originated from active regions 11093 and 11099. We combine in situ and remote-sensing observations with predictions from our model of three-dimensional anisotropic particle propagation in order to investigate the physical processes that caused the large angular spreads of energetic electrons during these events. In particular, we address the effects of the lateral transport of the electrons in the solar corona that is due to diffusion perpendicular to the average magnetic field in the interplanetary medium. We also study the influence of two coronal mass ejections and associated shock waves on the electron propagation, and a possible time variation of the transport conditions during the above period. For the August 18 event we also utilize electron observations from the MESSENGER spacecraft at a distance of 0.31 au from the Sun for an attempt to separate between radial and longitudinal dependencies in the transport process. Our modelings show that the parallel and perpendicular diffusion mean free paths of electrons can vary significantly not only as a function of the radial distance, but also of the heliospheric longitude. Normalized to a distance of 1 au, we derive values of λ ∥ in the range of 0.15-0.6 au, and values of λ ⊥ in the range of 0.005-0.01 au. We discuss how our results relate to various theoretical models for perpendicular diffusion, and whether there might be a functional relationship between the perpendicular and the parallel mean free path.

  12. CCD sensors in synchrotron X-ray detectors

    NASA Astrophysics Data System (ADS)

    Strauss, M. G.; Naday, I.; Sherman, I. S.; Kraimer, M. R.; Westbrook, E. M.; Zaluzec, N. J.

    1988-04-01

    The intense photon flux from advanced synchrotron light sources, such as the 7-GeV synchrotron being designed at Argonne, requires integrating-type detectors. Charge-coupled devices (CCDs) are well suited as synchrotron X-ray detectors. When irradiated indirectly via a phosphor followed by reducing optics, diffraction patterns of 100 cm^2 can be imaged on a 2 cm^2 CCD. With a conversion efficiency of ~1 CCD electron/X-ray photon, a peak saturation capacity of > 10^6 X-rays can be obtained. A programmable CCD controller operating at a clock frequency of 20 MHz has been developed. The readout rate is 5 × 10^6 pixels/s and the shift rate in the parallel registers is 10^6 lines/s. The test detector was evaluated in two experiments. In protein crystallography, diffraction patterns have been obtained from a lysozyme crystal using a conventional rotating anode X-ray generator. Based on these results we expect to obtain diffraction images at a synchrotron at a rate of ~1 frame/s, or a complete 3-dimensional data set from a single crystal in ~2 min. In electron energy-loss spectroscopy (EELS), the CCD was used in a parallel detection mode which is similar to the way array detectors are used in dispersive EXAFS. With a beam current corresponding to 3 × 10^9 electrons/s on the detector, a series of 64 spectra were recorded on the CCD in a continuous sequence without interruption due to readout. The frame-to-frame pixel signal fluctuations had σ = 0.4%, from which DQE = 0.4 was obtained, where the detector conversion efficiency was 2.6 CCD electrons/X-ray photon.

  13. Why good projects fail anyway.

    PubMed

    Matta, Nadim F; Ashkenas, Ronald N

    2003-09-01

    Big projects fail at an astonishing rate--more than half the time, by some estimates. It's not hard to understand why. Complicated long-term projects are customarily developed by a series of teams working along parallel tracks. If managers fail to anticipate everything that might fall through the cracks, those tracks will not converge successfully at the end to reach the goal. Take a companywide CRM project. Traditionally, one team might analyze customers, another select the software, a third develop training programs, and so forth. When the project's finally complete, though, it may turn out that the salespeople won't enter in the requisite data because they don't understand why they need to. This very problem has, in fact, derailed many CRM programs at major organizations. There is a way to uncover unanticipated problems while the project is still in development. The key is to inject into the overall plan a series of miniprojects, or "rapid-results initiatives," which each have as their goal a miniature version of the overall goal. In the CRM project, a single team might be charged with increasing the revenues of one sales group in one region by 25% within four months. To reach that goal, team members would have to draw on the work of all the parallel teams. But in just four months, they would discover the salespeople's resistance and probably other unforeseen issues, such as, perhaps, the need to divvy up commissions for joint-selling efforts. The World Bank has used rapid-results initiatives to great effect to keep a sweeping 16-year project on track and deliver visible results years ahead of schedule. In taking an in-depth look at this project, and others, the authors show why this approach is so effective and how the initiatives are managed in conjunction with more traditional project activities.

  14. Lung assist devices influence cardio-energetic parameters: Numerical simulation study.

    PubMed

    De Lazzari, C; Quatember, B; Recheis, W; Mayr, M; Demertzis, S; Allasia, G; De Rossi, A; Cavoretto, R; Venturino, E; Genuini, I

    2015-08-01

    We aim to analyse the effects that mechanical ventilators (MVs) and thoracic artificial lungs (TALs) have on the cardiovascular system, especially on important quantities such as left and right ventricular external work (EW), pressure-volume area (PVA) and cardiac mechanical efficiency (CME). Our analyses are based on simulation studies which were carried out by using our CARDIOSIM(©) software simulator. At first, we carried out simulation studies of patients undergoing mechanical ventilation (MV) without a thoracic artificial lung (TAL). Subsequently, we conducted simulation studies of patients who had been provided with a TAL, but did not undergo MV. We aimed at describing the patient's physiological characteristics and their variations with time, such as EW, PVA, CME, cardiac output (CO) and mean pulmonary arterial/venous pressure (PAP/PVP). We started with a simulation run under well-defined initial conditions, followed by simulation runs for a wide range of mean intrathoracic pressure settings. Our simulations of MV without TAL showed that for mean intrathoracic pressure settings from negative (-4 mmHg) to positive (+5 mmHg) values, the left and right ventricular EW and PVA, right ventricular CME and CO decreased, whereas left ventricular CME and the PAP increased. The simulation studies of patients with a TAL comprised all the usual TAL arrangements, viz. configurations "in series" and in parallel with the natural lung as well as hybrid configurations. The main objective of the simulation studies was, as before, the assessment of the hemodynamic response to the application of a TAL. We could, for instance, show that in the case of an "in series" configuration a reduction (an increase) in left (right) ventricular EW and PVA values occurred, whereas the best performance in terms of CO can be achieved with an in-parallel configuration.

  15. A PC parallel port button box provides millisecond response time accuracy under Linux.

    PubMed

    Stewart, Neil

    2006-02-01

    For psychologists, it is sometimes necessary to measure people's reaction times to the nearest millisecond. This article describes how to use the PC parallel port to receive signals from a button box to achieve millisecond response time accuracy. The workings of the parallel port, the corresponding port addresses, and a simple Linux program for controlling the port are described. A test of the speed and reliability of button box signal detection is reported. If the reader is moderately familiar with Linux, this article should provide sufficient instruction for him or her to build and test his or her own parallel port button box. This article also describes how the parallel port could be used to control an external apparatus.
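    The article's own Linux program for the port is not reproduced in this record. As a rough stand-alone illustration of the idea, the sketch below polls the parallel-port status register from user space. The LPT1 base address 0x378 (status register at 0x379) is the conventional default but is hardware-dependent, the /dev/port route requires root, and the pin-to-bit mapping of any particular button box is hypothetical; a busy-polling loop like this also cannot match the interrupt-driven timing accuracy discussed in the article.

      import time

      LPT1_BASE = 0x378           # conventional base address of the first parallel port
      STATUS_REG = LPT1_BASE + 1  # status register; bits 3-7 reflect the input pins

      def read_status(port_file):
          """Read one byte from the parallel-port status register via /dev/port."""
          port_file.seek(STATUS_REG)
          return port_file.read(1)[0]

      def poll_button(timeout_s=10.0, poll_interval_s=0.0005):
          """Busy-poll the status register until any input bit changes; return elapsed seconds."""
          with open("/dev/port", "rb", buffering=0) as port:   # needs root privileges
              baseline = read_status(port)
              t0 = time.perf_counter()
              while time.perf_counter() - t0 < timeout_s:
                  if read_status(port) != baseline:            # a button pulled a status pin
                      return time.perf_counter() - t0
                  time.sleep(poll_interval_s)
          return None

      if __name__ == "__main__":
          rt = poll_button()
          print("no press detected" if rt is None else f"response after {rt * 1000:.1f} ms")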

  16. Spring Break: A Lesson in Circuits. "This Old House" College Style.

    ERIC Educational Resources Information Center

    Duch, Barbara

    2001-01-01

    Introduces students to the topics of electricity and circuits within the context of house wiring. Explores the properties of series and parallel circuits, researches local wiring codes, calculates the current used by appliances based on their power ratings, and designs circuits in a typical kitchen. (Author/ASK)
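    As a small worked version of the "current from power ratings" exercise mentioned above, the sketch below applies I = P / V to a hypothetical kitchen circuit; the appliance list and the 120 V / 20 A figures are illustrative assumptions, not taken from the lesson.

      SUPPLY_VOLTAGE_V = 120.0   # assumed North American branch-circuit voltage
      BREAKER_RATING_A = 20.0    # assumed breaker size for the kitchen circuit

      # Hypothetical appliance power ratings in watts.
      appliances = {"toaster": 900, "coffee maker": 1100, "microwave": 1200}

      total_current_a = 0.0
      for name, power_w in appliances.items():
          current_a = power_w / SUPPLY_VOLTAGE_V      # I = P / V for a resistive load
          total_current_a += current_a
          print(f"{name:>12}: {current_a:.1f} A")

      # Appliances on one branch circuit are wired in parallel, so their currents add.
      status = "OK" if total_current_a <= BREAKER_RATING_A else "overloaded"
      print(f"total on circuit: {total_current_a:.1f} A ({status})")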

  17. Lumped transmission line avalanche pulser

    DOEpatents

    Booth, R.

    1995-07-18

    A lumped linear avalanche transistor pulse generator utilizes stacked transistors in parallel within a stage and couples a plurality of said stages, in series with increasing zener diode limited voltages per stage and decreasing balanced capacitance load per stage to yield a high voltage, high and constant current, very short pulse. 8 figs.

  18. LOCAL AND GLOBAL DYNAMICS OF POLYLACTIDES. (R826733)

    EPA Science Inventory

    Polylactides (PLAs) are a family of degradable plastics having a component of the dipole moment both perpendicular and parallel to the polymer backbone (i.e., PLA is a type-A polymer). We have studied the sub-glass, segmental and global chain dynamics in a series of fully amorphous...

  19. Usable Electricity from the Sun.

    ERIC Educational Resources Information Center

    Energy Research and Development Administration, Washington, DC. Div. of Solar Energy.

    This brochure gives an overview of solar photovoltaic energy production. Some of the topics discussed are: (1) solar cell construction; (2) parallel and series cell arrays; (3) effects of location on solar cell array performance; (4) solar economics; (5) space applications of solar photovoltaic power; and (6) terrestrial applications of solar…

  20. Coaching in the AP Classroom

    ERIC Educational Resources Information Center

    Fornaciari, Jim

    2013-01-01

    Many parallels exist between quality coaches and quality classroom teachers--especially AP teachers, who often feel the pressure to produce positive test results. Having developed a series of techniques and strategies for building a team-oriented winning culture on the field, Jim Fornaciari writes about how he adapted those methods to work in the…

  1. Application of an Elastic-Plastic Methodology to Structural Integrity Evaluation,

    DTIC Science & Technology

    The elastic-plastic fracture mechanics (EPFM) technology has advanced to the point where it can be used to make a realistic assessment of the...concepts of EPFM into a structural stability evaluation. The structure is modeled as a cracked test specimen either in series or parallel with a spring

  2. Conductor and Ensemble Performance Expressivity and State Festival Ratings

    ERIC Educational Resources Information Center

    Price, Harry E.; Chang, E. Christina

    2005-01-01

    This study is the second in a series examining the relationship between conducting and ensemble performance. The purpose was to further examine the associations among conductor, ensemble performance expressivity, and festival ratings. Participants were asked to rate the expressivity of video-only conducting and parallel audio-only excerpts from a…

  3. Practical Application of Fundamental Concepts in Exercise Physiology

    ERIC Educational Resources Information Center

    Ramsbottom, R.; Kinch, R. F. T.; Morris, M. G.; Dennis, A. M.

    2007-01-01

    The collection of primary data in laboratory classes enhances undergraduate practical and critical thinking skills. The present article describes the use of a lecture program, running in parallel with a series of linked practical classes, that emphasizes classical or standard concepts in exercise physiology. The academic and practical program ran…

  4. Understanding the Behaviour of Infinite Ladder Circuits

    ERIC Educational Resources Information Center

    Ucak, C.; Yegin, K.

    2008-01-01

    Infinite ladder circuits are often encountered in undergraduate electrical engineering and physics curricula when dealing with series and parallel combinations of impedances, as a part of filter design or wave propagation on transmission lines. The input impedance of such infinite ladder circuits is derived by assuming that the input impedance does…
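    The derivation the abstract refers to is cut off in this record; the sketch below gives the usual self-similarity argument, with Z_s a series impedance and Z_p a shunt impedance of one ladder section (notation chosen here, not taken from the paper).

      % Removing one L-section from an infinite ladder leaves an identical infinite
      % ladder, so the input impedance must satisfy
      \[
        Z_{\mathrm{in}} \;=\; Z_s + \frac{Z_p\,Z_{\mathrm{in}}}{Z_p + Z_{\mathrm{in}}}
        \quad\Longrightarrow\quad
        Z_{\mathrm{in}}^{2} - Z_s\,Z_{\mathrm{in}} - Z_s\,Z_p = 0,
      \]
      % whose positive-real (physically meaningful) root is
      \[
        Z_{\mathrm{in}} \;=\; \frac{Z_s}{2} + \sqrt{\frac{Z_s^{2}}{4} + Z_s\,Z_p}\,.
      \]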

  5. Lumped transmission line avalanche pulser

    DOEpatents

    Booth, Rex

    1995-01-01

    A lumped linear avalanche transistor pulse generator utilizes stacked transistors in parallel within a stage and couples a plurality of said stages, in series with increasing zener diode limited voltages per stage and decreasing balanced capacitance load per stage to yield a high voltage, high and constant current, very short pulse.

  6. Parallel Processing of Broad-Band PPM Signals

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively low-speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
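    As a rough illustration of the sub-sampling step described above (only the demultiplexing, not the flight architecture or its time-varying filter banks), the sketch below splits a high-rate sample stream into M parallel lower-rate streams and checks that interleaving them recovers the original; M plays the role of the rate-reduction parameter mentioned in the abstract.

      import numpy as np

      def split_into_parallel_streams(samples: np.ndarray, m: int) -> np.ndarray:
          """Demultiplex a 1-D sample stream into m parallel streams by sub-sampling.

          Stream k holds samples k, k+m, k+2m, ...; each stream runs at 1/m of the
          input rate, so it can be handled by slower (e.g., FPGA/CMOS) logic.
          """
          n = (len(samples) // m) * m           # drop any partial tail for simplicity
          return samples[:n].reshape(-1, m).T   # shape (m, n // m)

      def recombine(streams: np.ndarray) -> np.ndarray:
          """Re-interleave the parallel streams back into a single high-rate stream."""
          return streams.T.reshape(-1)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          x = rng.standard_normal(1024)                  # stand-in for broadband PPM samples
          streams = split_into_parallel_streams(x, m=8)  # 8 parallel narrower-band streams
          assert np.allclose(recombine(streams), x)
          print(streams.shape)                           # (8, 128)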

  7. Effects of hurricanes and climate oscillations on annual variation in reproduction in wet forest, Puerto Rico.

    PubMed

    Zimmerman, Jess K; Hogan, James Aaron; Nytch, Christopher J; Bithorn, John E

    2018-06-01

    Interannual changes in global climate and weather disturbances may influence reproduction in tropical forests. Phenomena such as the El Niño Southern Oscillation (ENSO) are known to produce interannual variation in reproduction, as do severe storms such as hurricanes. Using stationary trap-based phenology data collected fortnightly from 1993 to 2014 from a hurricane-affected (1989 Hugo, 1998 Georges) subtropical wet forest in northeastern Puerto Rico, we conducted a time series analysis of flowering and seed production. We addressed (1) the degree to which interannual variation in flower and seed production was influenced by global climate drivers and time since hurricane disturbance, and (2) how long-term trends in reproduction varied with plant lifeform. The seasonally de-trended number of species in flower fluctuated over time while the number of species producing seed exhibited a declining trend, one that was particularly evident during the second half of the study period. Lagged El Niño indices and time since hurricane disturbance jointly influenced the trends in numbers of flowering and fruiting species, suggesting complex global influences on tropical forest reproduction with variable periodicities. Lag times affecting flowering tended to be longer than those affecting fruiting. Long-term patterns of reproduction in individual lifeforms paralleled the community-wide patterns, with most lifeform groups exhibiting a long-term decline in seed but not flower production. Exceptions were found for hemiepiphytes, small trees, and lianas whose seed reproduction increased and then declined over time. There was no long-term increase in flower production as reported in other Neotropical sites. © 2018 by the Ecological Society of America.
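    A minimal sketch of the kind of lagged association examined above: correlating a de-trended reproduction series against a climate index at a range of lags. The series below are synthetic stand-ins with an artificial 6-month lag built in; the actual phenology and ENSO data are not reproduced in this record.

      import numpy as np

      def lagged_correlation(index: np.ndarray, response: np.ndarray, max_lag: int) -> dict:
          """Pearson correlation of response(t) with index(t - lag) for lag = 0..max_lag."""
          out = {}
          for lag in range(max_lag + 1):
              x = index[: len(index) - lag] if lag else index
              y = response[lag:]
              out[lag] = float(np.corrcoef(x, y)[0, 1])
          return out

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          months = 22 * 12                                    # roughly the 1993-2014 window
          enso = rng.standard_normal(months)                  # synthetic climate index
          # Synthetic "species in flower" series that tracks the index ~6 months later.
          flowering = np.roll(enso, 6) + 0.5 * rng.standard_normal(months)
          flowering -= flowering.mean()                       # crude stand-in for de-trending
          corr = lagged_correlation(enso, flowering, max_lag=12)
          best = max(corr, key=corr.get)
          print(f"strongest association at lag {best} months (r = {corr[best]:.2f})")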

  8. Research on control law accelerator of digital signal process chip TMS320F28035 for real-time data acquisition and processing

    NASA Astrophysics Data System (ADS)

    Zhao, Shuangle; Zhang, Xueyi; Sun, Shengli; Wang, Xudong

    2017-08-01

    The TI C2000 series of digital signal processing (DSP) chips is widely used in electrical engineering, measurement and control, communications and other professional fields, and the TMS320F28035 is one of its most representative devices. A DSP program must perform both data acquisition and data processing; if ordinary sequential C or assembly programming is used, the analogue-to-digital (AD) converter cannot acquire data in real time and many samples are lost. The control law accelerator (CLA) coprocessor can run in parallel with the main central processing unit (CPU), runs at the same clock frequency as the main CPU, and supports floating-point operations. Therefore, the CLA coprocessor is used in the program: the CLA kernel is responsible for data processing, while the main CPU is responsible for the AD conversion. The advantage of this method is that it reduces data-processing time and achieves real-time data acquisition.
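    The division of labour described above (the main CPU triggers AD conversions while the CLA coprocessor filters the samples in parallel) is specific to the C2000 device and its shared message RAM. Purely as a loose, hypothetical analogue of that acquire-in-one-unit / process-in-another pattern, the sketch below uses two threads and a queue; it is not C2000 code.

      import queue
      import random
      import threading
      import time

      samples = queue.Queue(maxsize=64)   # loose stand-in for the CPU-to-CLA message RAM

      def acquire(n_samples: int):
          """Producer: stands in for the main CPU triggering AD conversions."""
          for _ in range(n_samples):
              samples.put(random.random())   # pretend ADC reading
              time.sleep(0.001)              # pretend conversion interval
          samples.put(None)                  # sentinel: acquisition finished

      def process():
          """Consumer: stands in for the CLA processing samples as they arrive."""
          acc, count = 0.0, 0
          while (x := samples.get()) is not None:
              acc += x                       # trivial "processing": running sum
              count += 1
          print(f"processed {count} samples, mean = {acc / count:.3f}")

      if __name__ == "__main__":
          worker = threading.Thread(target=process)
          worker.start()
          acquire(200)
          worker.join()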

  9. Synchronization Of Parallel Discrete Event Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.

  10. Multigrid methods with space–time concurrency

    DOE PAGES

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...

    2017-10-06

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.
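    To make the three coarsening options concrete, the sketch below lists the grid hierarchies each one implies for a hypothetical nx-by-nt space–time grid; it only illustrates which dimensions shrink on each level, not the multigrid cycling or the performance models of the paper.

      def hierarchy(nx: int, nt: int, coarsen_space: bool, coarsen_time: bool, min_pts: int = 4):
          """List (nx, nt) per level for one space-time coarsening strategy."""
          levels = [(nx, nt)]
          while (not coarsen_space or nx > min_pts) and (not coarsen_time or nt > min_pts):
              if coarsen_space:
                  nx = (nx + 1) // 2
              if coarsen_time:
                  nt = (nt + 1) // 2
              levels.append((nx, nt))
          return levels

      if __name__ == "__main__":
          nx, nt = 128, 1024   # hypothetical space-time grid
          print("coarsening in space and time:", hierarchy(nx, nt, True, True))
          print("spatial semicoarsening      :", hierarchy(nx, nt, True, False))
          print("temporal semicoarsening     :", hierarchy(nx, nt, False, True))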

  11. Multigrid methods with space–time concurrency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  12. fastBMA: scalable network inference and transitive reduction.

    PubMed

    Hung, Ling-Hong; Shi, Kaiyuan; Wu, Migao; Young, William Chad; Raftery, Adrian E; Yeung, Ka Yee

    2017-10-01

    Inferring genetic networks from genome-wide expression data is extremely demanding computationally. We have developed fastBMA, a distributed, parallel, and scalable implementation of Bayesian model averaging (BMA) for this purpose. fastBMA also includes a computationally efficient module for eliminating redundant indirect edges in the network by mapping the transitive reduction to an easily solved shortest-path problem. We evaluated the performance of fastBMA on synthetic data and experimental genome-wide time series yeast and human datasets. When using a single CPU core, fastBMA is up to 100 times faster than the next fastest method, LASSO, with increased accuracy. It is a memory-efficient, parallel, and distributed application that scales to human genome-wide expression data. A 10 000-gene regulation network can be obtained in a matter of hours using a 32-core cloud cluster (2 nodes of 16 cores). fastBMA is a significant improvement over its predecessor ScanBMA. It is more accurate and orders of magnitude faster than other fast network inference methods such as the one based on LASSO. The improved scalability allows it to calculate networks from genome scale data in a reasonable time frame. The transitive reduction method can improve accuracy in denser networks. fastBMA is available as code (M.I.T. license) from GitHub (https://github.com/lhhunghimself/fastBMA), as part of the updated networkBMA Bioconductor package (https://www.bioconductor.org/packages/release/bioc/html/networkBMA.html) and as ready-to-deploy Docker images (https://hub.docker.com/r/biodepot/fastbma/). © The Authors 2017. Published by Oxford University Press.
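    A minimal sketch of the general idea of pruning redundant indirect edges with shortest paths, in the spirit of the mapping the abstract mentions; the graph, the edge weights (e.g., -log of an edge confidence) and the exact pruning rule are illustrative assumptions here, not fastBMA's implementation.

      from itertools import product

      def prune_indirect_edges(weights: dict) -> dict:
          """Drop a directed edge (u, v) if some indirect path u -> ... -> v is at least as short.

          `weights` maps directed edges (u, v) to non-negative lengths, e.g. -log(confidence),
          so that multiplying confidences along a path corresponds to adding lengths.
          """
          nodes = sorted({n for edge in weights for n in edge})
          INF = float("inf")
          dist = {(u, v): weights.get((u, v), INF) for u, v in product(nodes, nodes)}
          for n in nodes:
              dist[(n, n)] = 0.0
          # Floyd-Warshall all-pairs shortest paths (k is the outermost loop variable).
          for k, i, j in product(nodes, nodes, nodes):
              if dist[(i, k)] + dist[(k, j)] < dist[(i, j)]:
                  dist[(i, j)] = dist[(i, k)] + dist[(k, j)]
          kept = {}
          for (u, v), w in weights.items():
              # Shortest u -> v route through at least one intermediate node.
              via = min((dist[(u, k)] + dist[(k, v)] for k in nodes if k not in (u, v)),
                        default=INF)
              if via > w:              # no indirect path explains the edge: keep it
                  kept[(u, v)] = w
          return kept

      if __name__ == "__main__":
          g = {("A", "B"): 1.0, ("B", "C"): 1.0, ("A", "C"): 2.5}   # A->C explained by A->B->C
          print(prune_indirect_edges(g))   # {('A', 'B'): 1.0, ('B', 'C'): 1.0}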

  13. Analysis of Parallel Burn Without Crossfeed TSTO RLV Architectures and Comparison to Parallel Burn With Crossfeed and Series Burn Architectures

    NASA Technical Reports Server (NTRS)

    Smith, Garrett; Phillips, Alan

    2002-01-01

    There are currently three dominant TSTO class architectures. These are Series Burn (SB), Parallel Burn with crossfeed (PBw/cf), and Parallel Burn without crossfeed (PBncf). The goal of this study was to determine what factors uniquely affect PBncf architectures, how these factors interact, and to determine from a performance perspective whether a PBncf vehicle could be competitive with a PBw/cf or SB vehicle using equivalent technology and assumptions. In all cases, performance was evaluated on a relative basis for a fixed payload and mission by comparing gross and dry vehicle masses of a closed vehicle. Propellant combinations studied were LOX: LH2 propelled orbiter and booster (HH) and LOX: Kerosene booster with LOX: LH2 orbiter (KH). The study conclusions were: 1) a PBncf orbiter should be throttled as deeply as possible after launch until the staging point. 2) a detailed structural model is essential to accurate architecture analysis and evaluation. 3) a PBncf TSTO architecture is feasible for systems that stage at mach 7. 3a) HH architectures can achieve a mass growth relative to PBw/cf of < 20%. 3b) KH architectures can achieve a mass growth relative to Series Burn of < 20%. 4) center of gravity (CG) control will be a major issue for a PBncf vehicle, due to the low orbiter specific thrust to weight ratio and to the position of the orbiter required to align the nozzle heights at liftoff. 5) thrust to weight ratios of 1.3 at liftoff and between 1.0 and 0.9 when staging at mach 7 appear to be close to ideal for PBncf vehicles. 6) performance for all vehicles studied is better when staged at mach 7 instead of mach 5. The study showed that a Series Burn architecture has the lowest gross mass for HH cases, and has the lowest dry mass for KH cases. The potential disadvantages of SB are the required use of an air-start for the orbiter engines and potential CG control issues. A Parallel Burn with crossfeed architecture solves both these problems, but the mechanics of a large bipropellant crossfeed system pose significant technical difficulties. Parallel Burn without crossfeed vehicles start both booster and orbiter engines on the ground and thus avoid both the risk of orbiter air-start and the complexity of a crossfeed system. The drawback is that the orbiter must use 20% to 35% of its propellant before reaching the staging point. This induces a weight penalty in the orbiter in order to carry additional propellant, which causes a further weight penalty in the booster to achieve the same staging point. One way to reduce the orbiter propellant consumption during the first stage is to throttle down the orbiter engines as much as possible. Another possibility is to use smaller or fewer engines. Throttling the orbiter engines soon after liftoff minimizes CG control problems due to a low orbiter liftoff thrust, but may result in an unnecessarily high orbiter thrust after staging. Reducing the number or size of engines may cause CG control problems and drift at launch. The study suggested possible methods to maximize performance of PBncf vehicle architectures in order to meet mission design requirements.

  14. Mineral and organic matrix interaction in normally calcifying tendon visualized in three dimensions by high-voltage electron microscopic tomography and graphic image reconstruction

    NASA Technical Reports Server (NTRS)

    Landis, W. J.; Song, M. J.; Leith, A.; McEwen, L.; McEwen, B. F.

    1993-01-01

    To define the ultrastructural accommodation of mineral crystals by collagen fibrils and other organic matrix components during vertebrate calcification, electron microscopic 3-D reconstructions were generated from the normally mineralizing leg tendons from the domestic turkey, Meleagris gallopavo. Embedded specimens containing initial collagen mineralizing sites were cut into 0.5-micron-thick sections and viewed and photographed at 1.0 MV in the Albany AEI-EM7 high-voltage electron microscope. Tomographic 3-D reconstructions were computed from a 2 degree tilt series of micrographs taken over a minimum angular range of +/- 60 degrees. Reconstructions of longitudinal tendon profiles confirm the presence of irregularly shaped mineral platelets, whose crystallographic c-axes are oriented generally parallel to one another and directed along the collagen long axes. The reconstructions also corroborate observations of a variable crystal length (up to 170 nm measured along crystallographic c-axes), the presence of crystals initially in either the hole or overlap zones of collagen, and crystal growth in the c-axis direction beyond these zones into adjacent overlap and other hole regions. Tomography shows for the first time that crystal width varies (30-45 nm) but crystal thickness is uniform (approximately 4-6 nm at the resolution limit of tomography); more crystals are located in the collagen hole zones than in the overlap regions at the earliest stages of tendon mineralization; the crystallographic c-axes of the platelets lie within +/- 15-20 degrees of one another rather than being perfectly parallel; adjacent platelets are spatially separated by a minimum of 4.2 +/- 1.0 nm; crystals apparently fuse in coplanar alignment to form larger platelets; development of crystals in width occurs to dimensions beyond single collagen hole zones; and a thin envelope of organic origin may be present along or just beneath the surfaces of individual mineral platelets. Implicit in the results is that the formation of crystals occurs at different sites and times by independent nucleation events in local regions of collagen. These data provide the first direct visual evidence from 3-D imaging describing the size, shape, orientation, and growth of mineral crystals in association with collagen of a normally mineralizing vertebrate tissue. They support concepts that c-axial crystal growth is unhindered by collagen hole zone dimensions, that crystals are organized in the tendon in a series of generally parallel platelets, and that crystal growth in width across collagen fibrils may follow channels or grooves formed by adjacent hole zones in register.

  15. Parallel implementation of all-digital timing recovery for high-speed and real-time optical coherent receivers.

    PubMed

    Zhou, Xian; Chen, Xue

    2011-05-09

    The digital coherent receivers combine coherent detection with digital signal processing (DSP) to compensate for transmission impairments, and therefore are a promising candidate for future high-speed optical transmission systems. However, the maximum symbol rate supported by such real-time receivers is limited by the processing rate of the hardware. In order to cope with this difficulty, parallel processing algorithms are imperative. In this paper, we propose a novel parallel digital timing recovery loop (PDTRL) based on our previous work. Furthermore, to increase the dynamic dispersion tolerance range of receivers, we embed a parallel adaptive equalizer in the PDTRL. This parallel joint scheme (PJS) can be used to complete synchronization, equalization and polarization de-multiplexing simultaneously. Finally, we demonstrate that the PDTRL and PJS allow the hardware to process a 112 Gbit/s POLMUX-DQPSK signal at clock rates in the hundreds of MHz. © 2011 Optical Society of America

  16. Parallel computations and control of adaptive structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)

    1991-01-01

    The equations of motion for structures with adaptive elements for vibration control are presented for parallel computations to be used as a software package for real-time control of flexible space structures. A brief introduction of the state-of-the-art parallel computational capability is also presented. Time marching strategies are developed for an effective use of massive parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer, and the impact of the presented approach on applications in disciplines other than the aerospace industry is assessed.

  17. Extending molecular simulation time scales: Parallel in time integrations for high-level quantum chemistry and complex force representations

    NASA Astrophysics Data System (ADS)

    Bylaska, Eric J.; Weare, Jonathan Q.; Weare, John H.

    2013-08-01

    Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator, f (e.g., Verlet algorithm), is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_{i+1} (x_{i+1}) by x_{i+1} = f_i(x_i), the dynamics problem spanning an interval from t_0…t_M can be transformed into a root finding problem, F(X) = [x_i - f(x_{i-1})]_{i=1,…,M} = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsening time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup (serial execution time/parallel execution time) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a distributed computing environment using very slow transmission control protocol/Internet protocol networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H2O at the MP2/6-31G* level. Implemented in this way these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.
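    A toy illustration of the root-finding formulation above, not the quasi-Newton or preconditioned schemes of the paper: the whole trajectory is treated as one unknown X and relaxed with Jacobi-style sweeps x_i <- f(x_{i-1}), where every propagation inside a sweep is independent of the others and could therefore be distributed over time steps. The harmonic-oscillator force and the step settings are made up for the example.

      import numpy as np

      DT, M = 0.05, 200                      # step size and number of time steps (made up)

      def f(state: np.ndarray) -> np.ndarray:
          """One velocity-Verlet step of a unit-mass, unit-frequency harmonic oscillator."""
          r, v = state
          a = -r                              # F = -k r with k = m = 1
          r_new = r + DT * v + 0.5 * DT ** 2 * a
          v_new = v + 0.5 * DT * (a - r_new)  # average of old and new accelerations
          return np.array([r_new, v_new])

      def residual(X: np.ndarray, x0: np.ndarray) -> float:
          """Max norm of F(X) = [x_i - f(x_{i-1})], the root-finding residual."""
          prev = np.vstack([x0[None, :], X[:-1]])
          return float(np.max(np.abs(X - np.array([f(p) for p in prev]))))

      x0 = np.array([1.0, 0.0])               # initial position and velocity
      X = np.tile(x0, (M, 1))                 # crude initial guess for the whole trajectory

      for sweep in range(M):                  # each sweep is an independent map over all i
          prev = np.vstack([x0[None, :], X[:-1]])
          X = np.array([f(p) for p in prev])  # x_i (new) = f(x_{i-1} (old)) for every i at once
          if residual(X, x0) < 1e-12:
              print(f"converged to the serial trajectory after {sweep + 1} sweeps")
              break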

  18. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    PubMed

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  19. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  20. Multi-leg heat pipe evaporator

    NASA Technical Reports Server (NTRS)

    Alario, J. P.; Haslett, R. A. (Inventor)

    1986-01-01

    A multileg heat pipe evaporator facilitates the use and application of a monogroove heat pipe by providing an evaporation section which is compact in area and structurally more compatible with certain heat exchangers or heat input apparatus. The evaporation section of a monogroove heat pipe is formed by a series of parallel legs having a liquid and a vapor channel and a communicating capillary slot therebetween. The liquid and vapor channels and interconnecting capillary slots of the evaporating section are connected to the condensing section of the heat pipe by a manifold connecting liquid and vapor channels of the parallel evaporation section legs with the corresponding liquid and vapor channels of the condensing section.

  1. Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications

    PubMed Central

    2014-01-01

    Background The huge quantity of data produced in Biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard to solve parallelization and load balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed in an iterative way by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; such conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), that form a probability distribution, allowing the selection of the term literals. The great versatility that characterizes it makes U-BRAIN applicable in many of the fields in which there are data to be analyzed. However, the memory and the execution time required are of O(n^3) and O(n^5) order, respectively, and so the algorithm is unaffordable for huge data sets. Results We find mathematical and programming solutions able to lead us towards the implementation of the algorithm U-BRAIN on parallel computers. First we give a Dynamic Programming model of the U-BRAIN algorithm, then we minimize the representation of the relevances. When the data are of great size we are forced to use the mass memory, and depending on where the data are actually stored, the access times can be quite different. According to the evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the costs of the communications between different memories (RAM, Cache, Mass, Virtual) and to achieve efficient I/O performance, we design a mass storage structure able to access its data with a high degree of temporal and spatial locality. Then we develop a parallel implementation of the algorithm. We model it as an SPMD system together with a Message-Passing Programming Paradigm. Here, we adopt the high-level message-passing system MPI (Message Passing Interface) in its version for the Java programming language, MPJ. The parallel processing is organized into four stages: partitioning, communication, agglomeration and mapping. The decomposition of the U-BRAIN algorithm determines the necessity of a communication protocol design among the processors involved. Efficient synchronization design is also discussed. Conclusions In the context of a collaboration between public and private institutions, the parallel model of U-BRAIN has been implemented and tested on the INTEL XEON E7xxx and E5xxx family of the CRESCO structure of Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), developed in the framework of the European Grid Infrastructure (EGI), a series of efforts to provide access to high-throughput computing resources across Europe using grid computing techniques. The implementation is able to minimize both the memory space and the execution time.
The test data used in this study are IPDATA (Irvine Primate splice-junction DATA set), a subset of HS3D (Homo Sapiens Splice Sites Dataset) and a subset of COSMIC (the Catalogue of Somatic Mutations in Cancer). The execution time and the speed-up on IPDATA reach the best values within about 90 processors. Beyond that, the parallelization advantage is balanced by the greater cost of non-local communications between the processors. A similar behaviour is evident on HS3D, but at a greater number of processors, evidencing the direct relationship between data size and parallelization gain. This behaviour is confirmed on COSMIC. Overall, the results obtained show that the parallel version is up to 30 times faster than the serial one. PMID:25077818
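    As a rough, language-shifted illustration of the SPMD partition/communicate pattern described above (the implementation reported here uses MPJ, the Java message-passing bindings), the mpi4py sketch below partitions instances across ranks and reduces per-rank partial results; the instance count and the per-instance work are placeholders.

      # Run with, e.g.:  mpiexec -n 4 python spmd_sketch.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      N_INSTANCES = 1000                        # placeholder for the number of training instances

      # Partitioning: distribute instance indices cyclically across the processors.
      my_indices = range(rank, N_INSTANCES, size)

      # Placeholder per-instance work (stands in for computing per-attribute relevances).
      local_partial = float(sum(i % 7 for i in my_indices))

      # Communication/agglomeration: combine the per-rank partial results on every rank.
      total = comm.allreduce(local_partial, op=MPI.SUM)

      if rank == 0:
          print(f"{size} ranks, combined result = {total}")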

  2. Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications.

    PubMed

    D'Angelo, Gianni; Rampone, Salvatore

    2014-01-01

    The huge quantity of data produced in Biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard to solve parallelization and load balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed in an iterative way by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; such conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), that form a probability distribution, allowing the selection of the term literals. The great versatility that characterizes it makes U-BRAIN applicable in many of the fields in which there are data to be analyzed. However, the memory and the execution time required are of O(n^3) and O(n^5) order, respectively, and so the algorithm is unaffordable for huge data sets. We find mathematical and programming solutions able to lead us towards the implementation of the algorithm U-BRAIN on parallel computers. First we give a Dynamic Programming model of the U-BRAIN algorithm, then we minimize the representation of the relevances. When the data are of great size we are forced to use the mass memory, and depending on where the data are actually stored, the access times can be quite different. According to the evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the costs of the communications between different memories (RAM, Cache, Mass, Virtual) and to achieve efficient I/O performance, we design a mass storage structure able to access its data with a high degree of temporal and spatial locality. Then we develop a parallel implementation of the algorithm. We model it as an SPMD system together with a Message-Passing Programming Paradigm. Here, we adopt the high-level message-passing system MPI (Message Passing Interface) in its version for the Java programming language, MPJ. The parallel processing is organized into four stages: partitioning, communication, agglomeration and mapping. The decomposition of the U-BRAIN algorithm determines the necessity of a communication protocol design among the processors involved. Efficient synchronization design is also discussed. In the context of a collaboration between public and private institutions, the parallel model of U-BRAIN has been implemented and tested on the INTEL XEON E7xxx and E5xxx family of the CRESCO structure of Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), developed in the framework of the European Grid Infrastructure (EGI), a series of efforts to provide access to high-throughput computing resources across Europe using grid computing techniques. The implementation is able to minimize both the memory space and the execution time.
The test data used in this study are IPDATA (Irvine Primate splice-junction DATA set), a subset of HS3D (Homo Sapiens Splice Sites Dataset) and a subset of COSMIC (the Catalogue of Somatic Mutations in Cancer). The execution time and the speed-up on IPDATA reach the best values within about 90 processors. Beyond that, the parallelization advantage is balanced by the greater cost of non-local communications between the processors. A similar behaviour is evident on HS3D, but at a greater number of processors, evidencing the direct relationship between data size and parallelization gain. This behaviour is confirmed on COSMIC. Overall, the results obtained show that the parallel version is up to 30 times faster than the serial one.

  3. High performance computing for deformable image registration: towards a new paradigm in adaptive radiotherapy.

    PubMed

    Samant, Sanjiv S; Xia, Junyi; Muyan-Ozcelik, Pinar; Owens, John D

    2008-08-01

    Readily available temporal or time series volumetric (4D) imaging has become an indispensable component of treatment planning and adaptive radiotherapy (ART) at many radiotherapy centers. Deformable image registration (DIR) is also used in other areas of medical imaging, including motion corrected image reconstruction. Due to long computation time, clinical applications of DIR in radiation therapy and elsewhere have been limited and consequently relegated to offline analysis. With the recent advances in hardware and software, graphics processing unit (GPU) based computing is an emerging technology for general purpose computation, including DIR, and is suitable for highly parallelized computing. However, traditional general purpose computation on the GPU is limited because of the constraints of the available programming platforms. In addition, compared to CPU programming, the GPU currently has reduced dedicated processor memory, which can limit the useful working data set for parallelized processing. We present an implementation of the demons algorithm using the NVIDIA 8800 GTX GPU and the new CUDA programming language. The GPU performance is compared with single threading and multithreading CPU implementations on an Intel dual core 2.4 GHz CPU using the C programming language. CUDA provides a C-like language programming interface, and allows for direct access to the highly parallel compute units in the GPU. Comparisons for volumetric clinical lung images acquired using 4DCT were carried out. Computation time for 100 iterations in the range of 1.8-13.5 s was observed for the GPU with image size ranging from 2.0 x 10^6 to 14.2 x 10^6 pixels. The GPU registration was 55-61 times faster than the CPU for the single threading implementation, and 34-39 times faster for the multithreading implementation. For CPU based computing, the computational time generally has a linear dependence on image size for medical imaging data. Computational efficiency is characterized in terms of time per megapixel per iteration (TPMI) with units of seconds per megapixel per iteration (spmi). For the demons algorithm, our CPU implementation yielded largely invariant values of TPMI. The mean TPMIs were 0.527 spmi and 0.335 spmi for the single threading and multithreading cases, respectively, with <2% variation over the considered image data range. For GPU computing, we achieved TPMI = 0.00916 spmi with 3.7% variation, indicating optimized memory handling under CUDA. The paradigm of GPU based real-time DIR opens up a host of clinical applications for medical imaging.
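    To make the TPMI metric concrete, the sketch below recomputes it from the largest-image GPU figures quoted above (13.5 s for 100 iterations on 14.2 x 10^6 pixels); the result differs slightly from the reported mean of 0.00916 spmi, since that value averages over the whole range of image sizes.

      def tpmi(total_time_s: float, iterations: int, pixels: float) -> float:
          """Time per megapixel per iteration, in seconds per megapixel per iteration (spmi)."""
          megapixels = pixels / 1e6
          return total_time_s / (iterations * megapixels)

      # Largest-image GPU data point quoted in the abstract.
      print(f"GPU TPMI ~ {tpmi(13.5, 100, 14.2e6):.5f} spmi")   # ~0.00951 spmi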

  4. A coaxial-output capacitor-loaded annular pulse forming line.

    PubMed

    Li, Rui; Li, Yongdong; Su, Jiancang; Yu, Binxiong; Xu, Xiudong; Zhao, Liang; Cheng, Jie; Zeng, Bo

    2018-04-01

    A coaxial-output capacitor-loaded annular pulse forming line (PFL) is developed in order to reduce the flat top fluctuation amplitude of the forming quasi-square pulse and improve the quality of the pulse waveform produced by a Tesla-pulse forming network (PFN) type pulse generator. A single module composed of three involute dual-plate PFNs is designed, with a characteristic impedance of 2.44 Ω, an electrical length of 15 ns, and a sustaining voltage of 60 kV. The three involute dual-plate PFNs connected in parallel have the same impedance and electrical length. Because of the small inductance and capacitance per unit length of each involute dual-plate PFN, the upper cut-off frequency of the PFN is increased. As a result, the entire annular PFL has better high-frequency response capability. Meanwhile, the three dual-plate PFNs discharge in parallel, which more closely approximates a coaxial output. The series connecting inductance between two adjacent modules is significantly reduced when the annular PFL modules are connected in series. The pulse waveform distortion is reduced as the pulse propagates along the modules. Finally, the shielding electrode structure is applied on both sides of the module. The electromagnetic field is confined within the module when a single module discharges, and the electromagnetic coupling between the multi-stage annular PFLs is eliminated. Based on the principle of impedance matching between the multi-stage annular PFL and the coaxial PFL, the structural optimization design of a mixed PFL in a Tesla type pulse generator is completed with the transient field-circuit co-simulation method. The multi-stage annular PFL consists of 18 annular PFL modules in series, with a characteristic impedance of 44 Ω, an electrical length of 15 ns, and a sustaining voltage of 1 MV. The mixed PFL can generate quasi-square electrical pulses with a pulse width of 43 ns, and the fluctuation ratio of the pulse flat top is less than 8% when the pulse rise time is about 5 ns.
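    As a quick consistency check on the quoted numbers (assuming the module impedances simply add in series, which is not stated explicitly in the record):

      % Eighteen modules of 2.44-Ohm characteristic impedance connected in series:
      \[
        Z_{\text{series}} \;=\; 18 \times 2.44\,\Omega \;\approx\; 43.9\,\Omega \;\approx\; 44\,\Omega,
      \]
      % consistent with the 44-Ohm characteristic impedance quoted for the 18-stage annular PFL.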

  5. A coaxial-output capacitor-loaded annular pulse forming line

    NASA Astrophysics Data System (ADS)

    Li, Rui; Li, Yongdong; Su, Jiancang; Yu, Binxiong; Xu, Xiudong; Zhao, Liang; Cheng, Jie; Zeng, Bo

    2018-04-01

    A coaxial-output capacitor-loaded annular pulse forming line (PFL) is developed in order to reduce the flat top fluctuation amplitude of the forming quasi-square pulse and improve the quality of the pulse waveform produced by a Tesla-pulse forming network (PFN) type pulse generator. A single module composed of three involute dual-plate PFNs is designed, with a characteristic impedance of 2.44 Ω, an electrical length of 15 ns, and a sustaining voltage of 60 kV. The three involute dual-plate PFNs connected in parallel have the same impedance and electrical length. Because of the small inductance and capacitance per unit length of each involute dual-plate PFN, the upper cut-off frequency of the PFN is increased. As a result, the entire annular PFL has better high-frequency response capability. Meanwhile, the three dual-plate PFNs discharge in parallel, which more closely approximates a coaxial output. The series connecting inductance between two adjacent modules is significantly reduced when the annular PFL modules are connected in series. The pulse waveform distortion is reduced as the pulse propagates along the modules. Finally, the shielding electrode structure is applied on both sides of the module. The electromagnetic field is confined within the module when a single module discharges, and the electromagnetic coupling between the multi-stage annular PFLs is eliminated. Based on the principle of impedance matching between the multi-stage annular PFL and the coaxial PFL, the structural optimization design of a mixed PFL in a Tesla type pulse generator is completed with the transient field-circuit co-simulation method. The multi-stage annular PFL consists of 18 annular PFL modules in series, with a characteristic impedance of 44 Ω, an electrical length of 15 ns, and a sustaining voltage of 1 MV. The mixed PFL can generate quasi-square electrical pulses with a pulse width of 43 ns, and the fluctuation ratio of the pulse flat top is less than 8% when the pulse rise time is about 5 ns.

  6. The interaction of turbulence with parallel and perpendicular shocks

    NASA Astrophysics Data System (ADS)

    Adhikari, L.; Zank, G. P.; Hunana, P.; Hu, Q.

    2016-11-01

    Interplanetary shocks exist in most astrophysical flows, and modify the properties of the background flow. We apply the six coupled turbulence transport model equations of Zank et al. (2012) to study the interaction of turbulence with parallel and perpendicular shock waves in the solar wind. We model the 1D structure of a stationary perpendicular or parallel shock wave using a hyperbolic tangent function and the Rankine-Hugoniot conditions. A reduced turbulence transport model (the 4-equation model) is applied to parallel and perpendicular shock waves, and solved using a fourth-order Runge-Kutta method. We compare the model results with ACE spacecraft observations. We identify one quasi-parallel and one quasi-perpendicular event in the ACE spacecraft data sets, and compute various observed turbulent quantities such as the fluctuating magnetic and kinetic energy, the energy in forward and backward propagating modes, and the total turbulent energy upstream and downstream of the shock. We also calculate the error associated with each observed quantity, and fit the observed values by a least-squares method using a Fourier series fitting function. We find that the theoretical results are in reasonable agreement with observations. The energy in turbulent fluctuations is enhanced and the correlation length is approximately constant at the shock. Similarly, the normalized cross helicity increases across a perpendicular shock, and decreases across a parallel shock.
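    The reduced transport model itself is not reproduced in this record; as a generic illustration of the fourth-order Runge-Kutta stepping it is said to use, the sketch below advances an arbitrary system dy/dx = g(x, y). The test equation is a placeholder, not one of the turbulence transport equations.

      import numpy as np

      def rk4_step(g, x: float, y: np.ndarray, h: float) -> np.ndarray:
          """One classical fourth-order Runge-Kutta step for dy/dx = g(x, y)."""
          k1 = g(x, y)
          k2 = g(x + 0.5 * h, y + 0.5 * h * k1)
          k3 = g(x + 0.5 * h, y + 0.5 * h * k2)
          k4 = g(x + h, y + h * k3)
          return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

      if __name__ == "__main__":
          g = lambda x, y: -y                  # placeholder ODE with known solution exp(-x)
          x, y, h = 0.0, np.array([1.0]), 0.01
          for _ in range(100):                 # integrate from x = 0 to x = 1
              y = rk4_step(g, x, y, h)
              x += h
          print(y[0], np.exp(-1.0))            # the two values agree to roughly nine decimals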

  7. Prioritizing multiple therapeutic targets in parallel using automated DNA-encoded library screening

    NASA Astrophysics Data System (ADS)

    Machutta, Carl A.; Kollmann, Christopher S.; Lind, Kenneth E.; Bai, Xiaopeng; Chan, Pan F.; Huang, Jianzhong; Ballell, Lluis; Belyanskaya, Svetlana; Besra, Gurdyal S.; Barros-Aguirre, David; Bates, Robert H.; Centrella, Paolo A.; Chang, Sandy S.; Chai, Jing; Choudhry, Anthony E.; Coffin, Aaron; Davie, Christopher P.; Deng, Hongfeng; Deng, Jianghe; Ding, Yun; Dodson, Jason W.; Fosbenner, David T.; Gao, Enoch N.; Graham, Taylor L.; Graybill, Todd L.; Ingraham, Karen; Johnson, Walter P.; King, Bryan W.; Kwiatkowski, Christopher R.; Lelièvre, Joël; Li, Yue; Liu, Xiaorong; Lu, Quinn; Lehr, Ruth; Mendoza-Losana, Alfonso; Martin, John; McCloskey, Lynn; McCormick, Patti; O'Keefe, Heather P.; O'Keeffe, Thomas; Pao, Christina; Phelps, Christopher B.; Qi, Hongwei; Rafferty, Keith; Scavello, Genaro S.; Steiginga, Matt S.; Sundersingh, Flora S.; Sweitzer, Sharon M.; Szewczuk, Lawrence M.; Taylor, Amy; Toh, May Fern; Wang, Juan; Wang, Minghui; Wilkins, Devan J.; Xia, Bing; Yao, Gang; Zhang, Jean; Zhou, Jingye; Donahue, Christine P.; Messer, Jeffrey A.; Holmes, David; Arico-Muendel, Christopher C.; Pope, Andrew J.; Gross, Jeffrey W.; Evindar, Ghotas

    2017-07-01

    The identification and prioritization of chemically tractable therapeutic targets is a significant challenge in the discovery of new medicines. We have developed a novel method that rapidly screens multiple proteins in parallel using DNA-encoded library technology (ELT). Initial efforts were focused on the efficient discovery of antibacterial leads against 119 targets from Acinetobacter baumannii and Staphylococcus aureus. The success of this effort led to the hypothesis that the relative number of ELT binders alone could be used to assess the ligandability of large sets of proteins. This concept was further explored by screening 42 targets from Mycobacterium tuberculosis. Active chemical series for six targets from our initial effort as well as three chemotypes for DHFR from M. tuberculosis are reported. The findings demonstrate that parallel ELT selections can be used to assess ligandability and highlight opportunities for successful lead and tool discovery.

  8. Oxytocin: parallel processing in the social brain?

    PubMed

    Dölen, Gül

    2015-06-01

    Early studies attempting to disentangle the network complexity of the brain exploited the accessibility of sensory receptive fields to reveal circuits made up of synapses connected both in series and in parallel. More recently, extension of this organisational principle beyond the sensory systems has been made possible by the advent of modern molecular, viral and optogenetic approaches. Here, evidence supporting parallel processing of social behaviours mediated by oxytocin is reviewed. Understanding oxytocinergic signalling from this perspective has significant implications for the design of oxytocin-based therapeutic interventions aimed at disorders such as autism, where disrupted social function is a core clinical feature. Moreover, identification of opportunities for novel technology development will require a better appreciation of the complexity of the circuit-level organisation of the social brain. © 2015 The Authors. Journal of Neuroendocrinology published by John Wiley & Sons Ltd on behalf of British Society for Neuroendocrinology.

  9. Numerical study of the stress-strain state of reinforced plate on an elastic foundation by the Bubnov-Galerkin method

    NASA Astrophysics Data System (ADS)

    Beskopylny, Alexey; Kadomtseva, Elena; Strelnikov, Grigory

    2017-10-01

    The stress-strain state of a rectangular slab resting on an elastic foundation is considered. The slab material is isotropic. The slab has stiffening ribs directed parallel to both sides of the plate. Governing equations are obtained for determining the deflection for various mechanical and geometric characteristics of the stiffening ribs, which are parallel to different sides of the plate and have different bending and torsional rigidities. The calculation scheme assumes an orthotropic slab having different cylindrical stiffness in two mutually perpendicular directions parallel to the reinforcing ribs. The elastic foundation is described by the Winkler model. To determine the deflection, the Bubnov-Galerkin method is used. The deflection is represented as a series expansion with unknown coefficients in special polynomials that are combinations of Legendre polynomials.
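    The general shape of the Bubnov-Galerkin step described above, written for an assumed separable expansion (the notation is chosen here; the specific polynomial combinations and the plate operator with ribs and a Winkler foundation are in the paper, not reproduced):

      % Deflection expansion in basis functions built from Legendre polynomials,
      % with unknown coefficients c_{mn}:
      \[
        w(x,y) \;=\; \sum_{m=1}^{M}\sum_{n=1}^{N} c_{mn}\,\varphi_m(x)\,\psi_n(y).
      \]
      % Bubnov-Galerkin condition: the residual R(w) of the plate equation (including
      % the Winkler foundation term k w) is made orthogonal to every basis function,
      \[
        \iint_{\Omega} R(w)\,\varphi_i(x)\,\psi_j(y)\;dx\,dy \;=\; 0,
        \qquad i = 1,\dots,M,\;\; j = 1,\dots,N,
      \]
      % which yields a linear algebraic system for the coefficients c_{mn}.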

  10. Programming a hillslope water movement model on the MPP

    NASA Technical Reports Server (NTRS)

    Devaney, J. E.; Irving, A. R.; Camillo, P. J.; Gurney, R. J.

    1987-01-01

    A physically based numerical model was developed of heat and moisture flow within a hillslope on a parallel architecture computer, as a precursor to a model of a complete catchment. Moisture flow within a catchment includes evaporation, overland flow, flow in unsaturated soil, and flow in saturated soil. Because of the empirical evidence that moisture flow in unsaturated soil is mainly in the vertical direction, flow in the unsaturated zone can be modeled as a series of one dimensional columns. This initial version of the hillslope model includes evaporation and a single column of one dimensional unsaturated zone flow. This case has already been solved on an IBM 3081 computer and is now being applied to the massively parallel processor architecture so as to make the extension to the one dimensional case easier and to check the problems and benefits of using a parallel architecture machine.

  11. Research on Parallel Three Phase PWM Converters base on RTDS

    NASA Astrophysics Data System (ADS)

    Xia, Yan; Zou, Jianxiao; Li, Kai; Liu, Jingbo; Tian, Jun

    2018-01-01

    Parallel operation of converters can increase system capacity, but it may give rise to zero-sequence circulating current, so suppressing this current is an important goal in the design of parallel inverters. In this paper, the Real Time Digital Simulator (RTDS) is used to model the parallel converter system in real time and to study circulating-current suppression. An equivalent model of two parallel converters and the zero-sequence circulating current (ZSCC) was established and analyzed; a strategy using variable zero-vector control was then proposed to suppress the circulating current. For two parallel modular converters, a hardware-in-the-loop (HIL) study based on RTDS and a practical experiment were carried out; the results show that the proposed control strategy is feasible and effective.

  12. Translating science into the next generation meat quality program for Australian lamb.

    PubMed

    Pethick, D W; Ball, A J; Banks, R G; Gardner, G E; Rowe, J B; Jacob, R H

    2014-02-01

    This paper introduces a series of papers, in the form of a special edition, that report phenotypic analyses done in parallel with genotypic analyses for the Australian Sheep Industry Cooperative Research Centre (Sheep CRC) using data generated from the information nucleus flock (INF). This has allowed new knowledge to be gained of the genetic, environmental and management factors that affect the carcase and eating quality, visual appeal, odour and health attributes of Australian lamb meat. The research described involved close collaboration with commercial partners across the supply chain, in the sire breeding as well as the meat processing industries. This approach has enabled timely delivery and adoption of research results to industry in an unprecedented way and provides a good model for future research. © 2013.

  13. FUNGIBILITY AND CONSUMER CHOICE: EVIDENCE FROM COMMODITY PRICE SHOCKS.

    PubMed

    Hastings, Justine S; Shapiro, Jesse M

    2013-11-01

    We formulate a test of the fungibility of money based on parallel shifts in the prices of different quality grades of a commodity. We embed the test in a discrete-choice model of product quality choice and estimate the model using panel microdata on gasoline purchases. We find that when gasoline prices rise consumers substitute to lower octane gasoline, to an extent that cannot be explained by income effects. Across a wide range of specifications, we consistently reject the null hypothesis that households treat "gas money" as fungible with other income. We compare the empirical fit of three psychological models of decision-making. A simple model of category budgeting fits the data well, with models of loss aversion and salience both capturing important features of the time series.

  14. Study on the Growth Mechanism of K2Ti4O9 Crystal

    NASA Astrophysics Data System (ADS)

    Zhou, Xuesong; Fan, Jing; Wei, Xiaoli; Shen, Yi; Meng, Yanzhi

    2018-04-01

    Potassium hexatitanate (K2Ti4O9) whiskers were prepared by the kneading-drying-calcination method. After preparation of products under different calcination temperatures and holding times, their morphology and structure were characterized by thermogravimetric and differential thermal analysis, X-ray diffraction (XRD), scanning electron microscopy and transmission electron microscopy. The XRD analysis showed that the reaction mixture was completely converted to K2Ti4O9 crystals at 800 °C when the T/K ratio was 3. Based on analysis of the LS (liquid-solid) growth mechanism, the corresponding transformation reaction mechanism during roasting was elucidated. At low temperature, K2Ti4O9 whiskers grow mainly through the parallel growth mode; as the temperature increases, the series growth mode becomes more pronounced.

  15. Motion-seeded object-based attention for dynamic visual imagery

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Kim, Kyungnam

    2017-05-01

    This paper describes a novel system that finds and segments "objects of interest" in dynamic imagery (video) by (1) processing each frame with an advanced motion algorithm that pulls out regions exhibiting anomalous motion, and (2) extracting the boundary of each object of interest with a biologically inspired segmentation algorithm based on feature contours. The system uses a series of modular, parallel algorithms, which allows many complicated operations to be carried out in a very short time, and it can be used as a front end to a larger system that includes object recognition and scene understanding modules. Using this method, we show 90% accuracy with fewer than 0.1 false positives per frame of video, which represents a significant improvement over detection using a baseline attention algorithm.

  16. FUNGIBILITY AND CONSUMER CHOICE: EVIDENCE FROM COMMODITY PRICE SHOCKS*

    PubMed Central

    Hastings, Justine S.; Shapiro, Jesse M.

    2015-01-01

    We formulate a test of the fungibility of money based on parallel shifts in the prices of different quality grades of a commodity. We embed the test in a discrete-choice model of product quality choice and estimate the model using panel microdata on gasoline purchases. We find that when gasoline prices rise consumers substitute to lower octane gasoline, to an extent that cannot be explained by income effects. Across a wide range of specifications, we consistently reject the null hypothesis that households treat “gas money” as fungible with other income. We compare the empirical fit of three psychological models of decision-making. A simple model of category budgeting fits the data well, with models of loss aversion and salience both capturing important features of the time series. PMID:26937053

  17. Utilization management in radiology, part 2: perspectives and future directions.

    PubMed

    Duszak, Richard; Berlin, Jonathan W

    2012-10-01

    Increased utilization of medical imaging in the early part of the last decade has resulted in numerous efforts to reduce associated spending. Recent initiatives have focused on managing utilization with radiology benefits managers and real-time order entry decision support systems. Although these approaches might seem mutually exclusive and their application to radiology appears unique, the historical convergence and broad acceptance of both programs within the pharmacy sector may offer parallels for their potential future in medical imaging. In this second installment of a two-part series, anticipated trends in radiology utilization management are reviewed. Perspectives on current and future potential roles of radiologists in such initiatives are discussed, particularly in light of emerging physician payment models. Copyright © 2012 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  18. Fast parallel approach for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2009-12-01

    Two-dimensional fast Gabor transform algorithms are useful for real-time applications because of the high computational complexity of the traditional 2-D complex-valued discrete Gabor transform (CDGT). This paper presents two block time-recursive algorithms for the 2-D DHT-based real-valued discrete Gabor transform (RDGT) and its inverse transform, and develops a fast parallel approach for the implementation of the two algorithms. The computational complexity of the proposed parallel approach is analyzed and compared with that of the existing 2-D CDGT algorithms. The results indicate that the proposed parallel approach is attractive for real-time image processing.
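
    The real-valued transform underlying the RDGT is the discrete Hartley transform (DHT). As a hedged aside, and not a reproduction of the paper's block time-recursive algorithm, the sketch below computes a 1-D DHT of real data from an ordinary FFT via H[k] = Re(F[k]) - Im(F[k]) and checks it against the direct cas-kernel definition.

        import numpy as np

        def dht(x):
            """Discrete Hartley transform of a real 1-D signal, computed from the FFT
            via H[k] = Re(F[k]) - Im(F[k])."""
            f = np.fft.fft(x)
            return f.real - f.imag

        # Check against the direct cas-kernel definition, cas(t) = cos(t) + sin(t).
        rng = np.random.default_rng(1)
        x = rng.standard_normal(64)
        n = np.arange(x.size)
        angles = 2.0 * np.pi * np.outer(n, n) / x.size
        cas = np.cos(angles) + np.sin(angles)
        assert np.allclose(dht(x), cas @ x)
        print("DHT via FFT matches the direct definition")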

  19. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since there are data dependences from the end of one iteration to the beginning of the next and, furthermore, data input and data output are required every sampling period. Therefore, the parallelism inside the calculation required for a single time step, or a large basic block consisting of arithmetic assignment statements, must be exploited. In the proposed method, near-fine-grain tasks, each consisting of one or more floating-point operations, are generated to extract this parallelism and are assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead that the use of near-fine-grain tasks would otherwise cause. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.
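
    To make the compile-time idea concrete, the hedged sketch below implements a generic list scheduler, not OSCAR's actual scheduling algorithm: near-fine-grain tasks with precedence constraints are assigned statically to the processor that gives the earliest finish time, with communication costs ignored; the task graph and costs are hypothetical.

        # Static list scheduling of a small task graph (hypothetical data).
        def schedule(tasks, deps, cost, n_proc):
            proc_free = [0.0] * n_proc
            finish, assignment = {}, {}
            for t in tasks:                                   # tasks are assumed topologically ordered
                ready = max((finish[d] for d in deps.get(t, [])), default=0.0)
                p = min(range(n_proc), key=lambda i: max(proc_free[i], ready) + cost[t])
                start = max(proc_free[p], ready)
                finish[t] = start + cost[t]
                proc_free[p] = finish[t]
                assignment[t] = p
            return assignment, max(finish.values())

        # Hypothetical single-time-step task graph of floating-point operations.
        tasks = ["a", "b", "c", "d", "e"]
        deps = {"c": ["a", "b"], "d": ["b"], "e": ["c", "d"]}
        cost = {"a": 1.0, "b": 1.0, "c": 2.0, "d": 1.0, "e": 1.0}
        print(schedule(tasks, deps, cost, n_proc=2))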

  20. Focal mechanism of the seismic series prior to the 2011 El Hierro eruption

    NASA Astrophysics Data System (ADS)

    del Fresno, C.; Buforn, E.; Cesca, S.; Domínguez Cerdeña, I.

    2015-12-01

    The onset of the submarine eruption of El Hierro (10 October 2011) was preceded by three months of low-magnitude seismicity (Mw < 4.0) characterized by a well-documented hypocenter migration from the center to the south of the island. The seismic sources of this series have been studied in order to understand the physical process of magma migration, and different methodologies were used to obtain the focal mechanisms of the largest shocks. Firstly, we estimated joint fault plane solutions for 727 shocks using first-motion P polarities to infer the stress pattern of the sequence and to determine the time evolution of the principal axes orientation. The results show almost vertical T-axes during the first two months of the series and horizontal P-axes in the N-S direction coinciding with the migration. Secondly, a point-source moment tensor (MT) inversion was performed with data from the 21 largest earthquakes of the series (M > 3.5). Amplitude spectra were fitted at local distances (<20 km), and the reliability and stability of the results were evaluated with synthetic data. The results show a change in the focal mechanism pattern within the first days of October, varying from complex sources with larger non-double-couple components before that date to simpler strike-slip mechanisms with horizontal tension axes in the E-W direction during the week prior to the eruption onset. A detailed study was carried out for the 8 October 2011 earthquake (Mw = 4.0), whose focal mechanism was retrieved using MT inversion at regional and local distances. The results indicate an important strike-slip component and a null isotropic component. The stress pattern obtained corresponds to horizontal compression in a NNW-SSE direction, parallel to the southern ridge of the island, and a quasi-horizontal extension in an E-W direction. Finally, a simple source time function of 0.3 s was estimated for this shock using the empirical Green's function methodology.

  1. Time Parallel Solution of Linear Partial Differential Equations on the Intel Touchstone Delta Supercomputer

    NASA Technical Reports Server (NTRS)

    Toomarian, N.; Fijany, A.; Barhen, J.

    1993-01-01

    Evolutionary partial differential equations are usually solved by discretization in time and space and by applying a marching-in-time procedure, with data and algorithms potentially parallelized in the spatial domain.

  2. Improving the knowledge about dissolved oxygen and chlorophyll variability at ESTOC by using autonomous vehicles.

    NASA Astrophysics Data System (ADS)

    Cianca, A.; Caudet, E.; Vega, D.; Barrera, C.; Hernandez Brito, J.

    2016-02-01

    The European Station for Time Series in the Ocean, Canary Islands (ESTOC) is located in the Eastern Subtropical North Atlantic Gyre (29°10'N, 15°30'W). ESTOC began operations in 1994 with monthly ship-based sampling, complemented by hydrographic and sediment-trap moorings. Since 2002, ESTOC has been part of the European network for deep-sea ocean observatories through several projects, among others ANIMATE (Atlantic Network of Interdisciplinary Moorings and Time-series for Europe), EuroSITES (European Ocean Observatory Network) and the Fixed-point Open Ocean Observatory network (FixO3). The main purpose of these projects was to improve the time resolution of the biogeochemical measurements through moored biogeochemical sensors. Additionally, ESTOC has been included in the marine-maritime observational network of the Macaronesian region, supported by the European overseas territories programmes since 2009. This network aims to increase the quantity and quality of marine environmental observations; the goal is to understand, and ultimately predict, phenomena that affect the environment and, consequently, the socio-economy of the region. With this purpose, ESTOC has incorporated autonomous underwater vehicles (gliders) in order to increase the observational resolution and, by comparison with the parallel observational programmes, to study biogeochemical processes at different time-scale resolutions. This study investigates the time variability of the dissolved oxygen (DO) and chlorophyll distributions in the water column, focusing on the diel cycle and on the relevance of this variability to the already known seasonal distributions. Our interest is in assessing net community production and remineralization rates through the use of oxygen variations, establishing the relationship between DO anomaly values and the chlorophyll distribution in the water column.

  3. Determination of battery stability with advanced diagnostics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamb, Joshua; Torres-Castro, Loraine; Orendorff, Christopher

    Lithium-ion batteries for use in battery electric vehicles (BEVs) have seen considerable expansion over the last several years. It is expected that market share and the total number of BEVs will continue to increase over the coming years and that there will be changes in the environmental and use conditions for BEV batteries. Specifically, aging of the batteries and exposure to an increased number of crash conditions present a distinct possibility that batteries may be in an unknown state, posing danger to the operator, emergency response personnel and other support personnel. The present work expands on earlier efforts to explore the ability to rapidly monitor, using impedance spectroscopy techniques, and characterize the state of different battery systems during both typical operation and under abusive conditions. The work has found that it is possible to detect key changes in performance for strings of up to four cells in both series and parallel configurations, for both typical and abusive response. The sensitivity of the method for detecting change is higher for series configurations. For parallel configurations distinct changes are more difficult to ascertain, but under abusive conditions and at key frequencies it is feasible to use current rapid impedance techniques to identify change. The work has also found it feasible to use rapid impedance as an evaluation method under load conditions, especially for series strings of cells.
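
    The asymmetry between series and parallel strings noted above follows directly from how cell impedances combine. The hedged sketch below, which is not Sandia's diagnostic method, lumps four cells described by a simple R0 + R1 || C1 model into series and parallel strings and shows how a single degraded cell shifts the string impedance; all component values are hypothetical.

        import numpy as np

        # Each cell: a simple R0 + (R1 || C1) impedance model (hypothetical values).
        def cell_z(freq, r0=0.010, r1=0.015, c1=2.0):
            w = 2.0 * np.pi * freq
            return r0 + r1 / (1.0 + 1j * w * r1 * c1)

        freq = np.logspace(-1, 3, 200)                # 0.1 Hz to 1 kHz
        healthy = [cell_z(freq) for _ in range(4)]
        degraded = healthy[:3] + [cell_z(freq, r0=0.020, r1=0.030)]   # one aged cell

        i = np.argmin(np.abs(freq - 1.0))             # inspect the spectra near 1 Hz
        for label, cells in (("healthy string", healthy), ("one degraded cell", degraded)):
            z_series = sum(cells)                               # impedances add in series
            z_parallel = 1.0 / sum(1.0 / z for z in cells)      # admittances add in parallel
            print(f"{label}: |Z|_series = {abs(z_series[i]):.4f} ohm, "
                  f"|Z|_parallel = {abs(z_parallel[i]):.4f} ohm")

    In the series case the degraded cell's extra impedance appears in full, whereas in the parallel case it is diluted by the healthy cells, which is consistent with the reduced sensitivity reported above for parallel configurations.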

  4. Solid oxide fuel cell generator

    DOEpatents

    Di Croce, A. Michael; Draper, Robert

    1993-11-02

    A solid oxide fuel cell generator has a plenum containing at least two rows of spaced apart, annular, axially elongated fuel cells. An electrical conductor extending between adjacent rows of fuel cells connects the fuel cells of one row in parallel with each other and in series with the fuel cells of the adjacent row.

  5. Stuart Appleton Courtis: Tester, Reformer and Progressive.

    ERIC Educational Resources Information Center

    Johanningmeier, E. V.

    The career of Stuart Appleton Courtis in the growth of testing and educational measurement parallels the development of progressive education in the first half of the twentieth century. In 1909 he developed the standardized Courtis Arithmetic Test, Series A, the first objective test used in any city public schools. Continuing his work in testing,…

  6. An Alternative Approach to Capacitors in Complex Arrangements

    ERIC Educational Resources Information Center

    Atkin, Keith

    2012-01-01

    Examples of capacitive circuits easily reducible to series and parallel combinations abound in the textbooks but students are rarely exposed to examples where such simple procedures are apparently impossible. This paper extends that of a previous contributor by showing how the delta-star theorem of network theory can resolve such difficulties.…
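
    As a hedged illustration of the delta-star idea for capacitive networks (not the paper's own worked examples), the sketch below reduces a hypothetical five-capacitor bridge two ways: by a delta-to-star conversion followed by ordinary series/parallel reduction, and by nodal analysis exploiting the fact that capacitances combine like conductances.

        import numpy as np

        def series(*cs):
            return 1.0 / sum(1.0 / c for c in cs)

        def delta_to_star(c12, c23, c31):
            # For capacitors, the star leg at a node is (sum of pairwise delta
            # products) divided by the delta capacitor opposite that node.
            p = c12 * c23 + c23 * c31 + c31 * c12
            return p / c23, p / c31, p / c12      # legs at nodes 1, 2, 3

        # Hypothetical bridge (microfarads): A-P, A-Q, P-Q (bridge), P-B, Q-B.
        c_ap, c_aq, c_pq, c_pb, c_qb = 1.0, 2.0, 3.0, 4.0, 5.0

        # Route 1: delta-star on the delta {A, P, Q}, then series/parallel reduction.
        c_a, c_p, c_q = delta_to_star(c_ap, c_pq, c_aq)
        c_eq_star = series(c_a, series(c_p, c_pb) + series(c_q, c_qb))

        # Route 2: nodal analysis with V_A = 1, V_B = 0, treating C like conductance.
        g = np.array([[c_ap + c_pq + c_pb, -c_pq],
                      [-c_pq, c_aq + c_pq + c_qb]])
        v_p, v_q = np.linalg.solve(g, np.array([c_ap, c_aq]))
        c_eq_nodal = c_ap * (1.0 - v_p) + c_aq * (1.0 - v_q)

        assert np.isclose(c_eq_star, c_eq_nodal)
        print(f"equivalent capacitance of the bridge: {c_eq_star:.4f} uF")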

  7. Using Inquiry-Based Instruction for Teaching Science to Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Aydeniz, Mehmet; Cihak, David F.; Graham, Shannon C.; Retinger, Larryn

    2012-01-01

    The purpose of this study was to examine the effects of inquiry-based science instruction for five elementary students with learning disabilities (LD). Students participated in a series of inquiry-based activities targeting conceptual and application-based understanding of simple electric circuits, conductors and insulators, parallel circuits, and…

  8. Marijuana: The Real Story. It's Your Choice!

    ERIC Educational Resources Information Center

    Stronck, David R.

    This informational book on marijuana is part of a series of three interactive books on tobacco, alcohol, and marijuana; three informational books containing parallel content; and three teacher guides designed to give students in grades five through eight practice in using the information and skills presented in the books. The goal of this book and…

  9. Equity from a Vocational Education Research Perspective. Research and Development Series No. 214E.

    ERIC Educational Resources Information Center

    Eliason, Nancy Carol

    Female participation continues to increase in postsecondary vocational education and the labor market. This growth has paralleled increased funding under the Vocational Education Amendments of 1976 for sex-equity related research and demonstration activities. Funding has not, however, kept pace with needs of institutions trying to ensure equal…

  10. Converting Sunlight to Electricity--Some Practical Concerns

    ERIC Educational Resources Information Center

    Roman, Harry T.

    2005-01-01

    A photovoltaic panel can convert sunlight directly into electricity. If one connects enough of them in a series-parallel arrangement called a solar array, they can provide about half of a home's annual electricity needs. The panels comprise specially treated electronic materials that when exposed to sunlight will give up electrons freely, and…

  11. Tobacco: The Real Story. It's Your Choice.

    ERIC Educational Resources Information Center

    Stronck, David R.

    This informational book on tobacco is part of a series of three interactive books on tobacco, alcohol, and marijuana; three informational books containing parallel content; and three teacher guides designed to give students in grades five through eight practice in using the information and skills presented in the books. The goal of this book and…

  12. A Convenient Storage Rack for Graduated Cylinders

    ERIC Educational Resources Information Center

    Love, Brian

    2004-01-01

    A solution is proposed to the occasional problem of storing large numbers of graduated cylinders in teaching and research laboratories. The proposed design involves a series of parallel channels used to suspend inverted graduated cylinders by their bases.

  13. Work at the Uddevalla Volvo Plant from the Perspective of the Demand-Control Model

    ERIC Educational Resources Information Center

    Lottridge, Danielle

    2004-01-01

    The Uddevalla Volvo plant represents a different paradigm for automotive assembly. In parallel-flow work, self-managed work groups assemble entire automobiles with productivity comparable to that of conventional series-flow assembly lines. From the perspective of the demand-control model, operators at the Uddevalla plant have low physical and timing…

  14. Experimental fungicidal control of blister rust on sugar pine in California

    Treesearch

    Clarence R. Quick

    1964-01-01

    Parallel series of exploratory experiments with antifungal antibiotics and conventional chemical fungicides for control of blister rust on sugar pine were started in northern California in 1959. Several fungicides, both antibiotic and conventional, appear slightly systemic, but all tested materials are more effective when sprayed directly on infected tissues....

  15. Solid oxide fuel cell generator

    DOEpatents

    Di Croce, A.M.; Draper, R.

    1993-11-02

    A solid oxide fuel cell generator has a plenum containing at least two rows of spaced apart, annular, axially elongated fuel cells. An electrical conductor extending between adjacent rows of fuel cells connects the fuel cells of one row in parallel with each other and in series with the fuel cells of the adjacent row. 5 figures.

  16. Solid-state energy storage module employing integrated interconnect board

    DOEpatents

    Rouillard, Jean; Comte, Christophe; Daigle, Dominik; Hagen, Ronald A.; Knudson, Orlin B.; Morin, Andre; Ranger, Michel; Ross, Guy; Rouillard, Roger; St-Germain, Philippe; Sudano, Anthony; Turgeon, Thomas A.

    2000-01-01

    The present invention is directed to an improved electrochemical energy storage device. The electrochemical energy storage device includes a number of solid-state, thin-film electrochemical cells which are selectively interconnected in series or parallel through use of an integrated interconnect board. The interconnect board is typically disposed within a sealed housing which also houses the electrochemical cells, and includes a first contact and a second contact respectively coupled to first and second power terminals of the energy storage device. The interconnect board advantageously provides for selective series or parallel connectivity with the electrochemical cells, irrespective of electrochemical cell position within the housing. In one embodiment, a sheet of conductive material is processed by employing a known milling, stamping, or chemical etching technique to include a connection pattern which provides for flexible and selective interconnecting of individual electrochemical cells within the housing, which may be a hermetically sealed housing. Fuses and various electrical and electro-mechanical devices, such as bypass, equalization, and communication devices for example, may also be mounted to the interconnect board and selectively connected to the electrochemical cells.

  17. Numerical modelling of series-parallel cooling systems in power plant

    NASA Astrophysics Data System (ADS)

    Regucki, Paweł; Lewkowicz, Marek; Kucięba, Małgorzata

    2017-11-01

    The paper presents a mathematical model that allows one to study series-parallel hydraulic systems such as the cooling system of a power boiler's auxiliary devices or a closed cooling system including condensers and cooling towers. The analytical approach is based on a set of non-linear algebraic equations solved using numerical techniques. As a result of the iterative process, the set of volumetric flow rates of water through all the branches of the investigated hydraulic system is obtained. The calculations indicate the influence of changes in the pipeline's geometrical parameters on the total cooling-water flow rate in the analysed installation. Such an approach makes it possible to analyse different variants of modernization of the studied systems and to identify their critical elements. Based on these results, an investor can choose the variant of reconstruction of the installation that is optimal from the economic point of view. As examples of such calculations, two hydraulic installations are described: a boiler auxiliary cooling installation including two screw ash coolers, and a closed cooling system consisting of cooling towers and condensers.
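
    A minimal sketch in the spirit of the approach described above, under simplifying assumptions that are not the paper's: the total cooling-water flow splits among parallel branches whose pressure drop is taken as dp_i = k_i * Q_i^2, all parallel branches see the same drop, and the resulting non-linear algebraic system is solved iteratively; the branch resistances and total flow are hypothetical.

        import numpy as np
        from scipy.optimize import fsolve

        k = np.array([2.0e5, 3.5e5, 5.0e5, 8.0e5])   # hypothetical branch resistances, Pa/(m3/s)^2
        q_total = 0.25                               # total volumetric flow rate, m3/s

        def residuals(x):
            q, dp = x[:-1], x[-1]
            # Equal pressure drop across parallel branches and conservation of flow.
            return np.append(k * q**2 - dp, q.sum() - q_total)

        x0 = np.append(np.full(k.size, q_total / k.size), 1.0e3)
        sol = fsolve(residuals, x0)
        q, dp = sol[:-1], sol[-1]
        print("branch flows (m3/s):", np.round(q, 4), " common pressure drop (Pa):", round(dp, 1))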

  18. Exploring Fuel-Saving Potential of Long-Haul Truck Hybridization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Zhiming; LaClair, Tim J.; Smith, David E.

    We compare the simulated fuel economy of parallel, series, and dual-mode hybrid electric long-haul trucks, in addition to a conventional powertrain configuration, powered by a commercial 2010-compliant 15-L diesel engine over a freeway-dominated heavy-duty truck driving cycle. The driving cycle was obtained by measurement during normal driving conditions. The results indicated that both parallel and dual-mode hybrid powertrains were capable of improving fuel economy by 7% to 8%, but there was no significant fuel economy benefit for the series hybrid truck because of internal inefficiencies in energy exchange. When reduced aerodynamic drag and tire rolling resistance were combined with hybridization, there was a synergistic benefit for appropriate hybrids that increased the fuel economy gain to more than 15%. Long-haul hybrid trucks with reduced aerodynamic drag and rolling resistance offered lower peak engine loads, better kinetic energy recovery, and reduced average engine power demand. Therefore, it is expected that hybridization combined with load-reduction technologies offers important potential fuel savings for future long-haul trucks.

  19. Effect of Topology Structure on the Output Performance of an Automobile Exhaust Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Fang, W.; Quan, S. H.; Xie, C. J.; Ran, B.; Li, X. L.; Wang, L.; Jiao, Y. T.; Xu, T. W.

    2017-05-01

    The majority of the thermal energy released in an automotive internal combustion cycle is exhausted as waste heat through the tail pipe. This paper describes an automobile exhaust thermoelectric generator (AETEG) designed to recycle automobile waste heat. A model of the output characteristics of each thermoelectric device was established by measuring its open-circuit voltage and internal resistance and combining the output characteristics. To better describe the relationships between devices, the physical model was transformed into a topological model in which a connection matrix describes the relationship between any two thermoelectric devices. Different topological structures produce different power outputs; the output power was maximised by using an iterative algorithm to optimize the series-parallel electrical topology. The experimental results show that the output power of the optimal topology increases by 18.18% and 29.35% versus that of a purely series or purely parallel topology, respectively, and by 10.08% versus a manually defined structure based on user experience. The thermoelectric conversion device increased energy efficiency by 40% when compared with a traditional car.
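
    The paper's connection-matrix optimisation is not reproduced here, but the underlying trade-off can be sketched in a hedged way: for n identical modules with assumed open-circuit voltage and internal resistance, the snippet below enumerates the regular s-series × p-parallel groupings and picks the one that delivers the most power into a fixed load; the module and load values are hypothetical.

        def best_topology(n, v_oc, r_int, r_load):
            best = None
            for s in range(1, n + 1):
                if n % s:
                    continue
                p = n // s
                v = s * v_oc                     # series modules add voltage
                r = s * r_int / p                # p identical strings in parallel
                power = (v / (r + r_load)) ** 2 * r_load
                if best is None or power > best[0]:
                    best = (power, s, p)
            return best

        power, s, p = best_topology(n=24, v_oc=1.2, r_int=0.5, r_load=2.0)   # hypothetical module data
        print(f"best regular topology: {s}s x {p}p, output power = {power:.2f} W")

    As expected, the best grouping is the one whose combined internal resistance comes closest to matching the load, which is why neither the purely series nor the purely parallel arrangement is optimal in general.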

  20. Physics based modeling of a series parallel battery pack for asymmetry analysis, predictive control and life extension

    NASA Astrophysics Data System (ADS)

    Ganesan, Nandhini; Basu, Suman; Hariharan, Krishnan S.; Kolake, Subramanya Mayya; Song, Taewon; Yeo, Taejung; Sohn, Dong Kee; Doo, Seokgwang

    2016-08-01

    Lithium-ion batteries used for electric vehicle applications are subject to large currents and varied operating conditions, making battery pack design and life extension a challenging problem. With the increase in complexity, modeling and simulation can lead to insights that ensure optimal performance and life extension. In this manuscript, an electrochemical-thermal (ECT) coupled model for a 6 series × 5 parallel pack is developed for Li-ion cells with NCA/C electrodes and validated against experimental data. The contribution of the cathode to overall degradation at various operating conditions is assessed. Pack asymmetry is analyzed from a design and an operational perspective. Design-based asymmetry leads to a new approach for obtaining the individual cell responses of the pack from an average ECT output. Operational asymmetry is demonstrated in terms of the effects of thermal gradients on cycle life, and an efficient model predictive control technique is developed. The concept of a reconfigurable battery pack is studied using detailed simulations that can be used for effective monitoring and extension of battery pack life.
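
    As a much-simplified, hedged illustration of parallel-group asymmetry (an equivalent-circuit sketch, not the paper's electrochemical-thermal model), the snippet below computes how the five cells of one parallel group in a 6s5p pack share current when their open-circuit voltages and internal resistances differ slightly; all values are hypothetical.

        import numpy as np

        ocv = np.array([3.70, 3.70, 3.69, 3.71, 3.70])       # hypothetical open-circuit voltages (V)
        r   = np.array([0.020, 0.022, 0.025, 0.019, 0.030])  # hypothetical internal resistances (ohm)
        i_total = 50.0                                        # pack current through the group (A)

        # The common terminal voltage v satisfies sum((ocv_i - v)/r_i) = i_total.
        v = (np.sum(ocv / r) - i_total) / np.sum(1.0 / r)
        branch_i = (ocv - v) / r
        print("terminal voltage:", round(v, 4), "V")
        print("branch currents (A):", np.round(branch_i, 2), " sum =", round(branch_i.sum(), 2))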
