Sample records for modulated energy algorithms

  1. Energy-efficient routing, modulation and spectrum allocation in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Tan, Yanxia; Gu, Rentao; Ji, Yuefeng

    2017-07-01

    With the tremendous growth in bandwidth demand, the energy consumption problem in elastic optical networks (EONs) has become a hot topic of wide concern. The sliceable bandwidth-variable transponder in EONs, which can transmit/receive multiple optical flows, was recently proposed to improve a transponder's flexibility and save energy. In this paper, energy-efficient routing, modulation and spectrum allocation (EE-RMSA) in EONs with sliceable bandwidth-variable transponders is studied. To decrease the energy consumption, we develop a Mixed Integer Linear Programming (MILP) model with a corresponding EE-RMSA algorithm for EONs. The MILP model jointly considers the modulation format and optical grooming in the process of routing and spectrum allocation, with the objective of minimizing the energy consumption. With the help of genetic operators, the EE-RMSA algorithm iteratively optimizes the feasible routing path, modulation format and spectrum resource solutions by exploring the whole search space. In order to save energy, an optical-layer grooming strategy is designed to transmit the lightpath requests. Finally, simulation results verify that the proposed scheme is able to reduce the energy consumption of the network while maintaining the blocking probability (BP) performance, compared with the existing First-Fit-KSP, Iterative Flipping and EAMGSP algorithms, especially in large network topologies. Our results also demonstrate that the proposed EE-RMSA algorithm achieves almost the same performance as the MILP on an 8-node network.

  2. Characterizing isolated attosecond pulses with angular streaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.

    Here, we present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.

  3. Characterizing isolated attosecond pulses with angular streaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.

    We present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.

  4. Characterizing isolated attosecond pulses with angular streaking

    DOE PAGES

    Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.; ...

    2018-02-12

    Here, we present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.

  5. Characterizing isolated attosecond pulses with angular streaking

    DOE PAGES

    Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.; ...

    2018-02-13

    We present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.

  6. DEVELOPMENT OF A LOW-COST INFERENTIAL NATURAL GAS ENERGY FLOW RATE PROTOTYPE RETROFIT MODULE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E. Kelner; T.E. Owen; D.L. George

    2004-03-01

    In 1998, Southwest Research Institute® began a multi-year project co-funded by the Gas Research Institute (GRI) and the U.S. Department of Energy. The project goal is to develop a working prototype instrument module for natural gas energy measurement. The module will be used to retrofit a natural gas custody transfer flow meter for energy measurement, at a cost an order of magnitude lower than a gas chromatograph. Development and evaluation of the prototype retrofit natural gas energy flow meter in 2000-2001 included: (1) evaluation of the inferential gas energy analysis algorithm using supplemental gas databases and anticipated worst-case gas mixtures; (2) identification and feasibility review of potential sensing technologies for nitrogen diluent content; (3) experimental performance evaluation of infrared absorption sensors for carbon dioxide diluent content; and (4) procurement of a custom ultrasonic transducer and redesign of the ultrasonic pulse reflection correlation sensor for precision speed-of-sound measurements. A prototype energy meter module containing improved carbon dioxide and speed-of-sound sensors was constructed and tested in the GRI Metering Research Facility at SwRI. Performance of this module using transmission-quality natural gas and gas containing supplemental carbon dioxide up to 9 mol% resulted in gas energy determinations well within the inferential algorithm worst-case tolerance of ±2.4 Btu/scf (nitrogen diluent gas measured by gas chromatograph). A two-week field test was performed at a gas-fired power plant to evaluate the inferential algorithm and the data acquisition requirements needed to adapt the prototype energy meter module to practical field site conditions.

  7. Mathematical analysis and coordinated current allocation control in battery power module systems

    NASA Astrophysics Data System (ADS)

    Han, Weiji; Zhang, Liang

    2017-12-01

    As the major energy storage device and power supply source in numerous energy applications, such as solar panels, wind plants, and electric vehicles, battery systems often face the issue of charge imbalance among battery cells/modules, which can accelerate battery degradation, cause more energy loss, and even incur fire hazard. To tackle this issue, various circuit designs have been developed to enable charge equalization among battery cells/modules. Recently, the battery power module (BPM) design has emerged to be one of the promising solutions for its capability of independent control of individual battery cells/modules. In this paper, we propose a new current allocation method based on charging/discharging space (CDS) for performance control in BPM systems. Based on the proposed method, the properties of CDS-based current allocation with constant parameters are analyzed. Then, real-time external total power requirement is taken into account and an algorithm is developed for coordinated system performance control. By choosing appropriate control parameters, the desired system performance can be achieved by coordinating the module charge balance and total power efficiency. Besides, the proposed algorithm has complete analytical solutions, and thus is very computationally efficient. Finally, the efficacy of the proposed algorithm is demonstrated using simulations.
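    A minimal sketch of what a charging/discharging-space style allocation could look like, assuming (hypothetically) that each module's share of the total current is taken proportional to its remaining charge when discharging and to its remaining headroom when charging; function and parameter names are illustrative, not the paper's implementation.

```python
import numpy as np

def allocate_module_currents(total_current, soc, capacity_ah):
    """Split a total pack current among battery power modules in proportion to
    each module's remaining charging/discharging 'space' (hypothetical rule:
    discharging draws more from fuller modules, charging pushes more into
    emptier ones, so state of charge drifts toward balance)."""
    soc = np.asarray(soc, dtype=float)
    capacity_ah = np.asarray(capacity_ah, dtype=float)
    if total_current >= 0:            # discharging: weight by remaining charge
        space = soc * capacity_ah
    else:                             # charging: weight by remaining headroom
        space = (1.0 - soc) * capacity_ah
    return total_current * space / space.sum()

# Example: a 60 A load shared by three 20 Ah modules with unequal state of charge.
print(allocate_module_currents(60.0, soc=[0.9, 0.7, 0.5], capacity_ah=[20, 20, 20]))
```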

  8. Amber Plug-In for Protein Shop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliva, Ricardo

    2004-05-10

    The Amber Plug-in for ProteinShop has two main components: an AmberEngine library to compute the protein energy models, and a module to solve the energy minimization problem using an optimization algorithm from the OPT++ library. Together, these components allow the visualization of the protein folding process in ProteinShop. AmberEngine is an object-oriented library to compute molecular energies based on the Amber model. The main class is called ProteinEnergy. Its main interface methods are (1) "init", to initialize internal variables needed to compute the energy, and (2) "eval", to evaluate the total energy given a vector of coordinates. Additional methods allow the user to evaluate the individual components of the energy model (bond, angle, dihedral, non-bonded 1-4, and non-bonded energies) and to obtain the energy of each individual atom. The AmberEngine library source code includes examples and test routines that illustrate the use of the library in stand-alone programs. The energy minimization module uses the AmberEngine library and the nonlinear optimization library OPT++. OPT++ is open source software available under the GNU Lesser General Public License. The minimization module currently makes use of the LBFGS optimization algorithm in OPT++ to perform the energy minimization. Future releases may give the user a choice of other algorithms available in OPT++.

  9. Time-frequency analysis of time-varying modulated signals based on improved energy separation by iterative generalized demodulation

    NASA Astrophysics Data System (ADS)

    Feng, Zhipeng; Chu, Fulei; Zuo, Ming J.

    2011-03-01

    The energy separation algorithm is good at tracking instantaneous changes in the frequency and amplitude of modulated signals, but it is subject to the constraints of mono-component and narrow-band signals. In most cases, time-varying modulated vibration signals of machinery consist of multiple components and have instantaneous frequency trajectories on the time-frequency plane so complicated that they overlap in the frequency domain. For such signals, conventional filters fail to obtain mono-components of narrow band, and their rectangular decomposition of the time-frequency plane may split instantaneous frequency trajectories, thus resulting in information loss. Given the advantage of the generalized demodulation method in decomposing multi-component signals into mono-components, an iterative generalized demodulation method is used as a preprocessing tool to separate signals into mono-components, so as to satisfy the requirements of the energy separation algorithm. By this improvement, the energy separation algorithm can be generalized to a broad range of signals, as long as the instantaneous frequency trajectories of the signal components do not intersect on the time-frequency plane. Due to the good adaptability of the energy separation algorithm to instantaneous changes in signals and the mono-component decomposition nature of generalized demodulation, the derived time-frequency energy distribution has fine resolution and is free from cross-term interference. The good performance of the proposed time-frequency analysis is illustrated by analyses of a simulated signal and the on-site recorded nonstationary vibration signal of a hydroturbine rotor during a shut-down transient process, showing that it has potential to analyze time-varying modulated signals with multiple components.
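    The energy separation step the abstract refers to is commonly realized with the Teager-Kaiser operator and the DESA family of estimators; the sketch below shows a standard DESA-2 estimate of instantaneous amplitude and frequency for a mono-component signal (the iterative generalized demodulation preprocessing is not reproduced here).

```python
import numpy as np

def teager(x):
    """Discrete Teager-Kaiser energy operator: Psi[x](n) = x(n)^2 - x(n-1)*x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa2(x, fs):
    """DESA-2 energy separation: instantaneous amplitude and frequency (Hz)."""
    psi_x = teager(x)
    y = x[2:] - x[:-2]                      # symmetric difference, aligned with psi_x
    psi_y = teager(y)
    psi_x = psi_x[1:-1]                     # crop so both arrays cover the same samples
    arg = np.clip(1.0 - psi_y / (2.0 * psi_x), -1.0, 1.0)
    omega = 0.5 * np.arccos(arg)            # rad/sample
    amp = 2.0 * psi_x / np.sqrt(psi_y)
    return amp, omega * fs / (2.0 * np.pi)

# Sanity check on a pure 50 Hz tone: should recover ~50 Hz and unit amplitude.
fs = 1000.0
t = np.arange(0, 1, 1.0 / fs)
a, f = desa2(np.cos(2 * np.pi * 50.0 * t), fs)
print(a.mean(), f.mean())
```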

  10. Trigger and Reconstruction Algorithms for the Japanese Experiment Module- Extreme Universe Space Observatory (JEM-EUSO)

    NASA Technical Reports Server (NTRS)

    Adams, J. H., Jr.; Andreev, Valeri; Christl, M. J.; Cline, David B.; Crawford, Hank; Judd, E. G.; Pennypacker, Carl; Watts, J. W.

    2007-01-01

    The JEM-EUSO collaboration intends to study high energy cosmic ray showers using a large downward-looking telescope mounted on the Japanese Experiment Module of the International Space Station. The telescope focal plane is instrumented with approximately 300k pixels operating as a digital camera, taking snapshots at an approximately 1 MHz rate. We report an investigation of the trigger and reconstruction efficiency of various algorithms based on time and spatial analysis of the pixel images. Our goal is to develop trigger and reconstruction algorithms that will allow the instrument to detect energies low enough to connect smoothly to ground-based observations.

  11. The Modular Modeling System (MMS): User's Manual

    USGS Publications Warehouse

    Leavesley, G.H.; Restrepo, Pedro J.; Markstrom, S.L.; Dixon, M.; Stannard, L.G.

    1996-01-01

    The Modular Modeling System (MMS) is an integrated system of computer software that has been developed to provide the research and operational framework needed to support development, testing, and evaluation of physical-process algorithms and to facilitate integration of user-selected sets of algorithms into operational physical-process models. MMS uses a module library that contains modules for simulating a variety of water, energy, and biogeochemical processes. A model is created by selectively coupling the most appropriate modules from the library to create a 'suitable' model for the desired application. Where existing modules do not provide appropriate process algorithms, new modules can be developed. The MMS user's manual provides installation instructions and a detailed discussion of system concepts, module development, and model development and application using the MMS graphical user interface.

  12. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded, and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithms. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood-field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19-PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/
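    Of the reconstruction options named above, the Center-of-Gravity estimate is simple enough to sketch; the snippet below is a generic centroid reconstruction from sensor charges, not the ANTS implementation, and the toy geometry is hypothetical.

```python
import numpy as np

def center_of_gravity(signals, sensor_xy):
    """Centroid ('center of gravity') reconstruction of event position and energy.

    signals   : (n_events, n_sensors) array of PMT/SiPM charges per event
    sensor_xy : (n_sensors, 2) array of sensor centre coordinates
    Returns the estimated (x, y) per event and a total-charge energy estimator.
    """
    signals = np.asarray(signals, dtype=float)
    total = signals.sum(axis=1)
    xy = signals @ np.asarray(sensor_xy, dtype=float) / total[:, None]
    return xy, total

# Toy example: a 3x3 sensor grid with the light spot centred near (0.2, -0.1).
grid = np.array([(x, y) for y in (-1, 0, 1) for x in (-1, 0, 1)], dtype=float)
charge = np.exp(-np.sum((grid - [0.2, -0.1]) ** 2, axis=1))[None, :]
pos, energy = center_of_gravity(charge, grid)
print(pos, energy)
```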

  13. WE-EF-207-09: Single-Scan Dual-Energy CT Using Primary Modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrongolo, M; Zhu, L

    Purpose: Compared with conventional CT, dual energy CT (DECT) provides better material differentiation but requires projection data with two different effective x-ray spectra. Current DECT scanners use either a two-scan setting or costly imaging components, which are not feasible or available on open-gantry cone-beam CT systems. We propose a hardware-based method which utilizes primary modulation to enable single-scan DECT on a conventional CT scanner. The CT imaging geometry of primary modulation is identical to that used in our previous method for scatter removal, making it possible for future combination with effective scatter correction on the same CT scanner. Methods: We insert an attenuation sheet with a spatially varying pattern (the primary modulator) between the x-ray source and the imaged object. During the CT scan, the modulator selectively hardens the x-ray beam at specific detector locations. Thus, the proposed method simultaneously acquires high and low energy data. High and low energy CT images are then reconstructed from projections with missing data via an iterative CT reconstruction algorithm with gradient weighting. Proof-of-concept studies are performed using a copper modulator on a cone-beam CT system. Results: Our preliminary results on the Catphan 600 phantom indicate that the proposed method for single-scan DECT is able to successfully generate high-quality high and low energy CT images and distinguish different materials through basis material decomposition. By applying correction algorithms and using all of the acquired projection data, we can reconstruct a single CT image of comparable image quality to conventional CT images, i.e., without primary modulation. Conclusion: This work shows great promise in using a primary modulator to perform high-quality single-scan DECT imaging. Future studies will test method performance on anthropomorphic phantoms and perform quantitative analyses of image quality and DECT decomposition accuracy. We will use simulations to optimize the modulator material and geometry parameters.

  14. Classical Optimal Control for Energy Minimization Based On Diffeomorphic Modulation under Observable-Response-Preserving Homotopy.

    PubMed

    Soley, Micheline B; Markmann, Andreas; Batista, Victor S

    2018-06-12

    We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m(t;β) and dipole μ(r,t;β) is evolved classically on the potential energy surface V(r) coupled to an electric field E(t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m(t;β), μ(r,t;β), and E(t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.

  15. SU-FF-T-668: A Simple Algorithm for Range Modulation Wheel Design in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, X; Nazaryan, Vahagn; Gueye, Paul

    2009-06-01

    Purpose: To develop a simple algorithm for designing the range modulation wheel needed to generate a very smooth spread-out Bragg peak (SOBP) for proton therapy. Method and Materials: A simple algorithm has been developed to generate the weight factors of the pristine Bragg peaks that compose a smooth SOBP in proton therapy. We used a modified analytical Bragg peak function, based on the Geant4 Monte Carlo simulation toolkit, as the pristine Bragg peak input to our algorithm. A simple MATLAB® quadratic programming routine was used to optimize the cost function in our algorithm. Results: We found that the existing analytical Bragg peak function cannot be used directly as the pristine Bragg peak depth-dose input for optimizing the weight factors, since this model does not take into account the scattering introduced by the range shifts that modify the proton beam energies. We performed Geant4 simulations for a proton energy of 63.4 MeV with a 1.08 cm SOBP for the set of pristine Bragg peaks that compose this SOBP, and modified the existing analytical Bragg peak functions for their peak heights, ranges R0, and Gaussian energy spreads σE. We found that 19 pristine Bragg peaks are enough to achieve an SOBP flatness of 1.5%, the best flatness in the published literature. Conclusion: This work develops a simple algorithm to generate the weight factors used to design a range modulation wheel that produces a smooth SOBP in proton radiation therapy. We have found that a moderate number of pristine Bragg peaks is enough to generate an SOBP with flatness below 2%. The algorithm has the potential to generate a database, stored in the treatment planning system, for producing clinically acceptable SOBPs.
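    A hedged illustration of the weight-factor idea: given a set of pristine Bragg peak depth-dose curves, nonnegative weights that flatten the SOBP can be found with a least-squares solver. The Gaussian "peaks" below are only stand-ins for measured or simulated curves, and the solver choice (SciPy NNLS rather than the MATLAB routine used in the abstract) is an assumption.

```python
import numpy as np
from scipy.optimize import nnls

# Depth grid and a stand-in set of pristine Bragg peaks (Gaussians plus a crude
# proximal plateau are only placeholders for Monte Carlo or measured curves).
depth = np.linspace(0.0, 3.5, 400)                        # cm
ranges = np.linspace(1.9, 3.0, 19)                        # 19 pulled-back peaks
peaks = np.exp(-0.5 * ((depth[:, None] - ranges) / 0.06) ** 2)
peaks += 0.3 * (depth[:, None] < ranges)

# Ask for a flat dose of 1.0 across the intended SOBP region only.
sobp = (depth >= 1.9) & (depth <= 3.0)
weights, _ = nnls(peaks[sobp], np.ones(sobp.sum()))       # nonnegative weight factors

dose = peaks @ weights
flatness = (dose[sobp].max() - dose[sobp].min()) / dose[sobp].mean() * 100.0
print(f"weights: {np.round(weights, 3)}  flatness: {flatness:.1f}%")
```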

  16. Energy scavenging using piezoelectric sensors to power in pavement intelligent vehicle detection systems

    NASA Astrophysics Data System (ADS)

    Parhad, Ashutosh

    Intelligent transportation systems use in-pavement inductive loop sensors to collect real-time traffic data. This method is very expensive in terms of installation and maintenance. Our research is focused on developing advanced algorithms capable of scavenging enough energy to charge a battery. This electromechanical energy conversion is an efficient way of energy scavenging that makes use of piezoelectric sensors. The power generated is sufficient to run the vehicle detection module that has several sensors embedded together. To achieve these goals, we have developed a simulation module using software such as LabVIEW and Multisim. The simulation module recreates a practical scenario that takes into consideration vehicle weight, speed, wheel width and the frequency of the traffic.

  17. Modeling and analysis of solar distributed generation

    NASA Astrophysics Data System (ADS)

    Ortiz Rivera, Eduardo Ivan

    Recent changes in the global economy are creating a big impact on our daily life. The price of oil is increasing and the number of reserves decreases every day. Also, dramatic demographic changes are impacting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons that many countries are looking to alternative energy sources to produce electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Also in this work, three maximum power point tracking (MPPT) algorithms are presented to obtain the maximum power from a PVM. The first MPPT algorithm is a method based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on the approximation of the proposed PVM model using fractional polynomials, where the shape, boundary conditions and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter and prior knowledge of the load or load matching conditions. Also, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented in this work. The main reasons to develop these algorithms are monitoring of climate conditions, the elimination of temperature and solar irradiance sensors, cost reductions for a photovoltaic inverter system, and development of new algorithms to be integrated with maximum power point tracking algorithms. Finally, several PV power applications are presented, such as circuit analysis for a load connected to two different PV arrays, speed control for a dc motor connected to a PVM, and a novel single-phase photovoltaic inverter system using the Z-source converter.
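    For orientation, the snippet below sketches the textbook perturb-and-observe hill-climbing tracker, not the three model-based MPPT algorithms proposed in the dissertation; the toy power curve and all parameter values are hypothetical.

```python
def perturb_and_observe(measure_power, duty0=0.5, step=0.01, iters=200):
    """Generic perturb-and-observe MPPT: nudge the converter duty cycle and keep
    moving in the direction that increased the measured PV output power."""
    duty = duty0
    p_prev = measure_power(duty)
    direction = +1
    for _ in range(iters):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        p = measure_power(duty)
        if p < p_prev:                 # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return duty

# Toy PV-plus-converter power curve with its maximum near duty = 0.62.
print(perturb_and_observe(lambda d: 180.0 - 900.0 * (d - 0.62) ** 2))
```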

  18. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of thermodynamics entropy, the statistical physics energy minimization methods are directly applicable to the signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of D-dimensional transceiver and corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America

  19. Removal of eye blink artifacts in wireless EEG sensor networks using reduced-bandwidth canonical correlation analysis.

    PubMed

    Somers, Ben; Bertrand, Alexander

    2016-12-01

    Chronic, 24/7 EEG monitoring requires the use of highly miniaturized EEG modules, which only measure a few EEG channels over a small area. For improved spatial coverage, a wireless EEG sensor network (WESN) can be deployed, consisting of multiple EEG modules, which interact through short-distance wireless communication. In this paper, we aim to remove eye blink artifacts in each EEG channel of a WESN by optimally exploiting the correlation between EEG signals from different modules, under stringent communication bandwidth constraints. We apply a distributed canonical correlation analysis (CCA-)based algorithm, in which each module only transmits an optimal linear combination of its local EEG channels to the other modules. The method is validated on both synthetic and real EEG data sets, with emulated wireless transmissions. While strongly reducing the amount of data that is shared between nodes, we demonstrate that the algorithm achieves the same eye blink artifact removal performance as the equivalent centralized CCA algorithm, which is at least as good as other state-of-the-art multi-channel algorithms that require a transmission of all channels. Due to their potential for extreme miniaturization, WESNs are viewed as an enabling technology for chronic EEG monitoring. However, multi-channel analysis is hampered in WESNs due to the high energy cost for wireless communication. This paper shows that multi-channel eye blink artifact removal is possible with a significantly reduced wireless communication between EEG modules.
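    A rough sketch of the CCA idea described above, assuming full access to both modules' channels (i.e., the centralized baseline, not the bandwidth-reduced distributed algorithm): canonical components that are strongly correlated across modules are treated as blink-dominated and regressed out. It uses scikit-learn's CCA; all signals and dimensions are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def remove_shared_components(X, Y, n_remove=1):
    """Estimate the canonical components of X most correlated with another
    module's channels Y (eye blinks are highly correlated across modules) and
    project them out of X. X, Y: (n_samples, n_channels) arrays."""
    cca = CCA(n_components=n_remove)
    Xc, _ = cca.fit_transform(X, Y)            # shared (artifact-dominated) components
    beta, *_ = np.linalg.lstsq(Xc, X, rcond=None)
    return X - Xc @ beta                       # X with the shared components removed

# Toy example: two 4-channel modules contaminated by one common 'blink' source.
rng = np.random.default_rng(0)
blink = (rng.random(2000) < 0.01).astype(float)
blink = np.convolve(blink, np.hanning(50), mode="same")
X = rng.standard_normal((2000, 4)) + np.outer(blink, [4, 3, 2, 1])
Y = rng.standard_normal((2000, 4)) + np.outer(blink, [1, 2, 3, 4])
print(np.corrcoef(blink, remove_shared_components(X, Y)[:, 0])[0, 1])
```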

  20. Removal of eye blink artifacts in wireless EEG sensor networks using reduced-bandwidth canonical correlation analysis

    NASA Astrophysics Data System (ADS)

    Somers, Ben; Bertrand, Alexander

    2016-12-01

    Objective. Chronic, 24/7 EEG monitoring requires the use of highly miniaturized EEG modules, which only measure a few EEG channels over a small area. For improved spatial coverage, a wireless EEG sensor network (WESN) can be deployed, consisting of multiple EEG modules, which interact through short-distance wireless communication. In this paper, we aim to remove eye blink artifacts in each EEG channel of a WESN by optimally exploiting the correlation between EEG signals from different modules, under stringent communication bandwidth constraints. Approach. We apply a distributed canonical correlation analysis (CCA-)based algorithm, in which each module only transmits an optimal linear combination of its local EEG channels to the other modules. The method is validated on both synthetic and real EEG data sets, with emulated wireless transmissions. Main results. While strongly reducing the amount of data that is shared between nodes, we demonstrate that the algorithm achieves the same eye blink artifact removal performance as the equivalent centralized CCA algorithm, which is at least as good as other state-of-the-art multi-channel algorithms that require a transmission of all channels. Significance. Due to their potential for extreme miniaturization, WESNs are viewed as an enabling technology for chronic EEG monitoring. However, multi-channel analysis is hampered in WESNs due to the high energy cost for wireless communication. This paper shows that multi-channel eye blink artifact removal is possible with a significantly reduced wireless communication between EEG modules.

  1. Aerocapture Guidance Performance for the Neptune Orbiter

    NASA Technical Reports Server (NTRS)

    Masciarelli, James P.; Westhelle, Carlos H.; Graves, Claude A.

    2004-01-01

    A performance evaluation of the Hybrid Predictor-corrector Aerocapture Scheme (HYPAS) guidance algorithm for aerocapture at Neptune is presented in this paper for a mission to Neptune and the Neptune moon Triton. This mission has several challenges not experienced in previous aerocapture guidance assessments. These challenges are a very high Neptune arrival speed, atmospheric exit into a high energy orbit about Neptune, and a very high ballistic coefficient that results in a low altitude acceleration capability when combined with the aeroshell L/D. The evaluation includes a definition of the entry corridor, a comparison to the theoretical optimum performance, and guidance responses to variations in atmospheric density, aerodynamic coefficients and flight path angle for various vehicle configurations (ballistic numbers). The benefits of utilizing angle-of-attack modulation in addition to bank angle modulation to improve flight performance are also discussed. The results show that, despite large sensitivities in apoapsis targeting, the algorithm performs within the allocated delta-V budget for the Neptune mission with bank-angle-only modulation. The addition of angle-of-attack modulation with as little as 5 degrees of amplitude significantly improves the scatter in the final orbit apoapsis. Although angle-of-attack modulation complicates the vehicle design, the performance enhancement reduces aerocapture risk and reduces the propellant consumption needed to reach the high energy target orbit with a conventional propulsion system.

  2. Voltage scheduling for low power/energy

    NASA Astrophysics Data System (ADS)

    Manzak, Ali

    2001-07-01

    Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage, since voltage is quadratically related to power. This dissertation considers the problem of lowering the supply voltage at (i) the system level and at (ii) the behavioral level. At the system level, the voltage of the variable-voltage processor is dynamically changed with the work load. Processors with limited-size buffers as well as those with very large buffers are considered. Given the task arrival times, deadline times, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to that of the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum energy task scheduling algorithms are developed for processors with limited-size buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about the size of the task (maximum, minimum), execution time (best case, worst case) and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining the throughput. Such a scheme has the advantage of allowing modules on the critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned to lower voltage levels (thus reducing the power consumption). A polynomial-time resource- and latency-constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimum. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
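    As a simple numeric illustration of why lowering the supply voltage saves energy, the sketch below compares the energy of finishing a fixed workload before a deadline at several clock speeds under a convex power model (P roughly proportional to f³ when V scales with f); it is a generic example, not the dissertation's scheduling algorithms, and the numbers are arbitrary.

```python
import numpy as np

def schedule_energy(cycles, deadline, speeds_hz):
    """Relative energy of finishing `cycles` of work before `deadline` at each
    feasible candidate speed, with dynamic power modeled as P ~ f^3 (V ~ f).
    The slowest speed that still meets the deadline uses the least energy."""
    f = np.asarray(speeds_hz, dtype=float)
    busy_time = cycles / f
    feasible = busy_time <= deadline
    energy = f[feasible] ** 3 * busy_time[feasible]   # E = P * t, so E ~ cycles * f^2
    return dict(zip(f[feasible] / 1e9, energy / energy.min()))

# 8e8 cycles due within 1 s: 0.8 GHz is feasible and the cheapest option.
print(schedule_energy(8e8, 1.0, [0.8e9, 1.0e9, 1.2e9]))
```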

  3. Hierarchical MFMO Circuit Modules for an Energy-Efficient SDR DBF

    NASA Astrophysics Data System (ADS)

    Mar, Jeich; Kuo, Chi-Cheng; Wu, Shin-Ru; Lin, You-Rong

    The hierarchical multi-function matrix operation (MFMO) circuit modules are designed using coordinate rotations digital computer (CORDIC) algorithm for realizing the intensive computation of matrix operations. The paper emphasizes that the designed hierarchical MFMO circuit modules can be used to develop a power-efficient software-defined radio (SDR) digital beamformer (DBF). The formulas of the processing time for the scalable MFMO circuit modules implemented in field programmable gate array (FPGA) are derived to allocate the proper logic resources for the hardware reconfiguration. The hierarchical MFMO circuit modules are scalable to the changing number of array branches employed for the SDR DBF to achieve the purpose of power saving. The efficient reuse of the common MFMO circuit modules in the SDR DBF can also lead to energy reduction. Finally, the power dissipation and reconfiguration function in the different modes of the SDR DBF are observed from the experiment results.
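    Since the modules are built around CORDIC, a minimal rotation-mode CORDIC iteration is sketched below for reference; this is the textbook shift-and-add recurrence in floating point, not the paper's FPGA design.

```python
import math

def cordic_rotate(x, y, angle, iterations=24):
    """Rotation-mode CORDIC: rotate (x, y) by `angle` (radians, |angle| < pi/2)
    using only the shift-and-add style recurrence plus a final gain correction."""
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))      # accumulated CORDIC gain
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
        z -= d * math.atan(2.0 ** (-i))
    return x / gain, y / gain

# Rotating (1, 0) by 30 degrees should give (cos 30, sin 30) ~ (0.866, 0.5).
print(cordic_rotate(1.0, 0.0, math.radians(30.0)))
```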

  4. Sparsity constrained split feasibility for dose-volume constraints in inverse planning of intensity-modulated photon or proton therapy

    NASA Astrophysics Data System (ADS)

    Penfold, Scott; Zalas, Rafał; Casiraghi, Margherita; Brooke, Mark; Censor, Yair; Schulte, Reinhard

    2017-05-01

    A split feasibility formulation for the inverse problem of intensity-modulated radiation therapy treatment planning with dose-volume constraints included in the planning algorithm is presented. It involves a new type of sparsity constraint that enables the inclusion of a percentage-violation constraint in the model problem and its handling by continuous (as opposed to integer) methods. We propose an iterative algorithmic framework for solving such a problem by applying the feasibility-seeking CQ-algorithm of Byrne combined with the automatic relaxation method that uses cyclic projections. Detailed implementation instructions are furnished. Functionality of the algorithm was demonstrated through the creation of an intensity-modulated proton therapy plan for a simple 2D C-shaped geometry and also for a realistic base-of-skull chordoma treatment site. Monte Carlo simulations of proton pencil beams of varying energy were conducted to obtain dose distributions for the 2D test case. A research release of the Pinnacle 3 proton treatment planning system was used to extract pencil beam doses for a clinical base-of-skull chordoma case. In both cases the beamlet doses were calculated to satisfy dose-volume constraints according to our new algorithm. Examination of the dose-volume histograms following inverse planning with our algorithm demonstrated that it performed as intended. The application of our proposed algorithm to dose-volume constraint inverse planning was successfully demonstrated. Comparison with optimized dose distributions from the research release of the Pinnacle 3 treatment planning system showed the algorithm could achieve equivalent or superior results.
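    The feasibility-seeking step named above (Byrne's CQ algorithm) has a compact form; the sketch below applies it to a toy nonnegative-weights / dose-band problem. The projections and the dose-influence matrix are illustrative only, and the paper's sparsity (percentage-violation) constraint and relaxation scheme are not modeled.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, iters=500):
    """Byrne's CQ iteration for the split feasibility problem 'find x in C with
    A x in Q':  x <- P_C(x - gamma * A^T (A x - P_Q(A x)))."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2        # safe step size, below 2 / ||A||^2
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Toy analogue of fluence planning: nonnegative beamlet weights (set C) whose
# mapped doses must lie inside a prescribed band (set Q).
rng = np.random.default_rng(1)
A = rng.random((6, 4))                             # toy dose-influence matrix
x = cq_algorithm(A,
                 proj_C=lambda v: np.clip(v, 0.0, None),
                 proj_Q=lambda d: np.clip(d, 0.8, 1.2),
                 x0=np.zeros(4))
print(x, A @ x)
```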

  5. Gas energy meter for inferential determination of thermophysical properties of a gas mixture at multiple states of the gas

    DOEpatents

    Morrow, Thomas B. (San Antonio, TX); Kelner, Eric (San Antonio, TX); Owen, Thomas E. (Helotes, TX)

    2008-07-08

    A gas energy meter that acquires the data and performs the processing for an inferential determination of one or more gas properties, such as heating value, molecular weight, or density. The meter has a sensor module that acquires temperature, pressure, CO2, and speed of sound data. Data is acquired at two different states of the gas, which eliminates the need to determine the concentration of nitrogen in the gas. A processing module receives this data and uses it to perform a "two-state" inferential algorithm.

  6. ELF: An Extended-Lagrangian Free Energy Calculation Module for Multiple Molecular Dynamics Engines.

    PubMed

    Chen, Haochuan; Fu, Haohao; Shao, Xueguang; Chipot, Christophe; Cai, Wensheng

    2018-06-18

    Extended adaptive biasing force (eABF), a collective variable (CV)-based importance-sampling algorithm, has proven to be very robust and efficient compared with the original ABF algorithm. Its implementation in Colvars, a software addition to molecular dynamics (MD) engines, is, however, currently limited to NAMD and LAMMPS. To broaden the scope of eABF and its variants, like its generalized form (egABF), and make them available to other MD engines, e.g., GROMACS, AMBER, CP2K, and openMM, we present a PLUMED-based implementation, called extended-Lagrangian free energy calculation (ELF). This implementation can be used as a stand-alone gradient estimator for other CV-based sampling algorithms, such as temperature-accelerated MD (TAMD) and extended-Lagrangian metadynamics (MtD). ELF provides the end user with a convenient framework to help select the best-suited importance-sampling algorithm for a given application without any commitment to a particular MD engine.

  7. Optimal sensor placement for deployable antenna module health monitoring in SSPS using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Chen; Zhang, Xuepan; Huang, Xiaoqi; Cheng, ZhengAi; Zhang, Xinghua; Hou, Xinbin

    2017-11-01

    The concept of the space solar power satellite (SSPS) is an advanced system for collecting solar energy in space and transmitting it wirelessly to earth. However, due to the long service life, in-orbit damage may occur in the structural system of the SSPS. Therefore, sensor placement layouts for structural health monitoring should be considered from the outset in this concept. In this paper, an optimal sensor placement method for deployable antenna module health monitoring in the SSPS, based on a genetic algorithm, is proposed. According to the characteristics of the deployable antenna module, the candidate sensor placement designs are listed. Furthermore, based on the effective independence method and an effective interval index, a combined fitness function is defined to maximize linear independence of the targeted modes while simultaneously avoiding redundant information at nearby positions. In addition, by considering the reliability of sensors located at deployable mechanisms, another fitness function is constructed. Moreover, the solution process of optimal sensor placement using the genetic algorithm is clearly demonstrated. Finally, a numerical example of the sensor placement layout in a deployable antenna module of the SSPS, which jointly considers all the above-mentioned performance measures, is presented. The results illustrate the effectiveness and feasibility of the proposed sensor placement method for the SSPS.
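    The effective independence criterion mentioned above can be sketched in a few lines; the snippet below is the classic EfI backward elimination on a toy set of candidate locations and target modes, without the genetic-algorithm search, interval index or reliability terms the paper adds.

```python
import numpy as np

def effective_independence(mode_shapes, n_sensors):
    """Classic effective-independence (EfI) sensor selection: iteratively drop
    the candidate location contributing least to the linear independence of the
    target mode shapes until n_sensors locations remain."""
    phi = np.asarray(mode_shapes, dtype=float)
    keep = list(range(phi.shape[0]))
    while len(keep) > n_sensors:
        p = phi[keep]
        # Diagonal of the projector P = p (p^T p)^-1 p^T: each location's EfI value.
        ed = np.einsum('ij,ji->i', p @ np.linalg.inv(p.T @ p), p.T)
        keep.pop(int(np.argmin(ed)))
    return keep

# Toy example: pick 4 sensor locations out of 12 candidates for 3 target modes.
x = np.linspace(0, 1, 12)
modes = np.column_stack([np.sin(np.pi * k * x) for k in (1, 2, 3)])
print(effective_independence(modes, 4))
```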

  8. Procedure of recovery of pin-by-pin fields of energy release in the core of VVER-type reactor for the BIPR-8 code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordienko, P. V., E-mail: gorpavel@vver.kiae.ru; Kotsarev, A. V.; Lizorkin, M. P.

    2014-12-15

    The procedure for recovering pin-by-pin energy-release fields for the BIPR-8 code is briefly described, together with the BIPR-8 algorithm used in the nodal computation of the reactor core on which the recovery of pin-by-pin energy-release fields is based. The description and results of the verification using the pin-by-pin energy-release recovery module and the TVS-M program are given.

  9. A General, Adaptive, Roadmap-Based Algorithm for Protein Motion Computation.

    PubMed

    Molloy, Kevin; Shehu, Amarda

    2016-03-01

    Precious information on protein function can be extracted from a detailed characterization of protein equilibrium dynamics. This remains elusive in wet and dry laboratories, as function-modulating transitions of a protein between functionally-relevant, thermodynamically-stable and meta-stable structural states often span disparate time scales. In this paper we propose a novel, robotics-inspired algorithm that circumvents time-scale challenges by drawing analogies between protein motion and robot motion. The algorithm adapts the popular roadmap-based framework in robot motion computation to handle the more complex protein conformation space and its underlying rugged energy surface. Given known structures representing stable and meta-stable states of a protein, the algorithm yields a time- and energy-prioritized list of transition paths between the structures, with each path represented as a series of conformations. The algorithm balances computational resources between a global search aimed at obtaining a global view of the network of protein conformations and their connectivity and a detailed local search focused on realizing such connections with physically-realistic models. Promising results are presented on a variety of proteins that demonstrate the general utility of the algorithm and its capability to improve the state of the art without employing system-specific insight.

  10. The autism diagnostic observation schedule, module 4: revised algorithm and standardized severity scores.

    PubMed

    Hus, Vanessa; Lord, Catherine

    2014-08-01

    The recently published Autism Diagnostic Observation Schedule, 2nd edition (ADOS-2) includes revised diagnostic algorithms and standardized severity scores for modules used to assess younger children. A revised algorithm and severity scores are not yet available for Module 4, used with verbally fluent adults. The current study revises the Module 4 algorithm and calibrates raw overall and domain totals to provide metrics of autism spectrum disorder (ASD) symptom severity. Sensitivity and specificity of the revised Module 4 algorithm exceeded 80 % in the overall sample. Module 4 calibrated severity scores provide quantitative estimates of ASD symptom severity that are relatively independent of participant characteristics. These efforts increase comparability of ADOS scores across modules and should facilitate efforts to examine symptom trajectories from toddler to adulthood.

  11. Extremum seeking-based optimization of high voltage converter modulator rise-time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheinker, Alexander; Bland, Michael; Krstic, Miroslav

    2013-02-01

    We digitally implement an extremum seeking (ES) algorithm, which optimizes the rise time of the output voltage of a high voltage converter modulator (HVCM) at the Los Alamos Neutron Science Center (LANSCE) HVCM test stand by iteratively and simultaneously tuning the first 8 switching edges of each of the three phase drive waveforms (24 variables total). We achieve a 50 μs rise time, which is a reduction by half compared to the 100 μs achieved at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory. Considering that HVCMs typically operate with an output voltage of 100 kV, with a 60 Hz repetition rate, the 50 μs rise-time reduction will result in very significant energy savings. The ES algorithm proves successful despite the noisy measurements and cost calculations, confirming the theoretical results that the algorithm is not affected by noise whose frequency components are independent of the perturbing frequencies.
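    A minimal single-parameter extremum-seeking loop, to illustrate the dither/demodulate/step structure such a controller relies on; the cost function, gains and dither frequency below are arbitrary stand-ins, not the HVCM controller's settings (the controller in the abstract tunes 24 such parameters simultaneously).

```python
import numpy as np

def extremum_seeking(cost, theta0, iters=4000, dt=0.01,
                     amp=0.05, omega=20.0, gain=0.8):
    """Single-parameter extremum seeking: dither the setting with a sinusoid,
    demodulate the measured cost to estimate the local gradient, step against it."""
    theta = float(theta0)
    for k in range(iters):
        t = k * dt
        probe = theta + amp * np.cos(omega * t)     # perturbed operating point
        J = cost(probe)                             # noisy scalar measurement is fine
        grad_est = (2.0 / amp) * J * np.cos(omega * t)
        theta -= gain * dt * grad_est               # averaged effect: gradient descent
    return theta

# Toy rise-time-like cost with its minimum at 0.3 plus measurement noise;
# the loop should settle near 0.3.
rng = np.random.default_rng(0)
print(extremum_seeking(lambda u: (u - 0.3) ** 2 + 0.02 * rng.standard_normal(),
                       theta0=0.8))
```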

  12. Optimal trajectories for aeroassisted orbital transfer

    NASA Technical Reports Server (NTRS)

    Miele, A.; Venkataraman, P.

    1983-01-01

    Consideration is given to classical and minimax problems involved in aeroassisted transfer from high earth orbit (HEO) to low earth orbit (LEO). The transfer is restricted to coplanar operation, with trajectory control effected by means of lift modulation. The performance of the maneuver is indexed to the energy expenditure or, alternatively, the time integral of the heating rate. First-order optimality conditions are defined for the classical approach, as are a sequential gradient-restoration algorithm and a combined gradient-restoration algorithm. Minimization techniques are presented for the aeroassisted transfer energy consumption and the time integral of the heating rate, as well as minimization of the pressure. It is shown from the eigenvalues of the Jacobian matrix that the differential system is both stiff and unstable, implying that the sequential gradient-restoration algorithm in its present version is unsuitable. A new method, involving a multipoint approach to the two-point boundary value problem, is recommended.

  13. Incorporating partial shining effects in proton pencil-beam dose calculation

    NASA Astrophysics Data System (ADS)

    Li, Yupeng; Zhang, Xiaodong; Fwu Lii, Ming; Sahoo, Narayan; Zhu, Ron X.; Gillin, Michael; Mohan, Radhe

    2008-02-01

    A range modulator wheel (RMW) is an essential component in passively scattered proton therapy. We have observed that a proton beam spot may shine on multiple steps of the RMW. Proton dose calculation algorithms normally do not consider the partial shining effect, and thus overestimate the dose at the proximal shoulder of spread-out Bragg peak (SOBP) compared with the measurement. If the SOBP is adjusted to better fit the plateau region, the entrance dose is likely to be underestimated. In this work, we developed an algorithm that can be used to model this effect and to allow for dose calculations that better fit the measured SOBP. First, a set of apparent modulator weights was calculated without considering partial shining. Next, protons spilled from the accelerator reaching the modulator wheel were simplified as a circular spot of uniform intensity. A weight-splitting process was then performed to generate a set of effective modulator weights with the partial shining effect incorporated. The SOBPs of eight options, which are used to label different combinations of proton-beam energy and scattering devices, were calculated with the generated effective weights. Our algorithm fitted the measured SOBP at the proximal and entrance regions much better than the ones without considering partial shining effect for all SOBPs of the eight options. In a prostate patient, we found that dose calculation without considering partial shining effect underestimated the femoral head and skin dose.

  14. Energy management of fuel cell/solar cell/supercapacitor hybrid power source

    NASA Astrophysics Data System (ADS)

    Thounthong, Phatiphat; Chunkag, Viboon; Sethakul, Panarit; Sikkabut, Suwat; Pierfederici, Serge; Davat, Bernard

    This study presents an original control algorithm for a hybrid energy system with a renewable energy source, namely, a polymer electrolyte membrane fuel cell (PEMFC) and a photovoltaic (PV) array. A single storage device, i.e., a supercapacitor (ultracapacitor) module, is in the proposed structure. The main weak point of fuel cells (FCs) is slow dynamics because the power slope is limited to prevent fuel starvation problems, improve performance and increase lifetime. The very fast power response and high specific power of a supercapacitor complements the slower power output of the main source to produce the compatibility and performance characteristics needed in a load. The energy in the system is balanced by d.c.-bus energy regulation (or indirect voltage regulation). A supercapacitor module functions by supplying energy to regulate the d.c.-bus energy. The fuel cell, as a slow dynamic source in this system, supplies energy to the supercapacitor module in order to keep it charged. The photovoltaic array assists the fuel cell during daytime. To verify the proposed principle, a hardware system is realized with analog circuits for the fuel cell, solar cell and supercapacitor current control loops, and with numerical calculation (dSPACE) for the energy control loops. Experimental results with small-scale devices, namely, a PEMFC (1200 W, 46 A) manufactured by the Ballard Power System Company, a photovoltaic array (800 W, 31 A) manufactured by the Ekarat Solar Company and a supercapacitor module (100 F, 32 V) manufactured by the Maxwell Technologies Company, illustrate the excellent energy-management scheme during load cycles.

  15. Design of a compact low-power human-computer interaction equipment for hand motion

    NASA Astrophysics Data System (ADS)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands for convenience, endurance, responsiveness and naturalness. This paper describes the design of a compact wearable low-power HCI device applied to gesture recognition. The system combines multi-mode sensing signals, a vision signal and a motion signal, and the equipment is fitted with a depth camera and a motion sensor. The dimensions (40 mm × 30 mm) and structure are compact and portable after tight integration. The system is built on a modular layered framework, which supports real-time collection (60 fps), processing and transmission via synchronized fusion of asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system makes use of low-power components, managing peripheral state dynamically, switching into idle mode intelligently, applying pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizing the algorithm based on the motion sensor. To test the equipment's function and performance, a gesture recognition algorithm is applied to the system. The results show that the overall energy consumption can be as low as 0.5 W.

  16. The method of planning the energy consumption for electricity market

    NASA Astrophysics Data System (ADS)

    Russkov, O. V.; Saradgishvili, S. E.

    2017-10-01

    The limitations of existing forecast models are defined. The proposed method is based on game theory, probability theory and forecasting of energy price relations. The new method forms the basis for planning the uneven energy consumption of an industrial enterprise. The ecological side of the proposed method is discussed. The program module that implements the method's algorithm is described. Successful tests of the method at an industrial enterprise are reported. The proposed method allows optimizing the difference between planned and actual energy consumption for every hour of a day. A conclusion about the applicability of the method for addressing economic and ecological challenges is made.

  17. A distributed control approach for power and energy management in a notional shipboard power system

    NASA Astrophysics Data System (ADS)

    Shen, Qunying

    The main goal of this thesis is to present a power control module (PCON) based approach for power and energy management and to examine its control capability in shipboard power system (SPS). The proposed control scheme is implemented in a notional medium voltage direct current (MVDC) integrated power system (IPS) for electric ship. To realize the control functions such as ship mode selection, generator launch schedule, blackout monitoring, and fault ride-through, a PCON based distributed power and energy management system (PEMS) is developed. The control scheme is proposed as two-layer hierarchical architecture with system level on the top as the supervisory control and zonal level on the bottom as the decentralized control, which is based on the zonal distribution characteristic of the notional MVDC IPS that was proposed as one of the approaches for Next Generation Integrated Power System (NGIPS) by Norbert Doerry. Several types of modules with different functionalities are used to derive the control scheme in detail for the notional MVDC IPS. Those modules include the power generation module (PGM) that controls the function of generators, the power conversion module (PCM) that controls the functions of DC/DC or DC/AC converters, etc. Among them, the power control module (PCON) plays a critical role in the PEMS. It is the core of the control process. PCONs in the PEMS interact with all the other modules, such as power propulsion module (PPM), energy storage module (ESM), load shedding module (LSHED), and human machine interface (HMI) to realize the control algorithm in PEMS. The proposed control scheme is implemented in real time using the real time digital simulator (RTDS) to verify its validity. To achieve this, a system level energy storage module (SESM) and a zonal level energy storage module (ZESM) are developed in RTDS to cooperate with PCONs to realize the control functionalities. In addition, a load shedding module which takes into account the reliability of power supply (in terms of quality of service) is developed. This module can supply uninterruptible power to the mission critical loads. In addition, a multi-agent system (MAS) based framework is proposed to implement the PCON based PEMS through a hardware setup that is composed of MAMBA boards and FPGA interface. Agents are implemented using Java Agent DEvelopment Framework (JADE). Various test scenarios were tested to validate the approach.

  18. Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy

    NASA Astrophysics Data System (ADS)

    Yang, Yu; Dong, Bin; Wen, Zaiwen

    2017-02-01

    In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique widely used in clinical application for cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated due to the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: the bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC) and its directional convexity. We solve the model by a two-stage algorithm: performing minimization with respect to the aperture shapes and the beam intensities alternately. Specifically, the aperture shapes are obtained by a greedy algorithm whose performance is enhanced by random sampling of the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotonic line search. We further improve the proposed algorithm by an incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with the state-of-the-art algorithms in terms of both computational time and quality of treatment planning.

  19. Performance advantages of maximum likelihood methods in PRBS-modulated time-of-flight electron energy loss spectroscopy

    NASA Astrophysics Data System (ADS)

    Yang, Zhongyu

    This thesis describes the design, experimental performance, and theoretical simulation of a novel time-of-flight analyzer that was integrated into a high resolution electron energy loss spectrometer (TOF-HREELS). First we examined the use of an interleaved comb chopper for chopping a continuous electron beam. Both static and dynamic behaviors were simulated theoretically and measured experimentally, with very good agreement. The finite penetration of the field beyond the plane of the chopper leads to non-ideal chopper response, which is characterized in terms of an "energy corruption" effect and a lead or lag in the time at which the beam responds to the chopper potential. Second we considered the recovery of spectra from pseudo-random binary sequence (PRBS) modulated TOF-HREELS data. The effects of the Poisson noise distribution and the non-ideal behavior of the "interleaved comb" chopper were simulated. We showed, for the first time, that maximum likelihood methods can be combined with PRBS modulation to achieve resolution enhancement, while properly accounting for the Poisson noise distribution and artifacts introduced by the chopper. Our results indicate that meV resolution, similar to that of modern high resolution electron energy loss spectrometers, can be achieved with a dramatic performance advantage over conventional, serial detection analyzers. To demonstrate the capabilities of the TOF-HREELS instrument, we made measurements on a highly oriented thin film polytetrafluoroethylene (PTFE) sample. We demonstrated that the TOF-HREELS can achieve a throughput advantage of a factor of 85 compared to the conventional HREELS instrument. Comparisons were made between the experimental results and theoretical simulations. We discuss various factors which affect inversion of PRBS modulated Time of Flight (TOF) data with the Lucy algorithm. Using simulations, we conclude that the convolution assumption was good under the conditions of our experiment. The chopper rise time, Poisson noise, and artifacts of the chopper response are evaluated. Finally, we conclude that the maximum likelihood algorithms are able to gain a multiplex advantage in PRBS modulation, despite the Poisson noise in the detector.
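    The "Lucy algorithm" mentioned above is the Richardson-Lucy maximum-likelihood iteration for Poisson-distributed counts; the sketch below applies it to a toy circulant (PRBS-like) modulation matrix. The chopper sequence, spectrum and counts are synthetic stand-ins, not instrument data.

```python
import numpy as np

def richardson_lucy(y, H, iters=300):
    """Richardson-Lucy (maximum likelihood for Poisson counts) recovery of a
    TOF spectrum x from modulated data y = H x + counting noise."""
    x = np.full(H.shape[1], y.mean() / max(H.sum(axis=1).mean(), 1e-12))
    norm = H.sum(axis=0)                                   # H^T 1
    for _ in range(iters):
        x *= (H.T @ (y / np.clip(H @ x, 1e-12, None))) / np.clip(norm, 1e-12, None)
    return x

# Toy setup: a pseudo-random binary chopper sequence circularly convolved with a
# two-peak TOF spectrum, observed with Poisson counting noise.
rng = np.random.default_rng(2)
seq = rng.integers(0, 2, 127).astype(float)                # stand-in for a PRBS
H = np.column_stack([np.roll(seq, k) for k in range(127)]) # circulant modulation matrix
t = np.arange(127)
truth = 800 * np.exp(-0.5 * ((t - 40) / 3.0) ** 2) + 300 * np.exp(-0.5 * ((t - 70) / 5.0) ** 2)
y = rng.poisson(H @ truth).astype(float)
est = richardson_lucy(y, H)
print(int(np.argmax(est)), int(np.argmax(truth)))          # peak positions should agree
```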

  20. Active module identification in intracellular networks using a memetic algorithm with a new binary decoding scheme.

    PubMed

    Li, Dong; Pan, Zhisong; Hu, Guyu; Zhu, Zexuan; He, Shan

    2017-03-14

    Active modules are connected regions in a biological network that show significant changes in expression under particular conditions. The identification of such modules is important because it may reveal the regulatory and signaling mechanisms associated with a given cellular response. In this paper, we propose a novel active module identification algorithm based on a memetic algorithm. We propose a novel encoding/decoding scheme to ensure the connectedness of the identified active modules. Based on this scheme, we also design and incorporate a local search operator into the memetic algorithm to improve its performance. The effectiveness of the proposed algorithm is validated on both small and large protein interaction networks.

  1. The Autism Diagnostic Observation Schedule, Module 4: Revised Algorithm and Standardized Severity Scores

    ERIC Educational Resources Information Center

    Hus, Vanessa; Lord, Catherine

    2014-01-01

    The recently published Autism Diagnostic Observation Schedule, 2nd edition (ADOS-2) includes revised diagnostic algorithms and standardized severity scores for modules used to assess younger children. A revised algorithm and severity scores are not yet available for Module 4, used with verbally fluent adults. The current study revises the Module 4…

  2. The Autism Diagnostic Observation Schedule, Module 4: Revised Algorithm and Standardized Severity Scores

    PubMed Central

    Hus, Vanessa; Lord, Catherine

    2014-01-01

    The Autism Diagnostic Observation Schedule, 2nd Edition includes revised diagnostic algorithms and standardized severity scores for modules used to assess children and adolescents of varying language abilities. Comparable revisions have not yet been applied to the Module 4, used with verbally fluent adults. The current study revises the Module 4 algorithm and calibrates raw overall and domain totals to provide metrics of ASD symptom severity. Sensitivity and specificity of the revised Module 4 algorithm exceeded 80% in the overall sample. Module 4 calibrated severity scores provide quantitative estimates of ASD symptom severity that are relatively independent of participant characteristics. These efforts increase comparability of ADOS scores across modules and should facilitate efforts to increase understanding of adults with ASD. PMID:24590409

  3. A Survey on Next-Generation Mixed Line Rate (MLR) and Energy-Driven Wavelength-Division Multiplexed (WDM) Optical Networks

    NASA Astrophysics Data System (ADS)

    Iyer, Sridhar

    2015-06-01

    With ever-increasing traffic demands, the infrastructure of the current 10 Gbps optical network needs to be enhanced. Further, since the energy crisis is a growing concern, new research topics need to be devised and technological solutions for energy conservation need to be investigated. In an all-optical mixed line rate (MLR) network, the feasibility of a lightpath is determined by the physical layer impairment (PLI) accumulation. In contrast to the PLI-aware routing and wavelength assignment (PLIA-RWA) algorithms applicable to a 10 Gbps wavelength-division multiplexed (WDM) network, a new routing, wavelength, and modulation format assignment (RWMFA) algorithm is required for the MLR optical network. With the rapid growth of energy consumption in Information and Communication Technologies (ICT), much attention has recently been devoted to "green" ICT solutions. This article presents a review of different RWMFA (PLIA-RWA) algorithms for MLR networks, and surveys the most relevant research activities aimed at minimizing energy consumption in optical networks. In essence, this article presents a comprehensive and timely survey of a growing field of research, as it covers most aspects of MLR and energy-driven optical networks. Hence, the author aims to provide a comprehensive reference for the growing base of researchers who will work on MLR and energy-driven optical networks in the upcoming years. Finally, the article also identifies several open problems for future research.

  4. Study on a low complexity adaptive modulation algorithm in OFDM-ROF system with sub-carrier grouping technology

    NASA Astrophysics Data System (ADS)

    Liu, Chong-xin; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Tian, Qing-hua; Tian, Feng; Wang, Yong-jun; Rao, Lan; Mao, Yaya; Li, Deng-ao

    2018-01-01

    During the last decade, the orthogonal frequency division multiplexing radio-over-fiber (OFDM-ROF) system with adaptive modulation technology has attracted great interest due to its capability of raising the spectral efficiency dramatically, reducing the effects of the fiber link or wireless channel, and improving the communication quality. In this study, based on a theoretical analysis of nonlinear distortion and frequency selective fading on the transmitted signal, a low-complexity adaptive modulation algorithm is proposed in combination with sub-carrier grouping technology. This algorithm achieves the optimal performance of the system by calculating the average combined signal-to-noise ratio of each group and dynamically adjusting the modulation format according to preset thresholds and the user's requirements. At the same time, the algorithm takes the sub-carrier group as the smallest unit in the initial bit allocation and the subsequent bit adjustment, so its complexity is only 1/M (where M is the number of sub-carriers in each group) of the Fischer algorithm, which is much smaller than many classic adaptive modulation algorithms, such as the Hughes-Hartogs and Chow algorithms, and is in line with the development direction of green and high-speed communication. Simulation results show that the performance of the OFDM-ROF system with the improved algorithm is much better than that without adaptive modulation, and the BER of the former is 10 to 100 times lower than that of the latter as the SNR increases. This low-complexity adaptive modulation algorithm is therefore extremely useful for the OFDM-ROF system.
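
    A hedged sketch of the group-wise adaptation idea described above: subcarriers are handled in groups, the average SNR of each group is compared against preset thresholds, and one modulation format is assigned per group. The thresholds, candidate formats, and group size below are illustrative assumptions only.

    ```python
    import numpy as np

    # (threshold in dB, format name, bits per symbol) -- illustrative values
    THRESHOLDS_DB = [(22.0, "64QAM", 6), (16.0, "16QAM", 4), (9.0, "QPSK", 2), (3.0, "BPSK", 1)]

    def assign_formats(snr_db, group_size):
        """Return a (format, bits-per-symbol) choice for every subcarrier group."""
        groups = snr_db.reshape(-1, group_size)
        assignments = []
        for g in groups:
            avg = 10 * np.log10(np.mean(10 ** (g / 10)))   # average combined SNR of the group
            for threshold, name, bits in THRESHOLDS_DB:
                if avg >= threshold:
                    assignments.append((name, bits))
                    break
            else:
                assignments.append(("off", 0))             # group disabled below the lowest threshold
        return assignments

    snr = np.random.default_rng(2).uniform(0, 30, 64)      # 64 subcarriers
    print(assign_formats(snr, group_size=8))
    ```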

  5. Symbolic Algebra Development for Higher-Order Electron Propagator Formulation and Implementation.

    PubMed

    Tamayo-Mendoza, Teresa; Flores-Moreno, Roberto

    2014-06-10

    Through the use of symbolic algebra, implemented in a program, the algebraic expressions of the elements of the self-energy matrix for the electron propagator at different orders were obtained. In addition, a module for the software package Lowdin was automatically generated. Second- and third-order electron propagator results have been calculated to test the correct operation of the program. It was found that the Fortran 90 modules obtained automatically with our algorithm succeeded in calculating ionization energies with the second- and third-order electron propagator in the diagonal approximation. The strategy for the development of this symbolic algebra program is described in detail. This represents a solid starting point for the automatic derivation and implementation of higher-order electron propagator methods.

  6. R&D of the CEPC scintillator-tungsten ECAL

    NASA Astrophysics Data System (ADS)

    Dong, M. Y.

    2018-03-01

    The Circular Electron Positron Collider (CEPC) has been proposed as a future Higgs factory. To meet the physics requirements, a particle flow algorithm-oriented calorimeter system with high energy resolution and precise reconstruction is considered. A sampling calorimeter with a scintillator-tungsten sandwich structure has been selected as one of the electromagnetic calorimeter (ECAL) options due to its good performance and relatively low cost. We present the design, testing and optimization of the scintillator module read out by a silicon photomultiplier (SiPM), including the design and development of the electronics. To estimate the performance of the scintillator and SiPM module for particles with different energies, a beam test of a mini detector prototype without tungsten shower material was performed at the E3 beamline of the Institute of High Energy Physics (IHEP). The results are consistent with expectations. These studies provide a reference and promote the development of the particle flow electromagnetic calorimeter for the CEPC.

  7. Current Status of Japan's Activity for GPM/DPR and Global Rainfall Map algorithm development

    NASA Astrophysics Data System (ADS)

    Kachi, M.; Kubota, T.; Yoshida, N.; Kida, S.; Oki, R.; Iguchi, T.; Nakamura, K.

    2012-04-01

    The Global Precipitation Measurement (GPM) mission is composed of two categories of satellites: 1) a Tropical Rainfall Measuring Mission (TRMM)-like non-sun-synchronous orbit satellite (the GPM Core Observatory); and 2) a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), which is being developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and a microwave radiometer provided by the National Aeronautics and Space Administration (NASA). The GPM Core Observatory will be launched in February 2014, and development of algorithms is underway. The DPR Level 1 algorithm, which provides the DPR L1B product including received power, will be developed by JAXA. The first version was submitted in March 2011. Development of the second version of the DPR L1B algorithm (Version 2) will be completed in March 2012. The Version 2 algorithm includes all basic functions, a preliminary database, an HDF5 interface, and minimum error handling. Pre-launch code will be developed by the end of October 2012. The DPR Level 2 algorithm is being developed by the DPR Algorithm Team led by Japan, which is under the NASA-JAXA Joint Algorithm Team. The first version of the GPM/DPR Level-2 Algorithm Theoretical Basis Document was completed in November 2010. The second version, the "Baseline code", was completed in January 2012. The Baseline code includes a main module and eight basic sub-modules (Preparation, Vertical Profile, Classification, SRT, DSD, Solver, Input, and Output modules). The Level-2 algorithms will provide KuPR-only products, KaPR-only products, and Dual-frequency Precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright band height. It is important to develop an algorithm applicable to both TRMM/PR and KuPR in order to produce a long-term continuous data set. Pre-launch code will be developed by autumn 2012. The Global Rainfall Map algorithm has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping of Precipitation (GSMaP) project between 2002 and 2007, and the near-real-time version operating at JAXA since 2007. The "Baseline code" used the current operational GSMaP code (V5.222), and development was completed in January 2012. Pre-launch code will be developed by autumn 2012, including an update of the databases for rain type classification and rain/no-rain classification, and the introduction of rain-gauge correction.

  8. Asymptotic Cramer-Rao bounds for Morlet wavelet filter bank transforms of FM signals

    NASA Astrophysics Data System (ADS)

    Scheper, Richard

    2002-03-01

    Wavelet filter banks are potentially useful tools for analyzing and extracting information from frequency modulated (FM) signals in noise. Chief among the advantages of such filter banks is the tendency of wavelet transforms to concentrate signal energy while simultaneously dispersing noise energy over the time-frequency plane, thus raising the effective signal to noise ratio of filtered signals. Over the past decade, much effort has gone into devising new algorithms to extract the relevant information from transformed signals while identifying and discarding the transformed noise. Therefore, estimates of the ultimate performance bounds on such algorithms would serve as valuable benchmarks in the process of choosing optimal algorithms for given signal classes. Discussed here is the specific case of FM signals analyzed by Morlet wavelet filter banks. By making use of the stationary phase approximation of the Morlet transform, and assuming that the measured signals are well resolved digitally, the asymptotic form of the Fisher Information Matrix is derived. From this, Cramer-Rao bounds are analytically derived for simple cases.
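
    For reference, the generic Fisher-information/Cramer-Rao relation underlying such bounds is stated below; the paper's specific asymptotic form of the information matrix for Morlet-transformed FM signals is not reproduced here.

    ```latex
    % Generic Fisher-information / Cramer-Rao relation (not the paper's specific asymptotic form)
    \[
      \bigl[\mathbf{I}(\boldsymbol{\theta})\bigr]_{ij}
        = -\,\mathbb{E}\!\left[
            \frac{\partial^{2}\ln p(\mathbf{x};\boldsymbol{\theta})}
                 {\partial\theta_{i}\,\partial\theta_{j}}\right],
      \qquad
      \operatorname{var}\bigl(\hat{\theta}_{i}\bigr)
        \;\ge\; \bigl[\mathbf{I}^{-1}(\boldsymbol{\theta})\bigr]_{ii}.
    \]
    ```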

  9. Real time target allocation in cooperative unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kudleppanavar, Ganesh

    The prolific development of Unmanned Aerial Vehicles (UAVs) in recent years has the potential to provide tremendous advantages in military, commercial and law enforcement applications. While safety and performance take precedence in the development lifecycle, autonomous operations and, in particular, cooperative missions have the ability to significantly enhance the usability of these vehicles. The success of cooperative missions relies on the optimal allocation of targets while taking into consideration the resource limitations of each vehicle. The task allocation process can be centralized or decentralized. This effort presents the development of a real-time target allocation algorithm that considers the available stored energy in each vehicle while minimizing communication between the UAVs. The algorithm utilizes a nearest neighbor search to locate new targets with respect to existing targets. Simulations show that this novel algorithm compares favorably to the mixed integer linear programming method, which is computationally more expensive. The implementation of this algorithm on Arduino and Xbee wireless modules shows the capability of the algorithm to execute efficiently on hardware with minimal computational complexity.
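
    A simplified sketch of the allocation idea described above: each new target is matched, via a nearest-neighbour search over existing waypoints, to the UAV that can reach it with the least additional energy, subject to its remaining stored energy. The energy model and constants are assumptions for illustration only, not the thesis implementation.

    ```python
    import math

    ENERGY_PER_KM = 1.0  # assumed consumption (units of stored energy per km)

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def allocate(target, uavs):
        """uavs: list of dicts with 'route' (list of waypoints) and 'energy' (remaining charge)."""
        best, best_cost = None, float("inf")
        for uav in uavs:
            nearest = min(uav["route"], key=lambda wp: dist(wp, target))  # nearest-neighbour search
            cost = dist(nearest, target) * ENERGY_PER_KM
            if cost < best_cost and uav["energy"] >= cost:
                best, best_cost = uav, cost
        if best is not None:
            best["route"].append(target)
            best["energy"] -= best_cost
        return best

    uavs = [{"route": [(0, 0)], "energy": 30.0}, {"route": [(10, 10)], "energy": 5.0}]
    allocate((9, 9), uavs)
    print(uavs)
    ```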

  10. An evaluation to design high performance pinhole array detector module for four head SPECT: a simulation study

    NASA Astrophysics Data System (ADS)

    Rahman, Tasneem; Tahtali, Murat; Pickering, Mark R.

    2014-09-01

    The purpose of this study is to derive optimized parameters for a detector module employing an off-the-shelf X-ray camera and a pinhole array collimator applicable to a range of different SPECT systems. Monte Carlo simulations using the Geant4 application for tomographic emission (GATE) were performed to estimate the performance of the pinhole array collimators, which was compared to that of a low energy high resolution (LEHR) parallel-hole collimator in a four-head SPECT system. A detector module was simulated with a 48 mm by 48 mm active area and 1 mm, 1.6 mm and 2 mm pinhole aperture sizes at 0.48 mm pitch on a tungsten plate. Perpendicular lead septa were employed to verify overlapping and non-overlapping projections against a proper acceptance angle without lead septa. A uniform cylindrical water phantom was used to evaluate the performance of the proposed four-head SPECT system with the pinhole array detector module. For each head, 100 pinhole configurations were evaluated based on sensitivity and detection efficiency for 140 keV γ-rays, and compared to the LEHR parallel-hole collimator. SPECT images were reconstructed with a filtered back projection (FBP) algorithm in which neither scatter nor attenuation corrections were performed. Development of a better reconstruction algorithm for this specific system is in progress. Nevertheless, the activity distribution was well visualized using the backprojection algorithm. In this study, we have carried out several quantitative and comparative analyses of a pinhole array imaging system providing high detection efficiency and better system sensitivity over a large FOV compared to the conventional four-head SPECT system. The proposed detector module is expected to provide improved performance in various SPECT imaging applications.

  11. Cellular telephone-based radiation sensor and wide-area detection network

    DOEpatents

    Craig, William W [Pittsburg, CA; Labov, Simon E [Berkeley, CA

    2006-12-12

    A network of radiation detection instruments, each having a small solid state radiation sensor module integrated into a cellular phone for providing radiation detection data and analysis directly to a user. The sensor module includes a solid-state crystal bonded to an ASIC readout providing a low cost, low power, light weight compact instrument to detect and measure radiation energies in the local ambient radiation field. In particular, the photon energy, time of event, and location of the detection instrument at the time of detection is recorded for real time transmission to a central data collection/analysis system. The collected data from the entire network of radiation detection instruments are combined by intelligent correlation/analysis algorithms which map the background radiation and detect, identify and track radiation anomalies in the region.

  12. Cellular telephone-based radiation detection instrument

    DOEpatents

    Craig, William W [Pittsburg, CA; Labov, Simon E [Berkeley, CA

    2011-06-14

    A network of radiation detection instruments, each having a small solid state radiation sensor module integrated into a cellular phone for providing radiation detection data and analysis directly to a user. The sensor module includes a solid-state crystal bonded to an ASIC readout providing a low cost, low power, light weight compact instrument to detect and measure radiation energies in the local ambient radiation field. In particular, the photon energy, time of event, and location of the detection instrument at the time of detection is recorded for real time transmission to a central data collection/analysis system. The collected data from the entire network of radiation detection instruments are combined by intelligent correlation/analysis algorithms which map the background radiation and detect, identify and track radiation anomalies in the region.

  13. Cellular telephone-based wide-area radiation detection network

    DOEpatents

    Craig, William W [Pittsburg, CA; Labov, Simon E [Berkeley, CA

    2009-06-09

    A network of radiation detection instruments, each having a small solid state radiation sensor module integrated into a cellular phone for providing radiation detection data and analysis directly to a user. The sensor module includes a solid-state crystal bonded to an ASIC readout providing a low cost, low power, light weight compact instrument to detect and measure radiation energies in the local ambient radiation field. In particular, the photon energy, time of event, and location of the detection instrument at the time of detection is recorded for real time transmission to a central data collection/analysis system. The collected data from the entire network of radiation detection instruments are combined by intelligent correlation/analysis algorithms which map the background radiation and detect, identify and track radiation anomalies in the region.

  14. Modelling of the Installed Capacity of Landfill Power Stations

    NASA Astrophysics Data System (ADS)

    Blumberga, D.; Kuplais, Ģ.; Veidenbergs, I.; Dāce, E.; Gušča, J.

    2009-01-01

    More and more landfills are being developed in which biogas is produced and accumulated, and this biogas can be used for electricity production. Currently, due to technological reasons, electricity generation from biogas has a very low level of efficiency. In order to develop this type of energy production, it is important to find answers to various engineering, economic and ecological questions. The paper outlines the results obtained by creating a model for the calculation of electricity production in landfill power stations and by testing it at the municipal solid waste landfill "Daibe". The algorithm of the mathematical model for the operation of a biogas power station consists of four main modules: an initial data module, an engineering calculation module, a tariff calculation module, and a climate calculation module. As a result, the optimum capacity of the power station at the landfill "Daibe" is determined, and an analysis of the landfill's economic data and cost-effectiveness is conducted.

  15. Tests of the module array of the ECAL0 electromagnetic calorimeter for the COMPASS experiment with the electron beam at ELSA

    NASA Astrophysics Data System (ADS)

    Anfimov, N.; Anosov, V.; Barth, J.; Chalyshev, V.; Chirikov-Zorin, I.; Dziewiecki, M.; Elsner, D.; Frolov, V.; Frommberger, F.; Guskov, A.; Hillert, W.; Klein, F.; Krumshteyn, Z.; Kurjata, R.; Marzec, J.; Nagaytsev, A.; Olchevski, A.; Orlov, I.; Rezinko, T.; Rybnikov, A.; Rychter, A.; Selyunin, A.; Zaremba, K.; Ziembicki, M.

    2015-07-01

    The array of 3 × 3 modules of the electromagnetic calorimeter ECAL0 of the COMPASS experiment at CERN has been tested with an electron beam at the ELSA facility (Germany). The dependence of the response and the energy resolution of the calorimeter on the angle of incidence of the electron beam has been studied. Good agreement between the experimental data and the results of Monte Carlo simulation has been obtained. This will significantly expand the use of simulation to optimize event reconstruction algorithms.

  16. Investigation of FPGA-Based Real-Time Adaptive Digital Pulse Shaping for High-Count-Rate Applications

    NASA Astrophysics Data System (ADS)

    Saxena, Shefali; Hawari, Ayman I.

    2017-07-01

    Digital signal processing techniques have been widely used in radiation spectrometry to provide improved stability and performance, with a compact physical size, compared with traditional analog signal processing. In this paper, field-programmable gate array (FPGA)-based adaptive digital pulse shaping techniques are investigated for real-time signal processing. A National Instruments (NI) 5761 14-bit, 250-MS/s adapter module is used to digitize the preamplifier pulses of a high-purity germanium (HPGe) detector. Digital pulse processing algorithms are implemented on the NI PXIe-7975R reconfigurable FPGA (Kintex-7) using the LabVIEW FPGA module. Based on the time separation between successive input pulses, the adaptive shaping algorithm selects the optimum shaping parameters (rise time and flat-top time of the trapezoid-shaping filter) for each incoming signal. A digital Sallen-Key low-pass filter is implemented to enhance the signal-to-noise ratio and reduce baseline drift in trapezoid shaping. A recursive trapezoid-shaping filter algorithm is employed for pole-zero compensation of the exponentially decaying (with two decay constants) preamplifier pulses of the HPGe detector. It allows extraction of pulse height information at the beginning of each pulse, thereby reducing pulse pileup and increasing throughput. The algorithms for the RC-CR2 timing filter, baseline restoration, pile-up rejection, and pulse height determination are digitally implemented for radiation spectroscopy. Traditionally, under high-count-rate conditions, a shorter shaping time is preferred to achieve high throughput, which deteriorates energy resolution. In this paper, experimental results are presented for varying count-rate and pulse shaping conditions. Using adaptive shaping, increased throughput is achieved while preserving the energy resolution observed with the longer shaping times.
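
    The recursive trapezoid shaper mentioned above can be illustrated with the standard Jordanov-style single-decay recursion below; this is a generic sketch, not the paper's two-decay-constant FPGA implementation, and the pulse parameters are made up for the example.

    ```python
    import numpy as np

    def trapezoid_filter(v, k, l, M):
        """Recursive trapezoidal shaper (Jordanov-style recursion).

        v : digitized preamplifier waveform (1-D array)
        k : rise time in samples
        l : rise time + flat-top in samples (l >= k)
        M : pole-zero constant, 1/(exp(Ts/tau) - 1) for decay time tau and sampling period Ts
        """
        n = len(v)
        vp = np.concatenate([np.zeros(k + l), np.asarray(v, float)])  # pad so v[n - i] = 0 for n < i
        d = vp[k + l:] - vp[l:n + l] - vp[k:n + k] + vp[:n]           # d[n] = v[n]-v[n-k]-v[n-l]+v[n-k-l]
        p = np.cumsum(d)                                              # p[n] = p[n-1] + d[n]
        s = np.cumsum(p + M * d)                                      # s[n] = s[n-1] + p[n] + M*d[n]
        return s / (k * (M + 1.0))   # flat-top then equals the amplitude of an ideal single-exponential pulse

    # toy pulse: unit amplitude, decay constant 500 samples, starting at sample 100
    tau, Ts = 500.0, 1.0
    t = np.arange(2000)
    pulse = np.where(t >= 100, np.exp(-(t - 100) / tau), 0.0)
    shaped = trapezoid_filter(pulse, k=200, l=300, M=1.0 / (np.exp(Ts / tau) - 1.0))
    print(round(float(shaped.max()), 3))   # ~1.0
    ```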

  17. Sun-Relative Pointing for Dual-Axis Solar Trackers Employing Azimuth and Elevation Rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riley, Daniel; Hansen, Clifford W.

    Dual axis trackers employing azimuth and elevation rotations are common in the field of photovoltaic (PV) energy generation. Accurate sun-tracking algorithms are widely available. However, a steering algorithm has not been available to accurately point the tracker away from the sun such that a vector projection of the sun beam onto the tracker face falls along a desired path relative to the tracker face. We have developed an algorithm which produces the appropriate azimuth and elevation angles for a dual axis tracker when given the sun position, desired angle of incidence, and the desired projection of the sun beam onto the tracker face. Development of this algorithm was inspired by the need to accurately steer a tracker to desired sun-relative positions in order to better characterize the electro-optical properties of PV and CPV modules.

  18. Adaptive software-defined coded modulation for ultra-high-speed optical transport

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Zhang, Yequn

    2013-10-01

    In optically-routed networks, different wavelength channels carrying the traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs) and signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide target BER regardless of the data destination we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the appropriate code rate matching to the OSNR range that current channel OSNR falls into. To avoid frame synchronization issues, we keep the codeword length fixed independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them will be described in this invited paper. Instead of conventional QAM based modulation schemes, we employ the signal constellations obtained by optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform the simultaneous rate adaptation and signal constellation size selection so that the product of number of bits per symbol × code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using the 4D MAP detection, combined with LDPC coding, in a turbo equalization fashion. Finally, to solve the problems related to the limited bandwidth of information infrastructure, high energy consumption, and heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme, which in addition to amplitude, phase, and polarization state employs the spatial modes as additional basis functions for multidimensional coded-modulation.
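
    A minimal sketch of the rate-adaptation rule described above: given an OSNR estimate from the monitoring channel, pick the constellation/code-rate pair whose bits-per-symbol times code-rate product is the largest value not exceeding an estimated capacity. The OSNR-to-capacity mapping and the candidate set are illustrative assumptions, not the paper's OSCD constellations or LDPC code set.

    ```python
    import math

    # candidate (name, bits per symbol, code rate) pairs -- illustrative only
    CANDIDATES = [(name, bps, rate)
                  for name, bps in [("QPSK", 2), ("8QAM", 3), ("16QAM", 4), ("64QAM", 6)]
                  for rate in (0.8, 0.85, 0.9)]

    def capacity_estimate(osnr_db):
        """Crude Shannon-style capacity estimate in bits per symbol (assumption)."""
        snr = 10 ** (osnr_db / 10)
        return math.log2(1 + snr)

    def select_scheme(osnr_db):
        cap = capacity_estimate(osnr_db)
        feasible = [(name, bps, rate) for name, bps, rate in CANDIDATES if bps * rate <= cap]
        return max(feasible, key=lambda c: c[1] * c[2]) if feasible else None

    for osnr in (8, 12, 18):
        print(osnr, "dB ->", select_scheme(osnr))
    ```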

  19. Optimal spiral phase modulation in Gerchberg-Saxton algorithm for wavefront reconstruction and correction

    NASA Astrophysics Data System (ADS)

    Baránek, M.; Běhal, J.; Bouchal, Z.

    2018-01-01

    In the phase retrieval applications, the Gerchberg-Saxton (GS) algorithm is widely used for the simplicity of implementation. This iterative process can advantageously be deployed in the combination with a spatial light modulator (SLM) enabling simultaneous correction of optical aberrations. As recently demonstrated, the accuracy and efficiency of the aberration correction using the GS algorithm can be significantly enhanced by a vortex image spot used as the target intensity pattern in the iterative process. Here we present an optimization of the spiral phase modulation incorporated into the GS algorithm.
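
    A minimal Gerchberg-Saxton sketch in the spirit of the setup above: the SLM and focal planes are related by an FFT, amplitudes are imposed in each plane while retrieved phases are kept, and the target intensity is taken to be the focal spot of a beam carrying a spiral (vortex) phase. Grid size, beam width, and vortex charge are assumptions; the paper's optimization of the spiral modulation is not reproduced.

    ```python
    import numpy as np

    def vortex_target(n, charge):
        """Intensity pattern of a focused Gaussian beam carrying a spiral phase (the 'vortex spot')."""
        y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
        beam = np.exp(1j * charge * np.arctan2(y, x) - (x ** 2 + y ** 2) / (n / 8) ** 2)
        return np.abs(np.fft.fftshift(np.fft.fft2(beam)))

    def gerchberg_saxton(source_amp, target_amp, iterations=100):
        """Plain GS iteration between the SLM plane and the focal plane (related by an FFT)."""
        field = source_amp * np.exp(2j * np.pi * np.random.rand(*source_amp.shape))
        for _ in range(iterations):
            focal = np.fft.fftshift(np.fft.fft2(field))
            focal = target_amp * np.exp(1j * np.angle(focal))   # impose target amplitude, keep retrieved phase
            field = np.fft.ifft2(np.fft.ifftshift(focal))
            field = source_amp * np.exp(1j * np.angle(field))   # impose source amplitude
        return np.angle(field)                                   # phase mask to display on the SLM

    n = 128
    phase_mask = gerchberg_saxton(np.ones((n, n)), vortex_target(n, charge=1), iterations=50)
    ```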

  20. Development of frequency modulation reflectometer for KSTAR tokamak: Data analysis based on Gaussian derivative wavelet

    NASA Astrophysics Data System (ADS)

    Seo, Seong-Heon; Lee, K. D.

    2012-10-01

    A frequency modulation reflectometer has been developed to measure the density profile of the KSTAR tokamak. It has two channels operating in X-mode in the frequency ranges of the Q band (33-50 GHz) and V band (50-75 GHz). The full band is swept in 20 μs. The mixer output is directly digitized at a sampling rate of 100 MSamples/s. A new phase detection algorithm is developed to analyze signals that are both amplitude and frequency modulated. The algorithm is benchmarked on a synthesized amplitude-modulated, frequency-modulated signal. This new algorithm is applied to the data analysis of the KSTAR reflectometer.

  1. PandaX-III neutrinoless double beta decay experiment

    NASA Astrophysics Data System (ADS)

    Wang, Shaobo; PandaX-III Collaboration

    2017-09-01

    The PandaX-III experiment uses high pressure Time Projection Chambers (TPCs) to search for the neutrinoless double-beta decay of Xe-136 with high energy resolution and sensitivity at the China Jinping Underground Laboratory II (CJPL-II). Fine-pitch Microbulk Micromegas will be used for charge amplification and readout in order to reconstruct both the energy and the track of the neutrinoless double-beta decay event. In the first phase of the experiment, the detector, which contains 200 kg of 90% Xe-136 enriched gas operated at 10 bar, will be immersed in a large water tank to ensure 5 m of water shielding. For the second phase, a ton-scale experiment with multiple TPCs will be constructed to improve the detection probability and sensitivity. A 20-kg scale prototype TPC with 7 Micromegas modules has been built to optimize the design of the Micromegas readout module, study the energy calibration of the TPC and develop algorithms for 3D track reconstruction.

  2. The Autism Diagnostic Observation Schedule, Module 4: Application of the Revised Algorithms in an Independent, Well-Defined, Dutch Sample (n = 93).

    PubMed

    de Bildt, Annelies; Sytema, Sjoerd; Meffert, Harma; Bastiaansen, Jojanneke A C J

    2016-01-01

    This study examined the discriminative ability of the revised Autism Diagnostic Observation Schedule module 4 algorithm (Hus and Lord in J Autism Dev Disord 44(8):1996-2012, 2014) in 93 Dutch males with Autism Spectrum Disorder (ASD), schizophrenia, psychopathy or controls. The discriminative ability of the revised algorithm ASD cut-off resembled that of the original algorithm ASD cut-off: highly specific for psychopathy and controls, with lower sensitivity than Hus and Lord (2014; i.e., ASD .61, AD .53). The revised algorithm AD cut-off improved sensitivity over the original algorithm. Discriminating ASD from schizophrenia remained challenging, but the better-balanced sensitivity (.53) and specificity (.78) of the revised algorithm AD cut-off may aid clinicians' differential diagnosis. The findings support using the revised algorithm, which is conceptually consistent with the other modules, thus improving comparability across the lifespan.

  3. Investigation of periodically driven systems by x-ray absorption spectroscopy using asynchronous data collection mode

    NASA Astrophysics Data System (ADS)

    Singh, H.; Donetsky, D.; Liu, J.; Attenkofer, K.; Cheng, B.; Trelewicz, J. R.; Lubomirsky, I.; Stavitski, E.; Frenkel, A. I.

    2018-04-01

    We report the development, testing, and demonstration of a setup for modulation excitation spectroscopy experiments at the Inner Shell Spectroscopy beamline of National Synchrotron Light Source - II. A computer algorithm and dedicated software were developed for asynchronous data processing and analysis. We demonstrate the reconstruction of X-ray absorption spectra for different time points within the modulation pulse using a model system. This setup and the software are intended for a broad range of functional materials which exhibit structural and/or electronic responses to the external stimulation, such as catalysts, energy and battery materials, and electromechanical devices.

  4. Evaluation of the PV energy production after 12-years of operating

    NASA Astrophysics Data System (ADS)

    Bouchakour, Salim; Arab, Amar Hadj; Abdeladim, Kamel; Boulahchiche, Saliha; Amrouche, Said Ould; Razagui, Abdelhak

    2018-05-01

    This paper presents a simple way to approximately evaluate photovoltaic (PV) array performance degradation. The studied PV arrays have been connected to the local electric grid at the Centre de Developpement des Energies Renouvelables (CDER) in Algiers, Algeria, since June 2004. The PV module model used takes into consideration the module temperature, the effective solar irradiance, the electrical characteristics provided by the manufacturer's data sheet, and the evaluation of the performance coefficient. For the dynamic behavior, the Linear Reoriented Coordinates Method (LRCM) is used to estimate the maximum power point (MPP). The performance coefficient is evaluated, on the one hand, under standard test conditions (STC) to estimate the DC energy according to the manufacturer's data and, on the other hand, under real conditions using both the monitored data and the LM optimization algorithm, which allows a good degree of accuracy in the estimated DC energy. The application of the developed modeling procedure to the analysis of the monitored data is expected to improve understanding and assessment of the performance degradation of the PV arrays after 12 years of operation.

  5. Voltage equalization of an ultracapacitor module by cell grouping using number partitioning algorithm

    NASA Astrophysics Data System (ADS)

    Oyarbide, E.; Bernal, C.; Molina, P.; Jiménez, L. A.; Gálvez, R.; Martínez, A.

    2016-01-01

    Ultracapacitors are low voltage devices and therefore, for practical applications, they need to be used in modules of series-connected cells. Because of the inherent manufacturing tolerance of the capacitance parameter of each cell, and as the maximum voltage value cannot be exceeded, the module requires inter-cell voltage equalization. If the intended application suffers repeated fast charging/discharging cycles, active equalization circuits must be rated to full power, and thus the module becomes expensive. Previous work shows that a series connection of several sets of paralleled ultracapacitors minimizes the dispersion of equivalent capacitance values, and also the voltage differences between capacitors. Thus the overall life expectancy is improved. This paper proposes a method to distribute ultracapacitors with a number partitioning-based strategy to reduce the dispersion between equivalent submodule capacitances. Thereafter, the total amount of stored energy and/or the life expectancy of the device can be considerably improved.
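
    An illustrative greedy (largest-first) number-partitioning sketch for the cell-grouping idea above: cells are sorted by measured capacitance and each is added to the submodule with the smallest running total, so the paralleled-group capacitances end up nearly equal. The capacitance values are invented, and the paper's exact partitioning procedure may differ.

    ```python
    def group_cells(capacitances, n_groups):
        """Greedy largest-first partition of cells into n_groups paralleled submodules."""
        groups = [[] for _ in range(n_groups)]
        totals = [0.0] * n_groups
        for idx, c in sorted(enumerate(capacitances), key=lambda t: -t[1]):
            g = totals.index(min(totals))     # submodule with the smallest capacitance so far
            groups[g].append(idx)
            totals[g] += c                    # paralleled cells: capacitances add
        return groups, totals

    cells = [98.5, 101.2, 95.4, 103.0, 99.1, 100.7, 97.8, 102.3]   # farads, with tolerance spread
    print(group_cells(cells, n_groups=4))
    ```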

  6. Model documentation: Natural gas transmission and distribution model of the National Energy Modeling System. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-02-17

    The Natural Gas Transmission and Distribution Model (NGTDM) is the component of the National Energy Modeling System (NEMS) that is used to represent the domestic natural gas transmission and distribution system. NEMS was developed in the Office of Integrated Analysis and Forecasting of the Energy Information Administration (EIA). NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the EIA and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. The NGTDM is the model within the NEMS that represents the transmission, distribution, and pricing of natural gas. The model also includes representations of the end-use demand for natural gas, the production of domestic natural gas, and the availability of natural gas traded on the international market based on information received from other NEMS models. The NGTDM determines the flow of natural gas in an aggregate, domestic pipeline network, connecting domestic and foreign supply regions with 12 demand regions. The methodology employed allows the analysis of impacts of regional capacity constraints in the interstate natural gas pipeline network and the identification of pipeline capacity expansion requirements. There is an explicit representation of core and noncore markets for natural gas transmission and distribution services, and the key components of pipeline tariffs are represented in a pricing algorithm. Natural gas pricing and flow patterns are derived by obtaining a market equilibrium across the three main elements of the natural gas market: the supply element, the demand element, and the transmission and distribution network that links them. The NGTDM consists of four modules: the Annual Flow Module, the Capacity Expansion Module, the Pipeline Tariff Module, and the Distributor Tariff Module. A model abstract is provided in Appendix A.

  7. Novel Modulation Method for Multidirectional Matrix Converter

    PubMed Central

    Misron, Norhisam; Aris, Ishak Bin; Yamada, Hiroaki

    2014-01-01

    This study presents a new modulation method for the multidirectional matrix converter (MDMC), based on direct duty ratio pulse width modulation (DDPWM). In this study, a new structure of MDMC has been proposed to control the power flow direction through a stand-alone battery-based system and hybrid vehicle. The modulation method is based on the concept of the average voltage over one switching period. Therefore, in order to determine the duty ratio for each switch, the instantaneous input voltages are captured and continuously compared with a triangular waveform. By selecting the proper switching pattern and changing the slope of the carriers, a sinusoidal input current can be synthesized with high power factor and the desired output voltage. The proposed system increases the discharging time of the battery by injecting power into the system from the generator and the battery at the same time, which makes the battery life longer and saves more energy. This paper also derives the necessary equations for the proposed modulation method, together with a detailed analysis of the modulation algorithm. The theoretical and modulation concepts presented have been verified in MATLAB simulation. PMID:25298969

  8. SAM Photovoltaic Model Technical Reference 2016 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilman, Paul; DiOrio, Nicholas A; Freeman, Janine M

    This manual describes the photovoltaic performance model in the System Advisor Model (SAM) software, Version 2016.3.14 Revision 4 (SSC Version 160). It is an update to the 2015 edition of the manual, which describes the photovoltaic model in SAM 2015.1.30 (SSC 41). This new edition includes corrections of errors in the 2015 edition and descriptions of new features introduced in SAM 2016.3.14, including: a 3D shade calculator; a battery storage model; DC power optimizer loss inputs; a snow loss model; a plane-of-array irradiance input from weather file option; support for sub-hourly simulations; self-shading that works with all four subarrays and uses the same algorithm for fixed arrays and one-axis tracking; a linear self-shading algorithm for thin-film modules; and loss percentages that replace derate factors. The photovoltaic performance model is one of the modules in the SAM Simulation Core (SSC), which is part of both SAM and the SAM SDK. SAM is a user-friendly desktop application for analysis of renewable energy projects. The SAM SDK (Software Development Kit) is for developers writing their own renewable energy analysis software based on SSC. This manual is written for users of both SAM and the SAM SDK wanting to learn more about the details of SAM's photovoltaic model.

  9. Boosted object hardware trigger development and testing for the Phase I upgrade of the ATLAS Experiment

    NASA Astrophysics Data System (ADS)

    Stark, Giordon; Atlas Collaboration

    2015-04-01

    The Global Feature Extraction (gFEX) module is a Level 1 jet trigger system planned for installation in ATLAS during the Phase 1 upgrade in 2018. The gFEX selects large-radius jets for capturing Lorentz-boosted objects by means of wide-area jet algorithms refined by subjet information. The architecture of the gFEX permits event-by-event local pile-up suppression for these jets using the same subtraction techniques developed for offline analyses. The gFEX architecture is also suitable for other global event algorithms such as missing transverse energy (MET), centrality for heavy ion collisions, and ``jets without jets.'' The gFEX will use four processor FPGAs to perform calculations on the incoming data and a hybrid APU-FPGA for slow control of the module. The gFEX is unique in both design and implementation; it substantially enhances the selectivity of the L1 trigger and increases sensitivity to key physics channels.

  10. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-Site Study

    ERIC Educational Resources Information Center

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L.; Yerys, Benjamin E.; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised…

  11. An iterative network partition algorithm for accurate identification of dense network modules

    PubMed Central

    Sun, Siqi; Dong, Xinran; Fu, Yao; Tian, Weidong

    2012-01-01

    A key step in network analysis is to partition a complex network into dense modules. Currently, modularity is one of the most popular benefit functions used to partition network modules. However, recent studies suggested that it has an inherent limitation in detecting dense network modules. In this study, we observed that despite the limitation, modularity has the advantage of preserving the primary network structure of the undetected modules. Thus, we have developed a simple iterative Network Partition (iNP) algorithm to partition a network. The iNP algorithm provides a general framework in which any modularity-based algorithm can be implemented in the network partition step. Here, we tested iNP with three modularity-based algorithms: multi-step greedy (MSG), spectral clustering and Qcut. Compared with the original three methods, iNP achieved a significant improvement in the quality of network partition in a benchmark study with simulated networks, identified more modules with significantly better enrichment of functionally related genes in both yeast protein complex network and breast cancer gene co-expression network, and discovered more cancer-specific modules in the cancer gene co-expression network. As such, iNP should have a broad application as a general method to assist in the analysis of biological networks. PMID:22121225
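
    A sketch of the iterative-partition framework described above, using networkx's greedy modularity method as the plug-in partitioner (the paper evaluates MSG, spectral clustering and Qcut instead): each detected module is itself re-partitioned until it no longer splits.

    ```python
    import networkx as nx
    from networkx.algorithms import community

    def iterative_partition(graph, min_size=3):
        """Recursively re-partition each detected module with a modularity-based method."""
        final = []
        for part in community.greedy_modularity_communities(graph):
            sub = graph.subgraph(part)
            if len(part) <= min_size or sub.number_of_edges() == 0:
                final.append(set(part))
                continue
            subparts = community.greedy_modularity_communities(sub)
            if len(subparts) > 1:                  # module still splits: recurse into it
                final.extend(iterative_partition(sub, min_size))
            else:
                final.append(set(part))
        return final

    G = nx.karate_club_graph()
    modules = iterative_partition(G)
    print([sorted(m) for m in modules])
    ```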

  12. Expanding Metabolic Engineering Algorithms Using Feasible Space and Shadow Price Constraint Modules

    PubMed Central

    Tervo, Christopher J.; Reed, Jennifer L.

    2014-01-01

    While numerous computational methods have been developed that use genome-scale models to propose mutants for the purpose of metabolic engineering, they generally compare mutants based on a single criterion (e.g., production rate at a mutant's maximum growth rate). As such, these approaches remain limited in their ability to include multiple complex engineering constraints. To address this shortcoming, we have developed feasible space and shadow price constraint (FaceCon and ShadowCon) modules that can be added to existing mixed integer linear adaptive evolution metabolic engineering algorithms, such as OptKnock and OptORF. These modules allow strain designs to be identified amongst a set of multiple metabolic engineering algorithm solutions that are capable of high chemical production while also satisfying additional design criteria. We describe the various module implementations and their potential applications to the field of metabolic engineering. We then incorporated these modules into the OptORF metabolic engineering algorithm. Using an Escherichia coli genome-scale model (iJO1366), we generated different strain designs for the anaerobic production of ethanol from glucose, thus demonstrating the tractability and potential utility of these modules in metabolic engineering algorithms. PMID:25478320

  13. Optimization in Radiation Therapy: Applications in Brachytherapy and Intensity Modulated Radiation Therapy

    NASA Astrophysics Data System (ADS)

    McGeachy, Philip David

    Over 50% of cancer patients require radiation therapy (RT). RT is an optimization problem requiring maximization of the radiation damage to the tumor while minimizing the harm to the healthy tissues. This dissertation focuses on two main RT optimization problems: 1) brachytherapy and 2) intensity modulated radiation therapy (IMRT). The brachytherapy research involved solving a non-convex optimization problem by creating an open-source genetic algorithm optimizer to determine the optimal radioactive seed distribution for a given set of patient volumes and constraints, both dosimetric- and implant-based. The optimizer was tested for a set of 45 prostate brachytherapy patients. While all solutions met the clinical standards, they also benchmarked favorably with those generated by a standard commercial solver. Compared to its compatriot, the salient features of the generated solutions were: slightly reduced prostate coverage, lower dose to the urethra and rectum, and a smaller number of needles required for an implant. Historically, IMRT requires modulation of fluence while keeping the photon beam energy fixed. The IMRT-related investigation in this thesis aimed at broadening the solution space by varying photon energy. The problem therefore involved simultaneous optimization of photon beamlet energy and fluence, denoted by XMRT. Formulating the problem as convex, linear programming was applied to obtain solutions for optimal energy-dependent fluences, while achieving all clinical objectives and constraints imposed. Dosimetric advantages of XMRT over single-energy IMRT in the improved sparing of organs at risk (OARs) was demonstrated in simplified phantom studies. The XMRT algorithm was improved to include clinical dose-volume constraints and clinical studies for prostate and head and neck cancer patients were investigated. Compared to IMRT, XMRT provided improved dosimetric benefit in the prostate case, particularly within intermediate- to low-dose regions (≤ 40 Gy) for OARs. For head and neck cases, XMRT solutions showed no significant disadvantage or advantage over IMRT. The deliverability concerns for the fluence maps generated from XMRT were addressed by incorporating smoothing constraints during the optimization and through successful generation of treatment machine files. Further research is needed to explore the full potential of the XMRT approach to RT.

  14. SU-E-T-268: Differences in Treatment Plan Quality and Delivery Between Two Commercial Treatment Planning Systems for Volumetric Arc-Based Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S; Zhang, H; Zhang, B

    2015-06-15

    Purpose: To clinically evaluate the differences in volumetric modulated arc therapy (VMAT) treatment plan and delivery between two commercial treatment planning systems. Methods: Two commercial VMAT treatment planning systems with different VMAT optimization algorithms and delivery approaches were evaluated. This study included 16 clinical VMAT plans performed with the first system: 2 spine, 4 head and neck (HN), 2 brain, 4 pancreas, and 4 pelvis plans. These 16 plans were then re-optimized with the same number of arcs using the second treatment planning system. Planning goals were invariant between the two systems. Gantry speed, dose rate modulation, MLC modulation, plan quality, number of monitor units (MUs), VMAT quality assurance (QA) results, and treatment delivery time were compared between the 2 systems. VMAT QA results were performed using Mapcheck2 and analyzed with gamma analysis (3mm/3% and 2mm/2%). Results: Similar plan quality was achieved with each VMAT optimization algorithm, and the difference in delivery time was minimal. Algorithm 1 achieved planning goals by highly modulating the MLC (total distance traveled by leaves (TL) = 193 cm average over control points per plan), while maintaining a relatively constant dose rate (dose-rate change <100 MU/min). Algorithm 2 involved less MLC modulation (TL = 143 cm per plan), but greater dose-rate modulation (range = 0-600 MU/min). The average number of MUs was 20% less for algorithm 2 (ratio of MUs for algorithms 2 and 1 ranged from 0.5-1). VMAT QA results were similar for all disease sites except HN plans. For HN plans, the average gamma passing rates were 88.5% (2mm/2%) and 96.9% (3mm/3%) for algorithm 1 and 97.9% (2mm/2%) and 99.6% (3mm/3%) for algorithm 2. Conclusion: Both VMAT optimization algorithms achieved comparable plan quality; however, fewer MUs were needed and QA results were more robust for Algorithm 2, which more highly modulated dose rate.

  15. MLP based LOGSIG transfer function for solar generation monitoring

    NASA Astrophysics Data System (ADS)

    Hashim, Fakroul Ridzuan; Din, Muhammad Faiz Md; Ahmad, Shahril; Arif, Farah Khairunnisa; Rizman, Zairi Ismael

    2018-02-01

    Solar panels are a renewable energy source that can reduce environmental pollution and have a wide range of potential applications. An accurate solar prediction model has a large impact on the management of solar power plants and the design of solar energy systems. This paper uses a Multilayer Perceptron (MLP) neural network with a logistic sigmoid (LOGSIG) transfer function. The MLP network can be used to calculate the module temperature (TM) in Malaysia. This is done by using the collected data of four weather variables, the ambient temperature (TA), local wind speed (VW), solar radiation flux (GT) and relative humidity (RH), as the input to the neural network. The transfer function is applied with 14 types of training algorithms. Finally, an equation from the best training algorithm is deduced to calculate the module temperature based on the input weather variables in Malaysia.
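
    A hedged sketch of the approach described above: a multilayer perceptron with a logistic-sigmoid (LOGSIG-like) hidden activation maps the four weather variables (TA, VW, GT, RH) to the module temperature TM. The synthetic data, network size, and training settings are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    TA = rng.uniform(24, 36, 500)          # ambient temperature, deg C
    VW = rng.uniform(0, 6, 500)            # local wind speed, m/s
    GT = rng.uniform(0, 1000, 500)         # solar radiation flux, W/m^2
    RH = rng.uniform(40, 95, 500)          # relative humidity, %
    TM = TA + 0.03 * GT - 1.5 * VW + rng.normal(0, 1, 500)   # toy module-temperature relation, not measured data

    X = np.column_stack([TA, VW, GT, RH])
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",   # logistic sigmoid hidden layer
                     solver="lbfgs", max_iter=2000, random_state=0))
    model.fit(X, TM)
    print("training R^2:", round(model.score(X, TM), 3))
    ```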

  16. Solar-cell interconnect design for terrestrial photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1984-01-01

    Useful solar cell interconnect reliability design and life prediction algorithms are presented, together with experimental data indicating that the classical strain cycle (fatigue) curve for the interconnect material does not account for the statistical scatter that is required in reliability predictions. This shortcoming is presently addressed by fitting a functional form to experimental cumulative interconnect failure rate data, which thereby yields statistical fatigue curves enabling not only the prediction of cumulative interconnect failures during the design life of an array field, but also the quantitative interpretation of data from accelerated thermal cycling tests. Optimal interconnect cost reliability design algorithms are also derived which may allow the minimization of energy cost over the design life of the array field.

  17. Solar-cell interconnect design for terrestrial photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1984-11-01

    Useful solar cell interconnect reliability design and life prediction algorithms are presented, together with experimental data indicating that the classical strain cycle (fatigue) curve for the interconnect material does not account for the statistical scatter that is required in reliability predictions. This shortcoming is presently addressed by fitting a functional form to experimental cumulative interconnect failure rate data, which thereby yields statistical fatigue curves enabling not only the prediction of cumulative interconnect failures during the design life of an array field, but also the quantitative interpretation of data from accelerated thermal cycling tests. Optimal interconnect cost reliability design algorithms are also derived which may allow the minimization of energy cost over the design life of the array field.

  18. Image reconstruction through thin scattering media by simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Zhang, Xicheng; Zhu, Jianhua

    2018-07-01

    An approach for reconstructing the image of an object behind thin scattering media using phase modulation is proposed. An optimized phase mask is obtained by modulating the scattered light with a simulated annealing algorithm. The correlation coefficient is used as a fitness function to evaluate the quality of the reconstructed image. The reconstructed images optimized by the simulated annealing algorithm and by a genetic algorithm are compared in detail. The experimental results show that the proposed method achieves better definition and higher speed than the genetic algorithm.
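
    An illustrative simulated-annealing sketch of the phase-mask optimization described above, with the correlation coefficient as the fitness function as in the abstract. The scattering medium is modelled by a made-up random transmission matrix, and the segment count, cooling schedule and target image are assumptions for the example only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N_SEGMENTS, N_PIXELS = 64, 256
    T_MATRIX = rng.normal(size=(N_PIXELS, N_SEGMENTS)) + 1j * rng.normal(size=(N_PIXELS, N_SEGMENTS))
    TARGET = np.abs(T_MATRIX.sum(axis=1)) ** 2            # toy reference image (optimum: all phases zero)

    def fitness(phases):
        image = np.abs(T_MATRIX @ np.exp(1j * phases)) ** 2   # speckle image for this SLM mask
        return np.corrcoef(image, TARGET)[0, 1]               # correlation-coefficient fitness

    phases = rng.uniform(0, 2 * np.pi, N_SEGMENTS)
    current = fitness(phases)
    temperature = 1.0
    for _ in range(5000):
        trial = phases.copy()
        trial[rng.integers(N_SEGMENTS)] = rng.uniform(0, 2 * np.pi)     # perturb one SLM segment
        f = fitness(trial)
        if f > current or rng.random() < np.exp((f - current) / temperature):  # Metropolis acceptance
            phases, current = trial, f
        temperature *= 0.999                                             # geometric cooling schedule
    print("final correlation:", round(float(current), 3))
    ```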

  19. Model documentation report: Commercial Sector Demand Module of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-01-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components. The NEMS Commercial Sector Demand Module is a simulation tool based upon economic and engineering relationships that models commercial sector energy demands at the nine Census Division level of detail for eleven distinct categories of commercial buildings. Commercial equipment selections are performed for the major fuels of electricity, natural gas, and distillate fuel, for the major services of space heating, space cooling, water heating, ventilation, cooking, refrigeration, and lighting. The algorithm also models demand for the minor fuels of residual oil, liquefied petroleum gas, steam coal, motor gasoline, and kerosene, the renewable fuel sources of wood and municipal solid waste, and the minor services of office equipment. Section 2 of this report discusses the purpose of the model, detailing its objectives, primary input and output quantities, and the relationship of the Commercial Module to the other modules of the NEMS system. Section 3 of the report describes the rationale behind the model design, providing insights into further assumptions utilized in the model development process to this point. Section 3 also reviews alternative commercial sector modeling methodologies drawn from existing literature, providing a comparison to the chosen approach. Section 4 details the model structure, using graphics and text to illustrate model flows and key computations.

  20. Investigation of television transmission using adaptive delta modulation principles

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1976-01-01

    The results are presented of a study on the use of the delta modulator as a digital encoder of television signals. The computer simulation of different delta modulators was studied in order to find a satisfactory delta modulator. After finding a suitable delta modulator algorithm via computer simulation, the results were analyzed and then implemented in hardware to study its ability to encode real time motion pictures from an NTSC format television camera. The effects of channel errors on the delta modulated video signal were tested along with several error correction algorithms via computer simulation. A very high speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real time motion pictures. Delta modulators were investigated which could achieve significant bandwidth reduction without regard to complexity or speed. The first scheme investigated was a real time frame to frame encoding scheme which required the assembly of fourteen, 131,000 bit long shift registers as well as a high speed delta modulator. The other schemes involved the computer simulation of two dimensional delta modulator algorithms.
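
    A minimal adaptive delta modulation sketch in the spirit of the encoders discussed above: one bit per sample, with a step size that grows on consecutive identical bits and shrinks otherwise. The adaptation constants are common textbook choices, not the values used in the study.

    ```python
    import numpy as np

    def adm_encode(signal, step0=0.1, grow=1.5, shrink=0.5):
        """One-bit adaptive delta modulation encoder."""
        bits, estimate, step, prev_bit = [], 0.0, step0, 1
        for x in signal:
            bit = 1 if x >= estimate else 0
            step *= grow if bit == prev_bit else shrink    # adapt step size
            estimate += step if bit else -step
            bits.append(bit)
            prev_bit = bit
        return bits

    def adm_decode(bits, step0=0.1, grow=1.5, shrink=0.5):
        """Matching decoder: replays the same step adaptation to rebuild the estimate."""
        out, estimate, step, prev_bit = [], 0.0, step0, 1
        for bit in bits:
            step *= grow if bit == prev_bit else shrink
            estimate += step if bit else -step
            out.append(estimate)
            prev_bit = bit
        return np.array(out)

    t = np.linspace(0, 1, 400)
    video_line = 0.5 + 0.4 * np.sin(2 * np.pi * 5 * t)     # stand-in for one scan line
    decoded = adm_decode(adm_encode(video_line))
    print("mean abs error:", round(float(np.mean(np.abs(decoded - video_line))), 4))
    ```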

  1. An Iterative Time Windowed Signature Algorithm for Time Dependent Transcription Module Discovery

    PubMed Central

    Meng, Jia; Gao, Shou-Jiang; Huang, Yufei

    2010-01-01

    An algorithm for the discovery of time-varying modules using genome-wide expression data is presented here. When applied to large-scale time series data, our method is designed to discover not only the transcription modules but also their timing information, which is rarely annotated by existing approaches. Rather than assuming the commonly defined time-constant transcription modules, a module is depicted as a set of genes that are co-regulated during a specific period of time, i.e., a time dependent transcription module (TDTM). A rigorous mathematical definition of TDTM is provided, which serves as an objective function for retrieving modules. Based on this definition, an effective signature algorithm is proposed that iteratively searches for transcription modules in the time series data. The proposed method was tested on simulated systems and applied to human time series microarray data collected during Kaposi's sarcoma-associated herpesvirus (KSHV) infection. The results were verified with Expression Analysis Systematic Explorer. PMID:21552463

  2. Evaluation of Automatically Assigned Job-Specific Interview Modules.

    PubMed

    Friesen, Melissa C; Lan, Qing; Ge, Calvin; Locke, Sarah J; Hosgood, Dean; Fritschi, Lin; Sadkowsky, Troy; Chen, Yu-Cheng; Wei, Hu; Xu, Jun; Lam, Tai Hing; Kwong, Yok Lam; Chen, Kexin; Xu, Caigang; Su, Yu-Chieh; Chiu, Brian C H; Ip, Kai Ming Dennis; Purdue, Mark P; Bassig, Bryan A; Rothman, Nat; Vermeulen, Roel

    2016-08-01

    In community-based epidemiological studies, job- and industry-specific 'modules' are often used to systematically obtain details about the subject's work tasks. The module assignment is often made by the interviewer, who may have insufficient occupational hygiene knowledge to assign the correct module. We evaluated, in the context of a case-control study of lymphoid neoplasms in Asia ('AsiaLymph'), the performance of an algorithm that provided automatic, real-time module assignment during a computer-assisted personal interview. AsiaLymph's occupational component began with a lifetime occupational history questionnaire with free-text responses and three solvent exposure screening questions. To assign each job to one of 23 study-specific modules, an algorithm automatically searched the free-text responses to the questions 'job title' and 'product made or services provided by employer' using a list of module-specific keywords, comprising over 5800 keywords in English, Traditional and Simplified Chinese. Hierarchical decision rules were used when the keyword match triggered multiple modules. If no keyword match was identified, a generic solvent module was assigned if the subject responded 'yes' to any of the three solvent screening questions. If these question responses were all 'no', a work location module was assigned, which redirected the subject to the farming, teaching, health professional, solvent, or industry solvent modules or ended the questions for that job, depending on the location response. We conducted a reliability assessment that compared the algorithm-assigned modules to consensus module assignments made by two industrial hygienists for a subset of 1251 (of 11409) jobs selected using a stratified random selection procedure using module-specific strata. Discordant assignments between the algorithm and consensus assignments (483 jobs) were qualitatively reviewed by the hygienists to evaluate the potential information lost from missed questions with using the algorithm-assigned module (none, low, medium, high). The most frequently assigned modules were the work location (33%), solvent (20%), farming and food industry (19%), and dry cleaning and textile industry (6.4%) modules. In the reliability subset, the algorithm assignment had an exact match to the expert consensus-assigned module for 722 (57.7%) of the 1251 jobs. Overall, adjusted for the proportion of jobs in each stratum, we estimated that 86% of the algorithm-assigned modules would result in no information loss, 2% would have low information loss, and 12% would have medium to high information loss. Medium to high information loss occurred for <10% of the jobs assigned the generic solvent module and for 21, 32, and 31% of the jobs assigned the work location module with location responses of 'someplace else', 'factory', and 'don't know', respectively. Other work location responses had ≤8% with medium to high information loss because of redirections to other modules. Medium to high information loss occurred more frequently when a job description matched with multiple keywords pointing to different modules (29-69%, depending on the triggered assignment rule). These evaluations demonstrated that automatically assigned modules can reliably reproduce an expert's module assignment without the direct involvement of an industrial hygienist or interviewer. The feasibility of adapting this framework to other studies will be language- and exposure-specific. 
Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2016.
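
    The assignment logic described above is essentially keyword matching on free-text responses with hierarchical tie-breaking and two fallbacks (a solvent-screen fallback and a work-location fallback). The sketch below illustrates that flow; the module names, keyword lists, priority order, and location redirects are illustrative placeholders, not the actual AsiaLymph study materials.

```python
# Hedged sketch of real-time, keyword-based interview-module assignment.
# All keyword lists, module names, and rule orderings here are hypothetical.

MODULE_KEYWORDS = {
    "farming_food": ["farm", "rice", "orchard"],
    "dry_cleaning_textile": ["dry clean", "textile", "dye"],
    "solvent_industry": ["paint", "degreas", "printing"],
}

# Hypothetical priority used when keywords trigger more than one module.
MODULE_PRIORITY = ["dry_cleaning_textile", "solvent_industry", "farming_food"]


def assign_module(job_title, employer_product, solvent_screen_answers, work_location):
    """Return a module name for one job record."""
    text = f"{job_title} {employer_product}".lower()
    hits = {m for m, words in MODULE_KEYWORDS.items() if any(w in text for w in words)}

    if hits:
        # Hierarchical decision rule: pick the highest-priority triggered module.
        return next(m for m in MODULE_PRIORITY if m in hits)

    if any(ans == "yes" for ans in solvent_screen_answers):
        return "generic_solvent"          # fallback when any solvent screen is positive

    # Otherwise redirect based on the reported work location.
    location_redirects = {"farm": "farming_food", "school": "teaching",
                          "hospital": "health_professional"}
    return location_redirects.get(work_location, "work_location")


print(assign_module("machine operator", "textile dyeing services",
                    ["no", "no", "no"], "factory"))
```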

  3. Parameters Identification for Photovoltaic Module Based on an Improved Artificial Fish Swarm Algorithm

    PubMed Central

    Wang, Hong-Hua

    2014-01-01

    A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Unlike traditional linear models, the PV module model is nonlinear and involves multiple parameters. Since conventional methods are incapable of identifying the parameters of a PV module, an effective optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulation of the collective behavior of real fish swarms, is proposed to extract the parameters of a PV module quickly and accurately. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated using various PV module parameters under different environmental conditions, and the testing results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameter identification precision. PMID:25243233
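
    As a rough illustration of the identification problem, the sketch below evaluates the implicit single-diode PV model at measured I-V points and returns the RMSE that a swarm-style optimizer such as AFSA would minimize. The parameter set (photocurrent, saturation current, series and shunt resistance, ideality factor) is standard for this model, but the constants and I-V samples shown are invented for illustration.

```python
import numpy as np

# Hedged sketch: the single-diode PV model and an RMSE objective of the kind a
# swarm optimizer (AFSA, PSO, ...) would minimize. The measured I-V points and
# module constants below are illustrative, not data from the paper.

K_B, Q = 1.380649e-23, 1.602176634e-19


def residual(params, v, i, t_cell=298.15, n_series=36):
    """Implicit single-diode equation evaluated at measured (V, I) points."""
    i_ph, i_0, r_s, r_sh, ideality = params
    v_t = ideality * K_B * t_cell * n_series / Q   # module-level thermal voltage
    return i_ph - i_0 * (np.exp((v + i * r_s) / v_t) - 1.0) - (v + i * r_s) / r_sh - i


def rmse(params, v, i):
    """Fitness used by the parameter-identification search."""
    return float(np.sqrt(np.mean(residual(params, v, i) ** 2)))


# Toy measured I-V curve (illustrative values only).
v_meas = np.array([0.0, 5.0, 10.0, 15.0, 18.0])
i_meas = np.array([3.80, 3.78, 3.70, 3.20, 1.10])
print(rmse([3.8, 1e-7, 0.3, 300.0, 1.3], v_meas, i_meas))
```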

  4. Optimal design of the rotor geometry of line-start permanent magnet synchronous motor using the bat algorithm

    NASA Astrophysics Data System (ADS)

    Knypiński, Łukasz

    2017-12-01

    In this paper an algorithm for the optimization of the excitation system of line-start permanent magnet synchronous motors is presented. On the basis of this algorithm, software was developed in the Borland Delphi environment. The software consists of two independent modules: an optimization solver, and a module including the mathematical model of a synchronous motor with a self-start ability. The optimization module contains the bat algorithm procedure. The mathematical model of the motor has been developed in an Ansys Maxwell environment. In order to determine the functional parameters of the motor, additional scripts in the Visual Basic language were developed. Selected results of the optimization calculation are presented and compared with results for the particle swarm optimization algorithm.

  5. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD Without ID: A Multi-site Study.

    PubMed

    Pugliese, Cara E; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B; White, Susan W; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D; Schultz, Robert T; Martin, Alex; Anthony, Laura Gutermuth

    2015-12-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verbal IQ below 85 or above 115, and ages 16 and older. Best practice diagnostic procedures should include the Module 4 in conjunction with other assessment tools. Balancing needs for sensitivity and specificity depending on the purpose of assessment (e.g., clinical vs. research) and demographic characteristics mentioned above will enhance its utility.

  6. Evaluation of Automatically Assigned Job-Specific Interview Modules

    PubMed Central

    Friesen, Melissa C.; Lan, Qing; Ge, Calvin; Locke, Sarah J.; Hosgood, Dean; Fritschi, Lin; Sadkowsky, Troy; Chen, Yu-Cheng; Wei, Hu; Xu, Jun; Lam, Tai Hing; Kwong, Yok Lam; Chen, Kexin; Xu, Caigang; Su, Yu-Chieh; Chiu, Brian C. H.; Ip, Kai Ming Dennis; Purdue, Mark P.; Bassig, Bryan A.; Rothman, Nat; Vermeulen, Roel

    2016-01-01

    Objective: In community-based epidemiological studies, job- and industry-specific ‘modules’ are often used to systematically obtain details about the subject’s work tasks. The module assignment is often made by the interviewer, who may have insufficient occupational hygiene knowledge to assign the correct module. We evaluated, in the context of a case–control study of lymphoid neoplasms in Asia (‘AsiaLymph’), the performance of an algorithm that provided automatic, real-time module assignment during a computer-assisted personal interview. Methods: AsiaLymph’s occupational component began with a lifetime occupational history questionnaire with free-text responses and three solvent exposure screening questions. To assign each job to one of 23 study-specific modules, an algorithm automatically searched the free-text responses to the questions ‘job title’ and ‘product made or services provided by employer’ using a list of module-specific keywords, comprising over 5800 keywords in English, Traditional and Simplified Chinese. Hierarchical decision rules were used when the keyword match triggered multiple modules. If no keyword match was identified, a generic solvent module was assigned if the subject responded ‘yes’ to any of the three solvent screening questions. If these question responses were all ‘no’, a work location module was assigned, which redirected the subject to the farming, teaching, health professional, solvent, or industry solvent modules or ended the questions for that job, depending on the location response. We conducted a reliability assessment that compared the algorithm-assigned modules to consensus module assignments made by two industrial hygienists for a subset of 1251 (of 11409) jobs selected using a stratified random selection procedure using module-specific strata. Discordant assignments between the algorithm and consensus assignments (483 jobs) were qualitatively reviewed by the hygienists to evaluate the potential information lost from missed questions with using the algorithm-assigned module (none, low, medium, high). Results: The most frequently assigned modules were the work location (33%), solvent (20%), farming and food industry (19%), and dry cleaning and textile industry (6.4%) modules. In the reliability subset, the algorithm assignment had an exact match to the expert consensus-assigned module for 722 (57.7%) of the 1251 jobs. Overall, adjusted for the proportion of jobs in each stratum, we estimated that 86% of the algorithm-assigned modules would result in no information loss, 2% would have low information loss, and 12% would have medium to high information loss. Medium to high information loss occurred for <10% of the jobs assigned the generic solvent module and for 21, 32, and 31% of the jobs assigned the work location module with location responses of ‘someplace else’, ‘factory’, and ‘don’t know’, respectively. Other work location responses had ≤8% with medium to high information loss because of redirections to other modules. Medium to high information loss occurred more frequently when a job description matched with multiple keywords pointing to different modules (29–69%, depending on the triggered assignment rule). Conclusions: These evaluations demonstrated that automatically assigned modules can reliably reproduce an expert’s module assignment without the direct involvement of an industrial hygienist or interviewer. The feasibility of adapting this framework to other studies will be language- and exposure-specific. PMID:27250109

  7. A control strategy for PV stand-alone applications

    NASA Astrophysics Data System (ADS)

    Slouma, S.; Baccar, H.

    2015-04-01

    This paper proposes a study of a stand-alone photovoltaic (PV) system for domestic applications. Because changes in solar radiation and temperature reduce the output power of a photovoltaic module, the design and control of a DC-DC buck converter is proposed for providing power to the load from the photovoltaic source. The control of this converter incorporates an MPPT (Maximum Power Point Tracking) algorithm, which ensures maximum energy extraction from the PV arrays. Moreover, the output stage is composed of a battery energy storage system, a DC-AC inverter, and an LCL filter, which enables higher efficiency, low-distortion AC waveforms, and low leakage currents. The adopted control strategy is a cascade control composed of two regulation loops. Simulations performed with PSIM software validated the control system. The realization and testing of the photovoltaic system were carried out in the photovoltaic laboratory of the Centre for Research and Energy Technologies at the Technopark Borj Cedria. Experimental results verify the efficiency of the proposed system.
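
    The abstract does not state which MPPT method drives the buck converter, so the sketch below assumes perturb-and-observe, one common choice; the duty-cycle step, the toy panel model, and the callback names are illustrative only.

```python
# Hedged sketch of a perturb-and-observe MPPT loop (an assumed method; the
# paper only states that an MPPT algorithm controls the buck converter).

def perturb_and_observe(read_panel_v, read_panel_i, set_duty,
                        d0=0.5, step=0.01, iterations=200):
    """Hill-climb the converter duty cycle toward the PV maximum power point."""
    duty, prev_power, direction = d0, 0.0, +1
    for _ in range(iterations):
        set_duty(duty)
        power = read_panel_v() * read_panel_i()
        if power < prev_power:
            direction = -direction        # last perturbation hurt: reverse direction
        prev_power = power
        duty = min(max(duty + direction * step, 0.0), 1.0)
    return duty


# Toy electrical model used only to exercise the loop (not a converter simulation).
V_OC, I_SC = 20.0, 4.0
state = {"duty": 0.5}
def set_duty(d): state["duty"] = d
def read_v(): return V_OC * (1.0 - 0.8 * state["duty"])
def read_i(): return I_SC * (1.0 - (read_v() / V_OC) ** 8)

print("final duty:", round(perturb_and_observe(read_v, read_i, set_duty), 3))
```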

  8. Range-azimuth decouple beamforming for frequency diverse array with Costas-sequence modulated frequency offsets

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Wang, Wen-Qin; Shao, Huaizong

    2016-12-01

    Different from the phased-array using the same carrier frequency for each transmit element, the frequency diverse array (FDA) uses a small frequency offset across the array elements to produce range-angle-dependent transmit beampattern. FDA radar provides new application capabilities and potentials due to its range-dependent transmit array beampattern, but the FDA using linearly increasing frequency offsets will produce a range and angle coupled transmit beampattern. In order to decouple the range-azimuth beampattern for FDA radar, this paper proposes a uniform linear array (ULA) FDA using Costas-sequence modulated frequency offsets to produce random-like energy distribution in the transmit beampattern and thumbtack transmit-receive beampattern. In doing so, the range and angle of targets can be unambiguously estimated through matched filtering and subspace decomposition algorithms in the receiver signal processor. Moreover, random-like energy distributed beampattern can also be utilized for low probability of intercept (LPI) radar applications. Numerical results show that the proposed scheme outperforms the standard FDA in focusing the transmit energy, especially in the range dimension.

  9. An efficient algorithm for pairwise local alignment of protein interaction networks

    DOE PAGES

    Chen, Wenbin; Schmidt, Matthew; Tian, Wenhong; ...

    2015-04-01

    Recently, researchers seeking to understand, modify, and create beneficial traits in organisms have looked for evolutionarily conserved patterns of protein interactions. Their conservation likely means that the proteins of these conserved functional modules are important to the trait's expression. In this paper, we formulate the problem of identifying these conserved patterns as a graph optimization problem, and develop a fast heuristic algorithm for this problem. We compare the performance of our network alignment algorithm to that of the MaWISh algorithm [Koyuturk M, Kim Y, Topkara U, Subramaniam S, Szpankowski W, Grama A, Pairwise alignment of protein interaction networks, J Comput Biol 13(2): 182-199, 2006.], which bases its search algorithm on a related decision problem formulation. We find that our algorithm discovers conserved modules with a larger number of proteins in an order of magnitude less time. In conclusion, the protein sets found by our algorithm correspond to known conserved functional modules at comparable precision and recall rates as those produced by the MaWISh algorithm.

  10. Toward a Predictive Model of Arctic Coastal Retreat in a Warming Climate, Beaufort Sea, Alaska

    DTIC Science & Technology

    2012-09-30

    Water level is modulated by waves, surge, and tide. Melt rate is governed by an empirically based iceberg melting algorithm that...examination of environmental conditions, modified iceberg melting rules, and energy fluxes to the coast establish that water depth, water temperature and...photography, Arctic Alpine Antarctic Research 43(3): 474-484. (includes cover photo of this issue) Matell, N., R. S. Anderson, I. Overeem, C. Wobus

  11. Design and Implementation of a Smart LED Lighting System Using a Self Adaptive Weighted Data Fusion Algorithm

    PubMed Central

    Sung, Wen-Tsai; Lin, Jia-Syun

    2013-01-01

    This work aims to develop a smart LED lighting system, which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of a system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is made through a self adaptive weighted data fusion algorithm. A low variation in data fusion together with a high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by command given on the human computer interface, and the reading on a multimeter can be displayed thereon via the server. This proposed smart LED lighting system can be remotely controlled and self learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.
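
    A common form of self-adaptive weighted fusion weights each sensor by the inverse of its estimated variance; the sketch below assumes that form, which may differ in detail from the paper's algorithm, and the illuminance readings are invented.

```python
import numpy as np

# Hedged sketch of inverse-variance weighted data fusion, a common realization
# of "self-adaptive weighted" fusion; sensor readings below are illustrative.

def adaptive_weighted_fusion(readings):
    """Fuse repeated readings from several sensors.

    readings: 2-D array, shape (n_sensors, n_samples).
    Each sensor's weight adapts to its own sample variance, so noisier
    sensors contribute less to the fused estimate.
    """
    readings = np.asarray(readings, dtype=float)
    means = readings.mean(axis=1)
    variances = readings.var(axis=1, ddof=1)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    return float(np.dot(weights, means)), weights


lux_readings = [
    [502.0, 498.5, 501.2, 499.9],   # stable sensor
    [510.0, 489.0, 515.0, 485.0],   # noisy sensor -> smaller weight
]
fused, w = adaptive_weighted_fusion(lux_readings)
print(f"fused illuminance = {fused:.1f} lux, weights = {np.round(w, 3)}")
```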

  12. Sparse-view proton computed tomography using modulated proton beams.

    PubMed

    Lee, Jiseoc; Kim, Changhwan; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong; Cho, Seungryong

    2015-02-01

    Proton imaging that uses a modulated proton beam and an intensity detector allows a relatively fast image acquisition compared to the imaging approach based on a trajectory tracking detector. In addition, it requires a relatively simple implementation in a conventional proton therapy equipment. The model of geometric straight ray assumed in conventional computed tomography (CT) image reconstruction is however challenged by multiple-Coulomb scattering and energy straggling in the proton imaging. Radiation dose to the patient is another important issue that has to be taken care of for practical applications. In this work, the authors have investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Proton projection images were acquired using the modulated proton beams and the EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as imaged object and scanned at 40 views that are equally separated over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. For improving the image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. They have implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). Performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square-error (RMSE). Objects of higher electron density have been reconstructed more accurately than those of lower density objects. The bone, for example, has been reconstructed within 1% error. EM-based algorithms produced an increased image noise and RMSE as the iteration reaches about 20, while the POCS-based algorithms showed a monotonic convergence with iterations. The ASD-POCS algorithm outperformed the others in terms of CNR, RMSE, and the accuracy of the reconstructed relative stopping power in the region of lung and soft tissues. The four iterative algorithms, i.e., ASD-POCS, SM-POCS, SM-EM, and EM-TV, have been developed and applied for proton CT image reconstruction. Although it still seems that the images need to be improved for practical applications to the treatment planning, proton CT imaging by use of the modulated beams in sparse-view sampling has demonstrated its feasibility.

  13. SU-E-T-368: Evaluating Dosimetric Outcome of Modulated Photon Radiotherapy (XMRT) Optimization for Head and Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGeachy, P; Villarreal-Barajas, JE; Khan, R

    2015-06-15

    Purpose: The dosimetric outcome of optimized treatment plans obtained by modulating the photon beamlet energy and fluence on a small cohort of four Head and Neck (H and N) patients was investigated. This novel optimization technique is denoted XMRT for modulated photon radiotherapy. The dosimetric plans from XMRT for H and N treatment were compared to conventional, 6 MV intensity modulated radiotherapy (IMRT) optimization plans. Methods: An arrangement of two non-coplanar and five coplanar beams was used for all four H and N patients. Both XMRT and IMRT were subject to the same optimization algorithm, with XMRT optimization allowing both 6 and 18 MV beamlets while IMRT was restricted to 6 MV only. The optimization algorithm was based on a linear programming approach with partial-volume constraints implemented via the conditional value-at-risk method. H and N constraints were based off of those mentioned in the Radiation Therapy Oncology Group 1016 protocol. XMRT and IMRT solutions were assessed using metrics suggested by International Commission on Radiation Units and Measurements report 83. The Gurobi solver was used in conjunction with the CVX package to solve each optimization problem. Dose calculations and analysis were done in CERR using Monte Carlo dose calculation with VMC++. Results: Both XMRT and IMRT solutions met all clinical criteria. Trade-offs were observed between improved dose uniformity to the primary target volume (PTV1) and increased dose to some of the surrounding healthy organs for XMRT compared to IMRT. On average, IMRT improved dose to the contralateral parotid gland and spinal cord while XMRT improved dose to the brainstem and mandible. Conclusion: Bi-energy XMRT optimization for H and N patients provides benefits in terms of improved dose uniformity to the primary target and reduced dose to some healthy structures, at the expense of increased dose to other healthy structures when compared with IMRT.

  14. Evaluation of the ADOS Revised Algorithm: The Applicability in 558 Dutch Children and Adolescents

    ERIC Educational Resources Information Center

    de Bildt, Annelies; Sytema, Sjoerd; van Lang, Natasja D. J.; Minderaa, Ruud B.; van Engeland, Herman; de Jonge, Maretha V.

    2009-01-01

    The revised ADOS algorithms, proposed by Gotham et al. (J Autism Dev Disord 37:613-627, 2007), were investigated in an independent sample of 558 Dutch children (modules 1, 2 and 3). The revised algorithms lead to better balanced sensitivity and specificity for modules 2 and 3, without losing efficiency of the classification. Including the…

  15. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-site Study

    PubMed Central

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verbal IQ below 85 or above 115, and ages 16 and older. Best practice diagnostic procedures should include the Module 4 in conjunction with other assessment tools. Balancing needs for sensitivity and specificity depending on the purpose of assessment (e.g., clinical vs. research) and demographic characteristics mentioned above will enhance its utility. PMID:26385796

  16. MT's algorithm: A new algorithm to search for the optimum set of modulation indices for simultaneous range, command, and telemetry

    NASA Technical Reports Server (NTRS)

    Nguyen, Tien Manh

    1989-01-01

    MT's algorithm was developed as an aid in the design of space telecommunications systems when utilized with simultaneous range/command/telemetry operations. This algorithm provides selection of modulation indices for: (1) suppression of undesired signals to achieve desired link performance margins and/or to allow for a specified performance degradation in the data channel (command/telemetry) due to the presence of undesired signals (interferers); and (2) optimum power division between the carrier, the range, and the data channel. A software program using this algorithm was developed for use with MathCAD software. This software program, called the MT program, provides the computation of optimum modulation indices for all possible cases that are recommended by the Consultative Committee for Space Data Systems (CCSDS) (with emphasis on the squarewave, NASA/JPL ranging system).
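
    For residual-carrier phase modulation by two squarewave signals (the NASA/JPL squarewave ranging case emphasized above), the power split between carrier, ranging, data, and intermodulation terms follows simple sine/cosine relations; sinewave subcarriers would involve Bessel-function factors instead. The sketch below evaluates those fractions and runs a toy grid search for indices meeting hypothetical margin requirements, and is only a simplified stand-in for MT's algorithm, not a reproduction of it.

```python
import math

# Hedged sketch of the power-division relations a modulation-index search must
# evaluate for two squarewave modulating signals (ranging index m_r, data
# index m_d). The margin thresholds in the grid search are invented numbers.

def power_fractions(m_r, m_d):
    """Fractions of total power in carrier, ranging, data, and intermod terms."""
    carrier = math.cos(m_r) ** 2 * math.cos(m_d) ** 2
    ranging = math.sin(m_r) ** 2 * math.cos(m_d) ** 2
    data = math.cos(m_r) ** 2 * math.sin(m_d) ** 2
    intermod = math.sin(m_r) ** 2 * math.sin(m_d) ** 2
    return carrier, ranging, data, intermod


# Toy grid search: maximize ranging power subject to hypothetical carrier and
# data power requirements (illustrative only).
best = None
for m_r in [i * 0.05 for i in range(1, 25)]:
    for m_d in [i * 0.05 for i in range(1, 25)]:
        pc, pr, pd, pi = power_fractions(m_r, m_d)
        if pc >= 0.30 and pd >= 0.40 and (best is None or pr > best[0]):
            best = (pr, m_r, m_d)
print(best)
```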

  17. Hardware architecture design of image restoration based on time-frequency domain computation

    NASA Astrophysics Data System (ADS)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

    Image restoration algorithms based on time-frequency domain computation (TFDC) are highly mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the common processing steps and numerical calculations. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, the necessary optimizations are proposed for the time-consuming modules, which include the two-dimensional FFT/IFFT and complex-number calculations. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results show that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm generality, hardware realizability and high efficiency.
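
    As a point of reference for the kind of computation such an architecture accelerates, the sketch below performs a basic Wiener deconvolution built from 2-D FFT/IFFT and element-wise complex arithmetic; the checkerboard test image, Gaussian blur kernel, and noise-to-signal constant are illustrative, and the paper's specific restoration algorithms are not reproduced.

```python
import numpy as np

# Hedged sketch of a frequency-domain restoration kernel (Wiener deconvolution)
# of the type a TFDC hardware pipeline would accelerate.

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-2):
    """Restore an image given its point-spread function (same-shape arrays)."""
    h = np.fft.fft2(np.fft.ifftshift(psf))          # PSF spectrum
    g = np.fft.fft2(blurred)                        # degraded-image spectrum
    wiener = np.conj(h) / (np.abs(h) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(wiener * g))


# Tiny demonstration with a synthetic image and Gaussian blur (illustrative).
n = 64
y, x = np.mgrid[:n, :n]
image = ((x // 16 + y // 16) % 2).astype(float)     # checkerboard test pattern
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
print("mean restoration error:", np.abs(restored - image).mean())
```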

  18. The effect of interference on delta modulation encoded video signals

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1979-01-01

    The results of a study on the use of the delta modulator as a digital encoder of television signals are presented. Different delta modulators were studied in computer simulation in order to find a satisfactory design. After a suitable delta modulator algorithm was identified via simulation, it was analyzed and then implemented in hardware to study its ability to encode real-time motion pictures from an NTSC format television camera. The effects of channel errors on the delta-modulated video signal were investigated, and several error correction algorithms were tested via computer simulation. A very high speed delta modulator was built out of ECL logic, incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. The final area of investigation concerned finding delta modulators that could achieve significant bandwidth reduction without regard to complexity or speed. The first such scheme investigated was a real-time frame-to-frame encoding scheme, which required the assembly of fourteen 131,000-bit shift registers as well as a high speed delta modulator. The other schemes involved two-dimensional delta modulator algorithms.
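
    For readers unfamiliar with the basic encoder, the sketch below shows a plain linear delta modulator on a one-dimensional waveform; the adaptive step sizes, frame-to-frame prediction, and error correction studied in the report are omitted, and the step size and test signal are arbitrary.

```python
import math

# Hedged sketch of a basic (non-adaptive) delta modulator: transmit one bit per
# sample indicating the sign of the tracking error, decode by integrating.

def delta_modulate(samples, step=0.1):
    bits, estimate = [], 0.0
    for x in samples:
        bit = 1 if x >= estimate else 0          # only the sign of the error is sent
        estimate += step if bit else -step       # encoder-side tracked estimate
        bits.append(bit)
    return bits


def delta_demodulate(bits, step=0.1):
    out, estimate = [], 0.0
    for bit in bits:
        estimate += step if bit else -step
        out.append(estimate)
    return out


signal = [math.sin(2 * math.pi * k / 64) for k in range(256)]
bits = delta_modulate(signal)
recovered = delta_demodulate(bits)
print("mean |error|:", sum(abs(a - b) for a, b in zip(signal, recovered)) / len(signal))
```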

  19. The application of the Luus-Jaakola direct search method to the optimization of a hybrid renewable energy system

    NASA Astrophysics Data System (ADS)

    Jatzeck, Bernhard Michael

    2000-10-01

    The application of the Luus-Jaakola direct search method to the optimization of stand-alone hybrid energy systems consisting of wind turbine generators (WTG's), photovoltaic (PV) modules, batteries, and an auxiliary generator was examined. The loads for these systems were for agricultural applications, with the optimization conducted on the basis of minimum capital, operating, and maintenance costs. Five systems were considered: two near Edmonton, Alberta, and one each near Lethbridge, Alberta, Victoria, British Columbia, and Delta, British Columbia. The optimization algorithm used hourly data for the load demand, WTG output power/area, and PV module output power. These hourly data were in two sets: seasonal (summer and winter values separated) and total (summer and winter values combined). The costs for the WTG's, PV modules, batteries, and auxiliary generator fuel were full market values. To examine the effects of price discounts or tax incentives, these values were lowered to 25% of the full costs for the energy sources and two-thirds of the full cost for agricultural fuel. Annual costs for a renewable energy system depended upon the load, location, component costs, and which data set (seasonal or total) was used. For one Edmonton load, the cost for a renewable energy system consisting of 27.01 m2 of WTG area, 14 PV modules, and 18 batteries (full price, total data set) was 6873/year. For Lethbridge, a system with 22.85 m2 of WTG area, 47 PV modules, and 5 batteries (reduced prices, seasonal data set) cost 2913/year. The performance of renewable energy systems based on the obtained results was tested in a simulation using load and weather data for selected days. Test results for one Edmonton load showed that the simulations for most of the systems examined ran for at least 17 hours per day before failing due to either an excessive load on the auxiliary generator or a battery constraint being violated. Additional testing indicated that increasing the generator capacity and reducing the maximum allowed battery charge current during the time of the day at which these failures occurred allowed the simulation to successfully operate.
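
    The Luus-Jaakola method itself is a simple direct search: sample candidates uniformly in a region around the incumbent solution and contract the region after each pass. The sketch below shows that structure with an invented quadratic objective standing in for the hybrid-system cost model; the pass count, sample count, and contraction factor are illustrative.

```python
import random

# Hedged sketch of the Luus-Jaakola direct search. The objective below is a
# stand-in; the thesis minimized annualized capital, operating, and maintenance
# cost over quantities such as WTG area, PV module count, and battery count.

def luus_jaakola(objective, x0, half_widths, passes=50, samples=40, contraction=0.95):
    best_x, best_f = list(x0), objective(x0)
    widths = list(half_widths)
    for _ in range(passes):
        for _ in range(samples):
            cand = [best_x[i] + random.uniform(-widths[i], widths[i])
                    for i in range(len(x0))]
            f = objective(cand)
            if f < best_f:
                best_x, best_f = cand, f
        widths = [w * contraction for w in widths]   # shrink the search region
    return best_x, best_f


# Illustrative smooth test objective (not the hybrid-system cost model).
def cost(x):
    return (x[0] - 3.0) ** 2 + (x[1] - 1.5) ** 2 + 0.1 * abs(x[2])

print(luus_jaakola(cost, x0=[10.0, 10.0, 10.0], half_widths=[10.0, 10.0, 10.0]))
```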

  20. Efficient and accurate Greedy Search Methods for mining functional modules in protein interaction networks.

    PubMed

    He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei

    2012-06-25

    Most computational algorithms mainly focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization. Furthermore, many of these algorithms are computationally expensive. However, recent analysis indicates that experimentally detected protein complexes generally contain Core/attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on the edge weight and two criteria for determining core nodes and attachment nodes. The GSM-CA method improves the prediction accuracy compared to other similar module detection approaches, however it is computationally expensive. Many module detection approaches are based on the traditional hierarchical methods, which is also computationally inefficient because the hierarchical tree structure produced by these approaches cannot provide adequate information to identify whether a network belongs to a module structure or not. In order to speed up the computational process, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge weight based GSM-FC method uses a greedy procedure to traverse all edges just once to separate the network into the suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match the known complexes. Results also demonstrate that the GSM-FC algorithm is faster and more accurate as compared to other competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantages of the greedy search procedure to separate the network into the suitable set of modules. Experimental analysis shows that the identified modules are statistically significant. The algorithm can reduce the computational time significantly while keeping high prediction accuracy.

  1. Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation

    NASA Astrophysics Data System (ADS)

    Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari

    2016-07-01

    In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiance. Due to the variations of maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for the given number of PV modules, the optimal size and operating condition of a PV/EL system are achieved. The approach can be applied for different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and the PV system, the energy transfer efficiency of PV/EL system can reach up to 97.83%.
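
    The sketch below shows a generic particle swarm optimizer of the kind used for such sizing and operating-point searches; the inertia and acceleration coefficients are common textbook values, and the quadratic objective is a placeholder rather than the paper's PV/electrolyser hydrogen-production model.

```python
import random

# Hedged sketch of a basic particle swarm optimizer; objective and bounds are
# placeholders, not the PV/EL system model.

def pso(objective, bounds, n_particles=30, iterations=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda k: pbest_f[k])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iterations):
        for k in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[k][d] = (w * vel[k][d]
                             + c1 * r1 * (pbest[k][d] - pos[k][d])
                             + c2 * r2 * (gbest[d] - pos[k][d]))
                pos[k][d] = min(max(pos[k][d] + vel[k][d], bounds[d][0]), bounds[d][1])
            f = objective(pos[k])
            if f < pbest_f[k]:
                pbest[k], pbest_f[k] = pos[k][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[k][:], f
    return gbest, gbest_f


# Placeholder objective: a toy two-variable sizing surface (illustrative only).
print(pso(lambda x: (x[0] - 2.0) ** 2 + (x[1] - 5.0) ** 2, bounds=[(0, 10), (0, 10)]))
```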

  2. Simulation of a cascaded longitudinal space charge amplifier for coherent radiation generation

    DOE PAGES

    Halavanau, A.; Piot, P.

    2016-03-03

    Longitudinal space charge (LSC) effects are generally considered as harmful in free-electron lasers as they can seed unfavorable energy modulations that can result in density modulations with associated emittance dilution. It was pointed out, however, that such "micro-bunching instabilities" could be potentially useful to support the generation of broadband coherent radiation. Therefore there has been an increasing interest in devising accelerator beam lines capable of controlling LSC induced density modulations. In the present paper we augment these previous investigations by combining a grid-less space charge algorithm with the popular particle-tracking program elegant. This high-fidelity model of the space charge is used to benchmark conventional LSC models. We then employ the developed model to optimize the performance of a cascaded longitudinal space charge amplifier using beam parameters comparable to the ones achievable at the Fermilab Accelerator Science & Technology (FAST) facility currently under commissioning at Fermilab.

  3. The Autism Diagnostic Observation Schedule, Module 4: Application of the Revised Algorithms in an Independent, Well-Defined, Dutch Sample (N = 93)

    ERIC Educational Resources Information Center

    de Bildt, Annelies; Sytema, Sjoerd; Meffert, Harma; Bastiaansen, Jojanneke A. C. J.

    2016-01-01

    This study examined the discriminative ability of the revised Autism Diagnostic Observation Schedule module 4 algorithm (Hus and Lord in "J Autism Dev Disord" 44(8):1996-2012, 2014) in 93 Dutch males with Autism Spectrum Disorder (ASD), schizophrenia, psychopathy or controls. Discriminative ability of the revised algorithm ASD cut-off…

  4. Quantum-behaved particle swarm optimization for the synthesis of fibre Bragg gratings filter

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Sun, Yunxu; Yao, Yong; Tian, Jiajun; Cong, Shan

    2011-12-01

    A method based on the quantum-behaved particle swarm optimization (QPSO) algorithm is presented to design a bandpass filter from fibre Bragg gratings. In contrast to other optimization algorithms such as the genetic algorithm and the particle swarm optimization algorithm, this method is simpler and easier to implement. To demonstrate the effectiveness of the QPSO algorithm, we consider a bandpass filter with a half-bandwidth of 0.05 nm and a Bragg wavelength of 1550 nm. The 2 cm grating length is divided into 40 uniform sections; the index modulation of each section is the quantity to be optimized, and the whole feasible solution space is searched for it. After the index modulation profile is known for all the sections, the transfer matrix method is used to verify the final optimal index modulation by calculating the reflection spectrum. The results show that the group delay is less than 12 ps in band and the calculated dispersion is relatively flat inside the passband. It is further found that the reflection spectrum has sidelobes around -30 dB and the worst in-band dispersion value is less than 200 ps/nm. In addition, for this design, it takes approximately several minutes to find acceptable index modulation values on a notebook computer.
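
    The quantum-behaved variant replaces the velocity update of standard PSO with a sampling step around a local attractor, using the mean of the personal bests ("mbest") and a logarithmic jump length. The sketch below shows that update with a toy objective standing in for the transfer-matrix reflection-spectrum error; the contraction-expansion coefficient and the bounds are illustrative.

```python
import math
import random

# Hedged sketch of the QPSO position update; the objective is a toy stand-in,
# not the FBG transfer-matrix spectrum error used in the paper.

def qpso(objective, bounds, n_particles=25, iterations=150, beta=0.75):
    dim = len(bounds)
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    pbest = [p[:] for p in x]
    pbest_f = [objective(p) for p in x]
    g = min(range(n_particles), key=lambda k: pbest_f[k])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iterations):
        mbest = [sum(pbest[k][d] for k in range(n_particles)) / n_particles
                 for d in range(dim)]
        for k in range(n_particles):
            for d in range(dim):
                phi = random.random()
                attractor = phi * pbest[k][d] + (1.0 - phi) * gbest[d]
                u = 1.0 - random.random()                 # keep u in (0, 1]
                delta = beta * abs(mbest[d] - x[k][d]) * math.log(1.0 / u)
                x[k][d] = attractor + delta if random.random() < 0.5 else attractor - delta
                x[k][d] = min(max(x[k][d], bounds[d][0]), bounds[d][1])
            f = objective(x[k])
            if f < pbest_f[k]:
                pbest[k], pbest_f[k] = x[k][:], f
                if f < gbest_f:
                    gbest, gbest_f = x[k][:], f
    return gbest, gbest_f


# Toy objective over 5 "section index modulations" (illustrative only).
print(qpso(lambda v: sum((vi - 3e-4) ** 2 for vi in v), bounds=[(0.0, 1e-3)] * 5))
```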

  5. Rapid methods for radionuclide contaminant transport in nuclear fuel cycle simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huff, Kathryn

    Here, nuclear fuel cycle and nuclear waste disposal decisions are technologically coupled. However, current nuclear fuel cycle simulators lack dynamic repository performance analysis due to the computational burden of high-fidelity hydrologic contaminant transport models. The Cyder disposal environment and repository module was developed to fill this gap. It implements medium-fidelity hydrologic radionuclide transport models to support assessment appropriate for fuel cycle simulation in the Cyclus fuel cycle simulator. Rapid modeling of hundreds of discrete waste packages in a geologic environment is enabled within this module by a suite of four closed form models for advective, dispersive, coupled, and idealized contaminant transport: a Degradation Rate model, a Mixed Cell model, a Lumped Parameter model, and a 1-D Permeable Porous Medium model. A summary of the Cyder module, its timestepping algorithm, and the mathematical models implemented within it are presented. Additionally, parametric demonstration simulations performed with Cyder are presented and shown to demonstrate functional agreement with parametric simulations conducted in a standalone hydrologic transport model, the Clay Generic Disposal System Model developed by the Used Fuel Disposition Campaign of the Department of Energy Office of Nuclear Energy.

  6. Rapid methods for radionuclide contaminant transport in nuclear fuel cycle simulation

    DOE PAGES

    Huff, Kathryn

    2017-08-01

    Here, nuclear fuel cycle and nuclear waste disposal decisions are technologically coupled. However, current nuclear fuel cycle simulators lack dynamic repository performance analysis due to the computational burden of high-fidelity hydrologic contaminant transport models. The Cyder disposal environment and repository module was developed to fill this gap. It implements medium-fidelity hydrologic radionuclide transport models to support assessment appropriate for fuel cycle simulation in the Cyclus fuel cycle simulator. Rapid modeling of hundreds of discrete waste packages in a geologic environment is enabled within this module by a suite of four closed form models for advective, dispersive, coupled, and idealized contaminant transport: a Degradation Rate model, a Mixed Cell model, a Lumped Parameter model, and a 1-D Permeable Porous Medium model. A summary of the Cyder module, its timestepping algorithm, and the mathematical models implemented within it are presented. Additionally, parametric demonstration simulations performed with Cyder are presented and shown to demonstrate functional agreement with parametric simulations conducted in a standalone hydrologic transport model, the Clay Generic Disposal System Model developed by the Used Fuel Disposition Campaign of the Department of Energy Office of Nuclear Energy.

  7. Single-pixel imaging based on compressive sensing with spectral-domain optical mixing

    NASA Astrophysics Data System (ADS)

    Zhu, Zhijing; Chi, Hao; Jin, Tao; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin

    2017-11-01

    In this letter a single-pixel imaging structure is proposed based on compressive sensing using a spatial light modulator (SLM)-based spectrum shaper. In the approach, an SLM-based spectrum shaper, the pattern of which is a predetermined pseudorandom bit sequence (PRBS), spectrally codes the optical pulse carrying image information. The energy of the spectrally mixed pulse is detected by a single-pixel photodiode and the measurement results are used to reconstruct the image via a sparse recovery algorithm. As the mixing of the image signal and the PRBS is performed in the spectral domain, optical pulse stretching, modulation, compression and synchronization in the time domain are avoided. Experiments are implemented to verify the feasibility of the approach.
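
    The recovery step behind such single-pixel measurements can be illustrated with a small compressive-sensing example: pseudorandom binary patterns produce measurements y = A x, and a sparse scene is recovered by iterative soft thresholding (ISTA). The pattern design, problem size, and recovery algorithm below are assumptions for illustration and may differ from those used in the letter.

```python
import numpy as np

# Hedged sketch of single-pixel compressive sensing: PRBS-like patterns give
# y = A x, and ISTA recovers a sparse scene. All sizes and values are toys.

rng = np.random.default_rng(0)
n, m = 64, 32                                  # 64-pixel "scene", 32 measurements
x_true = np.zeros(n)
x_true[[5, 17, 40]] = [1.0, 0.6, 0.8]          # sparse scene (illustrative)

A = rng.integers(0, 2, size=(m, n)).astype(float)     # pseudorandom 0/1 patterns
A /= np.linalg.norm(A, axis=1, keepdims=True)          # row normalization
y = A @ x_true                                          # single-pixel intensities


def ista(A, y, lam=0.01, steps=500):
    """Iterative shrinkage-thresholding for min ||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - y)
        x = x - grad / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
    return x


x_hat = ista(A, y)
# Indices of the three largest recovered coefficients.
print("largest coefficients at:", np.argsort(np.abs(x_hat))[-3:])
```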

  8. Research on Image Encryption Based on DNA Sequence and Chaos Theory

    NASA Astrophysics Data System (ADS)

    Tian Zhang, Tian; Yan, Shan Jun; Gu, Cheng Yan; Ren, Ran; Liao, Kai Xin

    2018-04-01

    Nowadays encryption is a common technique to protect image data from unauthorized access. In recent years, many scientists have proposed various encryption algorithms based on DNA sequences, providing new ideas for the design of image encryption algorithms. Therefore, a new image encryption method based on DNA computing technology is proposed in this paper, in which the original image is encrypted by DNA coding and a 1-D logistic chaotic map. First, the algorithm uses two modules as the encryption key: the first module uses a real DNA sequence, and the second module is generated by the one-dimensional logistic chaotic map. Second, the algorithm uses DNA complementary rules to encode the original image, and uses the key together with DNA computing operations on each pixel value of the original image, so as to encrypt the whole image. Simulation results show that the algorithm has a good encryption effect and security.
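
    Two of the ingredients named above, a 1-D logistic-map keystream and DNA-rule encoding with base-wise arithmetic, can be sketched as follows; the particular DNA coding rule, key parameters, and pixel values are illustrative, and the paper's complete scheme (including its use of a real DNA sequence as part of the key) is not reproduced.

```python
# Hedged sketch of a logistic-map keystream plus DNA-base mixing. The coding
# rule, map parameters, and pixels below are illustrative placeholders.

DNA_ENCODE = {0: "A", 1: "C", 2: "G", 3: "T"}      # one of the valid coding rules
DNA_DECODE = {v: k for k, v in DNA_ENCODE.items()}


def logistic_keystream(x0, mu, length):
    """Generate 'length' key bytes from the logistic map x <- mu*x*(1-x)."""
    x, out = x0, []
    for _ in range(length):
        x = mu * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out


def to_dna(byte):
    """Encode one byte as four DNA bases (2 bits per base)."""
    return [DNA_ENCODE[(byte >> shift) & 3] for shift in (6, 4, 2, 0)]


def dna_add(b1, b2):
    """Base-wise modular addition used to mix pixel and key bases."""
    return DNA_ENCODE[(DNA_DECODE[b1] + DNA_DECODE[b2]) % 4]


pixels = [52, 200, 17, 99]                          # illustrative grayscale values
key = logistic_keystream(x0=0.4123, mu=3.9999, length=len(pixels))
cipher = ["".join(dna_add(p, k) for p, k in zip(to_dna(px), to_dna(kb)))
          for px, kb in zip(pixels, key)]
print(cipher)
```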

  9. An ATR architecture for algorithm development and testing

    NASA Astrophysics Data System (ADS)

    Breivik, Gøril M.; Løkken, Kristin H.; Brattli, Alvin; Palm, Hans C.; Haavardsholm, Trym

    2013-05-01

    A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.

  10. Energy- and Intensity-Modulated Electron Beam for Breast Cancer Treatment

    DTIC Science & Technology

    1999-10-01

    calculations," in Teletherapy: Present and Future, Ed. By T.R. Mackie and J.R. Palta (Advanced Medical Publishing, Madison WI) Mackie TR, Reckwerdt PJ...edited by T. R. Mackie and J. R. Palta from 10% to 20% (or a 5-20 mm shift in the isodose lines) (Advanced Medical Publishing, Madison, WI, 1996). to...Ayyangar K, Palta J R, Sweet J W and Suntharalingam N 1993 Experimental verification of a three-dimensional dose calculation algorithm using a specially

  11. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.

  12. Designing a range modulator wheel to spread-out the Bragg peak for a passive proton therapy facility

    NASA Astrophysics Data System (ADS)

    Jia, S. Bijan; Romano, F.; Cirrone, Giuseppe A. P.; Cuttone, G.; Hadizadeh, M. H.; Mowlavi, A. A.; Raffaele, L.

    2016-01-01

    In proton beam therapy, a Spread-Out Bragg peak (SOBP) is used to establish a uniform dose distribution in the target volume. In order to create a SOBP, several Bragg peaks of different ranges, corresponding to different entrance energies, must be combined with each other with certain intensities (weights). In a passive beam scattering system, the beam is usually extracted from a cyclotron at a constant energy throughout a treatment. Therefore, a SOBP is produced by a range modulator wheel, which is basically a rotating wheel with steps of variable thicknesses, or by using ridge filters. In this study, we used the Geant4 toolkit to simulate a typical passive scattering beam line. In particular, the CATANA transport beam line of INFN Laboratori Nazionali del Sud (LNS) in Catania has been reproduced in this work. Some initial properties of the entrance beam have been checked by benchmarking simulations with experimental data. A class dedicated to the simulation of the wheel modulators has been implemented. It has been designed in order to be easily modified for simulating any desired modulator wheel and, hence, any suitable beam modulation. By using some auxiliary range-shifters, a set of pristine Bragg peaks was obtained from the simulations. A mathematical algorithm was developed, using the simulated pristine dose profiles as its input, to calculate the weight of each pristine peak, reproduce the SOBP, and finally generate a flat dose distribution. Finally, once the designed modulator had been fabricated, it was tested at the CATANA facility, and the experimental data were compared with the simulation results.
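
    The weight-finding step can be posed as a non-negative least-squares problem: given pristine-peak dose profiles D (one column per modulator step) and a flat target over the SOBP region, solve for weights w >= 0 with D w close to the target. The sketch below uses synthetic Gaussian-like pristine peaks in place of the Geant4 profiles and an arbitrary SOBP interval, so it illustrates the idea rather than the paper's exact algorithm.

```python
import numpy as np
from scipy.optimize import nnls

# Hedged sketch of SOBP weight optimization with synthetic pristine peaks.

depths = np.linspace(0.0, 30.0, 301)                      # mm, illustrative grid


def pristine_peak(range_mm):
    """Crude analytic stand-in for a pristine Bragg curve with the given range."""
    plateau = 0.3 + 0.02 * depths
    peak = np.exp(-((depths - range_mm) ** 2) / (2 * 1.2 ** 2))
    return np.where(depths <= range_mm + 2.0, plateau + peak, 0.0)


ranges = np.arange(18.0, 28.1, 1.0)                       # one peak per modulator step
D = np.stack([pristine_peak(r) for r in ranges], axis=1)  # columns = pristine peaks

# Ask for unit dose inside the intended SOBP (20-27 mm) and solve D w ~= target.
sobp_mask = (depths >= 20.0) & (depths <= 27.0)
weights, _ = nnls(D[sobp_mask, :], np.ones(sobp_mask.sum()))

sobp = D @ weights
flatness = sobp[sobp_mask].std() / sobp[sobp_mask].mean()
print("per-step weights:", np.round(weights, 3), " SOBP flatness:", round(flatness, 4))
```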

  13. Decoupled Modulation Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Shaobu; Huang, Renke; Huang, Zhenyu

    The objective of this research work is to develop decoupled modulation control methods for damping inter-area oscillations with low frequencies, so the damping control can be more effective and easier to design with less interference among different oscillation modes in the power system. A signal-decoupling algorithm was developed that can enable separation of multiple oscillation frequency contents and extraction of a "pure" oscillation frequency mode that are fed into Power System Stabilizers (PSSs) as the modulation input signals. As a result, instead of introducing interferences between different oscillation modes from the traditional approaches, the output of the new PSS modulation control signal mainly affects only one oscillation mode of interest. The new decoupled modulation damping control algorithm has been successfully developed and tested on the standard IEEE 4-machine 2-area test system and a minniWECC system. The results are compared against traditional modulation controls, which demonstrates the validity and effectiveness of the newly-developed decoupled modulation damping control algorithm.

  14. HDL Based FPGA Interface Library for Data Acquisition and Multipurpose Real Time Algorithms

    NASA Astrophysics Data System (ADS)

    Fernandes, Ana M.; Pereira, R. C.; Sousa, J.; Batista, A. J. N.; Combo, A.; Carvalho, B. B.; Correia, C. M. B. A.; Varandas, C. A. F.

    2011-08-01

    The inherent parallelism of the logic resources, the flexibility in its configuration and the performance at high processing frequencies makes the field programmable gate array (FPGA) the most suitable device to be used both for real time algorithm processing and data transfer in instrumentation modules. Moreover, the reconfigurability of these FPGA based modules enables exploiting different applications on the same module. When using a reconfigurable module for various applications, the availability of a common interface library for easier implementation of the algorithms on the FPGA leads to more efficient development. The FPGA configuration is usually specified in a hardware description language (HDL) or other higher level descriptive language. The critical paths, such as the management of internal hardware clocks that require deep knowledge of the module behavior shall be implemented in HDL to optimize the timing constraints. The common interface library should include these critical paths, freeing the application designer from hardware complexity and able to choose any of the available high-level abstraction languages for the algorithm implementation. With this purpose a modular Verilog code was developed for the Virtex 4 FPGA of the in-house Transient Recorder and Processor (TRP) hardware module, based on the Advanced Telecommunications Computing Architecture (ATCA), with eight channels sampling at up to 400 MSamples/s (MSPS). The TRP was designed to perform real time Pulse Height Analysis (PHA), Pulse Shape Discrimination (PSD) and Pile-Up Rejection (PUR) algorithms at a high count rate (few Mevent/s). A brief description of this modular code is presented and examples of its use as an interface with end user algorithms, including a PHA with PUR, are described.

  15. Semantic integration to identify overlapping functional modules in protein interaction networks

    PubMed Central

    Cho, Young-Rae; Hwang, Woochang; Ramanathan, Murali; Zhang, Aidong

    2007-01-01

    Background The systematic analysis of protein-protein interactions can enable a better understanding of cellular organization, processes and functions. Functional modules can be identified from the protein interaction networks derived from experimental data sets. However, these analyses are challenging because of the presence of unreliable interactions and the complex connectivity of the network. The integration of protein-protein interactions with the data from other sources can be leveraged for improving the effectiveness of functional module detection algorithms. Results We have developed novel metrics, called semantic similarity and semantic interactivity, which use Gene Ontology (GO) annotations to measure the reliability of protein-protein interactions. The protein interaction networks can be converted into a weighted graph representation by assigning the reliability values to each interaction as a weight. We presented a flow-based modularization algorithm to efficiently identify overlapping modules in the weighted interaction networks. The experimental results show that the semantic similarity and semantic interactivity of interacting pairs were positively correlated with functional co-occurrence. The effectiveness of the algorithm for identifying modules was evaluated using functional categories from the MIPS database. We demonstrated that our algorithm had higher accuracy compared to other competing approaches. Conclusion The integration of protein interaction networks with GO annotation data and the capability of detecting overlapping modules substantially improve the accuracy of module identification. PMID:17650343

  16. Design of a modulated orthovoltage stereotactic radiosurgery system.

    PubMed

    Fagerstrom, Jessica M; Bender, Edward T; Lawless, Michael J; Culberson, Wesley S

    2017-07-01

    To achieve stereotactic radiosurgery (SRS) dose distributions with sharp gradients using orthovoltage energy fluence modulation with inverse planning optimization techniques. A pencil beam model was used to calculate dose distributions from an orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods. A Genetic Algorithm search heuristic was used to optimize the spatial distribution of added tungsten filtration to achieve dose distributions with sharp dose gradients. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 5, 6, 8, and 10 mm. In addition to the beam profiles, 4π isocentric irradiation geometries were modeled to examine dose at 0.07 mm depth, a representative skin depth, for the low energy beams. Profiles from 4π irradiations of a constant target volume, assuming maximally conformal coverage, were compared. Finally, dose deposition in bone compared to tissue in this energy range was examined. Based on the results of the optimization, circularly symmetric tungsten filters were designed to modulate the orthovoltage beam across the apertures of SRS cone collimators. For each depth and cone size combination examined, the beam flatness and 80-20% and 90-10% penumbrae were calculated for both standard, open cone-collimated beams as well as for optimized, filtered beams. For all configurations tested, the modulated beam profiles had decreased penumbra widths and flatness statistics at depth. Profiles for the optimized, filtered orthovoltage beams also offered decreases in these metrics compared to measured linear accelerator cone-based SRS profiles. The dose at 0.07 mm depth in the 4π isocentric irradiation geometries was higher for the modulated beams compared to unmodulated beams; however, the modulated dose at 0.07 mm depth remained <0.025% of the central, maximum dose. The 4π profiles irradiating a constant target volume showed improved statistics for the modulated, filtered distribution compared to the standard, open cone-collimated distribution. Simulations of tissue and bone confirmed previously published results that a higher energy beam (≥ 200 keV) would be preferable, but the 250 kVp beam was chosen for this work because it is available for future measurements. A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions with decreased flatness and penumbra statistics compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system. © 2017 American Association of Physicists in Medicine.

  17. Improvement of energy efficiency via spectrum optimization of excitation sequence for multichannel simultaneously triggered airborne sonar system

    NASA Astrophysics Data System (ADS)

    Meng, Qing-Hao; Yao, Zhen-Jing; Peng, Han-Yang

    2009-12-01

    Both the energy efficiency and the correlation characteristics are important in airborne sonar systems for realizing multichannel ultrasonic transducers working together. High energy efficiency can increase echo energy and measurement range, and sharp autocorrelation and flat cross correlation can help eliminate cross-talk among multichannel transducers. This paper addresses energy efficiency optimization under the premise that cross-talk between different sonar transducers can be avoided. The nondominated sorting genetic algorithm-II is applied to optimize both the spectrum and the correlation characteristics of the excitation sequence. The central idea of the spectrum optimization is to distribute most of the energy of the excitation sequence within the frequency band of the sonar transducer; thus, less energy is filtered out by the transducers. Real experiments show that a sonar system consisting of eight-channel Polaroid 600 series electrostatic transducers excited with 2 ms optimized pulse-position-modulation sequences can work together without cross-talk and can measure distances up to 650 cm with a maximum relative error of 1%.
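
    The spectrum-optimization objective described above can be written down directly as the fraction of the excitation sequence's energy falling inside the transducer passband. The sampling rate, passband, and the toy gated-burst sequence below are illustrative assumptions, not the paper's optimized pulse-position-modulation codes.

      import numpy as np

      fs = 400e3                      # assumed sampling rate (Hz)
      band = (40e3, 60e3)             # assumed transducer passband (Hz)

      def in_band_energy_fraction(seq, fs, band):
          spectrum = np.fft.rfft(seq)
          freqs = np.fft.rfftfreq(len(seq), d=1.0 / fs)
          power = np.abs(spectrum) ** 2
          mask = (freqs >= band[0]) & (freqs <= band[1])
          return power[mask].sum() / power.sum()

      # Toy excitation: a 50 kHz burst gated by a hypothetical on/off pattern, 2 ms long
      t = np.arange(0, 2e-3, 1.0 / fs)
      gate = (np.floor(t * 4e3).astype(int) % 2 == 0)
      seq = np.sin(2 * np.pi * 50e3 * t) * gate
      print(f"in-band energy fraction: {in_band_energy_fraction(seq, fs, band):.2f}")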

  18. Design of Intelligent Hydraulic Excavator Control System Based on PID Method

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Jiao, Shengjie; Liao, Xiaoming; Yin, Penglong; Wang, Yulin; Si, Kuimao; Zhang, Yi; Gu, Hairong

    Most domestically designed hydraulic excavators adopt the constant-power design method and set 85%~90% of engine power as the power absorbed by the hydraulic system, which causes high energy loss due to power mismatch between the engine and the pump. Because variation in engine rotational speed reflects shifts in load power, it provides a new means of adjusting the power matching between engine and pump through engine speed. Based on a negative-flux hydraulic system, an intelligent hydraulic excavator control system was designed using the rotational speed sensing method to improve energy efficiency. The control system consisted of an engine control module, a pump power adjustment module, an engine idle module and a system fault diagnosis module. A special PLC with a CAN bus was used to acquire the sensor signals and adjust the pump absorption power according to load variation. Four energy-saving control strategies combined with the constant power method were employed to improve fuel utilization. Three power modes (H, S and L) were designed to meet different working conditions; an auto idle function was employed to save energy through two pressure switches that detect the working status, with 1300 rpm set as the idle speed according to the engine fuel consumption curve. A transient overload function was designed for short periods of deep digging without extra fuel consumption. An incremental PID method was employed to realize power matching between engine and pump: the variation in rotational speed was taken as the PID algorithm's input, and the current of the proportional valve of the variable displacement pump was its output. The results indicated that auto idle could decrease fuel consumption by 33.33% compared to working at maximum speed in H mode, and that the PID control method could make full use of the maximum engine power in each power mode while keeping the engine speed within a stable range. The rotational speed sensing method thus provides a reliable way to improve the excavator's energy efficiency and realize power matching between pump and engine.
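
    A minimal sketch of the incremental PID idea described above: the deviation of engine speed from its set point drives a correction to the proportional valve current that sets pump displacement. The gains, current limits, and the crude first-order plant model are illustrative assumptions, not the excavator controller's actual parameters or sign conventions.

      class IncrementalPID:
          def __init__(self, kp, ki, kd, u0, u_min, u_max):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.u_min, self.u_max = u_min, u_max
              self.u = u0                 # proportional valve current (mA)
              self.e1 = self.e2 = 0.0     # previous two speed errors

          def update(self, speed_setpoint, speed_measured):
              e = speed_setpoint - speed_measured
              # incremental form: the output change depends only on the last three errors
              du = (self.kp * (e - self.e1)
                    + self.ki * e
                    + self.kd * (e - 2 * self.e1 + self.e2))
              self.u = min(max(self.u + du, self.u_min), self.u_max)
              self.e2, self.e1 = self.e1, e
              return self.u

      # toy closed loop with a hypothetical first-order engine-speed response
      pid = IncrementalPID(kp=0.8, ki=0.3, kd=0.05, u0=400.0, u_min=200.0, u_max=600.0)
      speed = 1700.0
      for _ in range(50):
          u = pid.update(speed_setpoint=1800.0, speed_measured=speed)
          speed += 0.2 * (1800.0 + 0.1 * (u - 400.0) - speed)
      print(round(speed), round(u))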

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran, C; Kamal, H

    Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We present a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphical Processing Unit) cluster. The multicriteria optimization algorithm's running time benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results for a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.

  20. Direct position determination for digital modulation signals based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding

    2018-04-01

    The Direct Position Determination (DPD) algorithm has been demonstrated to achieve better accuracy when the signal waveforms are known. However, the signal waveform is difficult to know completely in the actual positioning process. To solve this problem, we propose a DPD method for digital modulation signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function based on symbol estimation is obtained. Second, as the optimization of the cost function is a nonlinear integer optimization problem, an improved Particle Swarm Optimization (PSO) algorithm is adopted for the optimal symbol search. Simulations are carried out to show the higher positioning accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm can take full advantage of the signal features to improve the positioning accuracy. On the other hand, the improved PSO algorithm improves the efficiency of the symbol search by nearly one hundred times to achieve a globally optimal solution.
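
    A generic particle swarm optimization loop in the spirit of the symbol search described above is sketched below. The toy cost function (distance of rounded "symbols" to a hidden sequence) stands in for the DPD cost built from the array data, and the inertia weight and acceleration coefficients are the usual textbook choices rather than the paper's tuned values.

      import numpy as np

      rng = np.random.default_rng(1)

      def pso(cost, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, lo=-4.0, hi=4.0):
          x = rng.uniform(lo, hi, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
          gbest = pbest[pbest_val.argmin()].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
              x = np.clip(x + v, lo, hi)
              vals = np.array([cost(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              gbest = pbest[pbest_val.argmin()].copy()
          return gbest, pbest_val.min()

      target = np.array([1, -1, 1, 1, -1, -1])          # hidden "symbol" sequence (toy)
      best, val = pso(lambda s: np.sum((np.round(s) - target) ** 2), dim=target.size)
      print(np.round(best), val)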

  1. Transitioning the California Energy Commission Eligible Equipment List to a National Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Truitt, Sarah; Nobler, Erin; Krasko, Vitaliy

    The Energy Commission called on the National Renewable Energy Laboratory's (NREL) Solar Technical Assistance Team to explore various pathways for supporting continued evolution of the list. NREL staff utilized the Database of State Incentives for Renewables and Efficiency (DSIRE), California Solar Initiative (CSI) data, and information from in-depth interviews to better understand the impact of lacking an updated list and to suggest potential solutions. A total of 18 people from state energy offices, rebate program administrators, utilities, national testing laboratories, private companies, nonprofit organizations, and the federal government were interviewed between July and September 2013. CSI data were analyzed to illustrate the monetary benefits of the algorithm behind calculating the performance of PV modules included on the list. The primary objectives of this study are to: 1) determine the impact of not maintaining the list, and 2) explore alternatives to the State of California's maintenance of the list.

  2. Dosimetric Evaluation of Metal Artefact Reduction using Metal Artefact Reduction (MAR) Algorithm and Dual-energy Computed Tomography (CT) Method

    NASA Astrophysics Data System (ADS)

    Laguda, Edcer Jerecho

    Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast acquisition imaging device with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU). High HU-valued materials represent higher density. High density materials, such as metal, tend to erroneously increase the HU values around it due to reconstruction software limitations. This problem of increased HU values due to metal presence is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts for better image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts on CT images. This study uses Gemstone Spectral Imaging (GSI)-based MAR algorithm, projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method. Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc plans and single-arc plans, using the Volumetric Modulated Arc therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used projection-based MAR Algorithm and the Dual-Energy Method. Calculated Doses (Mean, Minimum, and Maximum Doses) to the planning treatment volume (PTV) were compared and homogeneity index (HI) calculated. Results: (1) Without the GSI-based MAR application, a percent error between mean dose and the absolute dose ranging from 3.4-5.7% per fraction was observed. In contrast, the error was decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm. There was a percent difference ranging from 1.7-4.2% per fraction between with and without using the GSI-based MAR algorithm. (2) A range of 0.1-3.2% difference was observed for the maximum dose values, 1.5-10.4% for minimum dose difference, and 1.4-1.7% difference on the mean doses. Homogeneity indexes (HI) ranging from 0.068-0.065 for dual-energy method and 0.063-0.141 with projection-based MAR algorithm were also calculated. Conclusion: (1) Percent error without using the GSI-based MAR algorithm may deviate as high as 5.7%. This error invalidates the goal of Radiation Therapy to provide a more precise treatment. 
Thus, GSI-based MAR algorithm was desirable due to its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of different techniques but deviation was evident on the maximum and minimum doses. The HI for the dual-energy method almost achieved the desirable null values. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than with or without GE MAR Algorithm.
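
    The two figures of merit quoted in this record can be reproduced with a couple of one-line helpers. The percent error follows the usual |calculated - measured| / measured definition, and the homogeneity index is taken here as (Dmax - Dmin) / Dmean, one common form; the study's exact HI definition is not stated in the abstract, and the numbers below are illustrative only.

      def percent_error(calculated, measured):
          return abs(calculated - measured) / measured * 100.0

      def homogeneity_index(d_max, d_min, d_mean):
          return (d_max - d_min) / d_mean

      print(f"{percent_error(2.07, 2.00):.1f}%")               # ~3.5% per-fraction dose error
      print(f"HI = {homogeneity_index(52.5, 49.1, 50.6):.3f}")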

  3. Statistical process control using optimized neural networks: a case study.

    PubMed

    Addeh, Jalil; Ebrahimzadeh, Ata; Azarbad, Milad; Ranaee, Vahid

    2014-09-01

    The most common statistical process control (SPC) tools employed for monitoring process changes are control charts. A control chart indicates that the process has changed by generating an out-of-control signal. This study investigates the design of an accurate system for control chart pattern (CCP) recognition in two respects. First, an efficient system is introduced that includes two main modules: a feature extraction module and a classifier module. In the feature extraction module, a proper set of shape features and statistical features is proposed as efficient characteristics of the patterns. In the classifier module, several neural networks, such as the multilayer perceptron, probabilistic neural network and radial basis function network, are investigated. Based on an experimental study, the best classifier is chosen in order to recognize the CCPs. Second, a hybrid heuristic recognition system based on the cuckoo optimization algorithm (COA) is introduced to improve the generalization performance of the classifier. The simulation results show that the proposed algorithm has high recognition accuracy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Coding and decoding for code division multiple user communication systems

    NASA Technical Reports Server (NTRS)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.

  5. Digital pulse shape discrimination methods for n-γ separation in an EJ-301 liquid scintillation detector

    NASA Astrophysics Data System (ADS)

    Wan, Bo; Zhang, Xue-Ying; Chen, Liang; Ge, Hong-Lin; Ma, Fei; Zhang, Hong-Bin; Ju, Yong-Qin; Zhang, Yan-Bin; Li, Yan-Yan; Xu, Xiao-Wei

    2015-11-01

    A digital pulse shape discrimination system based on a programmable module NI-5772 has been established and tested with an EJ-301 liquid scintillation detector. The module was operated by running programs developed in LabVIEW, with a sampling frequency up to 1.6 GS/s. Standard gamma sources 22Na, 137Cs and 60Co were used to calibrate the EJ-301 liquid scintillation detector, and the gamma response function was obtained. Digital algorithms for the charge comparison method and zero-crossing method have been developed. The experimental results show that both digital signal processing (DSP) algorithms can discriminate neutrons from γ-rays. Moreover, the zero-crossing method shows better n-γ discrimination at 80 keVee and lower, whereas the charge comparison method gives better results at higher thresholds. In addition, the figures-of-merit (FOM) for detectors of two different dimensions were extracted at 9 energy thresholds, and it was found that the smaller detector presented better n-γ separation for fission neutrons. Supported by National Natural Science Foundation of China (91226107, 11305229) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03030300)
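
    A sketch of the digital charge comparison method mentioned above: the discrimination parameter is the ratio of the delayed (tail) integral to the total integral of the digitized pulse, and neutron pulses carry a larger slow scintillation component. The gate positions and the synthetic two-exponential pulses are illustrative assumptions.

      import numpy as np

      def charge_comparison_psd(pulse, t_start, t_tail, t_stop):
          # PSD = tail charge / total charge over fixed integration gates (in samples)
          total = pulse[t_start:t_stop].sum()
          tail = pulse[t_tail:t_stop].sum()
          return tail / total

      t = np.arange(0, 300)                    # sample index (e.g. 0.625 ns/sample at 1.6 GS/s)
      fast, slow = np.exp(-t / 7.0), np.exp(-t / 100.0)
      gamma_pulse = fast + 0.02 * slow         # weak slow component
      neutron_pulse = fast + 0.10 * slow       # stronger slow component
      for name, pulse in [("gamma", gamma_pulse), ("neutron", neutron_pulse)]:
          print(name, round(charge_comparison_psd(pulse, 0, 30, 300), 3))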

  6. TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fagerstrom, J; Culberson, W; Bender, E

    2016-06-15

    Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution’s orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic routine was used to optimize added tungsten filtration thicknesses to approach rectangular function dose distributions at depth. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly-symmetric tungsten filters were designed based on the results of the optimization, to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80–20% and 90–10% penumbrae were calculated for both standard, open cone-collimated beams as well as for the optimized, filtered beams. For all configurations tested, the modulated beams were able to achieve improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52%, and penumbrae improving between 18 and 25% for the modulated beams compared to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.

  7. A parameter estimation algorithm for LFM/BPSK hybrid modulated signal intercepted by Nyquist folding receiver

    NASA Astrophysics Data System (ADS)

    Qiu, Zhaoyang; Wang, Pei; Zhu, Jun; Tang, Bin

    2016-12-01

    The Nyquist folding receiver (NYFR) is a novel ultra-wideband receiver architecture that can realize wideband reception with a small amount of equipment. The linear frequency modulated/binary phase shift keying (LFM/BPSK) hybrid modulated signal is a novel kind of low probability of interception signal with wide bandwidth. The NYFR is an effective architecture for intercepting the LFM/BPSK signal, and the LFM/BPSK signal intercepted by the NYFR acquires an additional local oscillator modulation. A parameter estimation algorithm for the NYFR output signal is proposed. Using the NYFR prior information, the chirp singular value ratio spectrum is proposed to estimate the chirp rate. Then, based on the output self-characteristic, a matching component function is designed to estimate the Nyquist zone (NZ) index. Finally, a matching code and a subspace method are employed to estimate the phase change points and the code length. Compared with the existing methods, the proposed algorithm has better performance. It also does not need a multi-channel structure, which means the computational complexity of the NZ index estimation is small. The simulation results demonstrate the efficacy of the proposed algorithm.

  8. Towards multifocal ultrasonic neural stimulation: pattern generation algorithms

    NASA Astrophysics Data System (ADS)

    Hertzberg, Yoni; Naor, Omer; Volovick, Alexander; Shoham, Shy

    2010-10-01

    Focused ultrasound (FUS) waves directed onto neural structures have been shown to dynamically modulate neural activity and excitability, opening up a range of possible systems and applications where the non-invasiveness, safety, mm-range resolution and other characteristics of FUS are advantageous. As in other neuro-stimulation and modulation modalities, the highly distributed and parallel nature of neural systems and neural information processing call for the development of appropriately patterned stimulation strategies which could simultaneously address multiple sites in flexible patterns. Here, we study the generation of sparse multi-focal ultrasonic distributions using phase-only modulation in ultrasonic phased arrays. We analyse the relative performance of an existing algorithm for generating multifocal ultrasonic distributions and new algorithms that we adapt from the field of optical digital holography, and find that generally the weighted Gerchberg-Saxton algorithm leads to overall superior efficiency and uniformity in the focal spots, without significantly increasing the computational burden. By combining phased-array FUS and magnetic-resonance thermometry we experimentally demonstrate the simultaneous generation of tightly focused multifocal distributions in a tissue phantom, a first step towards patterned FUS neuro-modulation systems and devices.
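
    The weighted Gerchberg-Saxton iteration referred to above can be sketched for a phase-only array and a handful of focal targets. The free-field point-source propagation operator, element layout, frequency, and focal positions below are illustrative assumptions, not the experimental phased-array geometry.

      import numpy as np

      rng = np.random.default_rng(2)
      c, f = 1500.0, 1.0e6                     # assumed sound speed (m/s) and frequency (Hz)
      k0 = 2 * np.pi * f / c

      elements = np.stack([np.linspace(-0.02, 0.02, 64), np.zeros(64), np.zeros(64)], axis=1)
      foci = np.array([[-0.004, 0.0, 0.05], [0.004, 0.0, 0.05], [0.0, 0.004, 0.05]])
      d = np.linalg.norm(foci[None, :, :] - elements[:, None, :], axis=2)
      H = np.exp(1j * k0 * d) / d              # element-to-focus propagation (64 x 3)

      def weighted_gs(H, n_iter=100):
          phases = rng.uniform(0, 2 * np.pi, H.shape[0])
          w = np.ones(H.shape[1])
          for _ in range(n_iter):
              field = H.T @ np.exp(1j * phases)            # forward propagate to the foci
              amps = np.abs(field)
              w *= amps.mean() / amps                      # boost the weaker foci
              target = w * np.exp(1j * np.angle(field))    # keep phase, reweight amplitude
              phases = np.angle(H.conj() @ target)         # back-propagate, keep phase only
          return phases

      phases = weighted_gs(H)
      print(np.abs(H.T @ np.exp(1j * phases)).round(3))    # focal amplitudes after optimization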

  9. Underwater acoustic wireless sensor networks: advances and future trends in physical, MAC and routing layers.

    PubMed

    Climent, Salvador; Sanchez, Antonio; Capella, Juan Vicente; Meratnia, Nirvana; Serrano, Juan Jose

    2014-01-06

    This survey aims to provide a comprehensive overview of the current research on underwater wireless sensor networks, focusing on the lower layers of the communication stack, and envisions future trends and challenges. It analyzes the current state-of-the-art on the physical, medium access control and routing layers. It summarizes their security threats and surveys the currently proposed studies. Current envisioned niches for further advances in underwater networks research range from efficient, low-power algorithms and modulations to intelligent, energy-aware routing and medium access control protocols.

  10. HEPMath 1.4: A mathematica package for semi-automatic computations in high energy physics

    NASA Astrophysics Data System (ADS)

    Wiebusch, Martin

    2015-10-01

    This article introduces the Mathematica package HEPMath which provides a number of utilities and algorithms for High Energy Physics computations in Mathematica. Its functionality is similar to packages like FormCalc or FeynCalc, but it takes a more complete and extensible approach to implementing common High Energy Physics notations in the Mathematica language, in particular those related to tensors and index contractions. It also provides a more flexible method for the generation of numerical code which is based on new features for C code generation in Mathematica. In particular it can automatically generate Python extension modules which make the compiled functions callable from Python, thus eliminating the need to write any code in a low-level language like C or Fortran. It also contains seamless interfaces to LHAPDF, FeynArts, and LoopTools.

  11. Pressure modulation algorithm to separate cerebral hemodynamic signals from extracerebral artifacts.

    PubMed

    Baker, Wesley B; Parthasarathy, Ashwin B; Ko, Tiffany S; Busch, David R; Abramson, Kenneth; Tzeng, Shih-Yu; Mesquita, Rickson C; Durduran, Turgut; Greenberg, Joel H; Kung, David K; Yodh, Arjun G

    2015-07-01

    We introduce and validate a pressure measurement paradigm that reduces extracerebral contamination from superficial tissues in optical monitoring of cerebral blood flow with diffuse correlation spectroscopy (DCS). The scheme determines subject-specific contributions of extracerebral and cerebral tissues to the DCS signal by utilizing probe pressure modulation to induce variations in extracerebral blood flow. For analysis, the head is modeled as a two-layer medium and is probed with long and short source-detector separations. Then a combination of pressure modulation and a modified Beer-Lambert law for flow enables experimenters to linearly relate differential DCS signals to cerebral and extracerebral blood flow variation without a priori anatomical information. We demonstrate the algorithm's ability to isolate cerebral blood flow during a finger-tapping task and during graded scalp ischemia in healthy adults. Finally, we adapt the pressure modulation algorithm to ameliorate extracerebral contamination in monitoring of cerebral blood oxygenation and blood volume by near-infrared spectroscopy.
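
    Once the pressure modulation has calibrated how strongly the short- and long-separation signals weight scalp versus brain, the modified Beer-Lambert step described above reduces each differential measurement to a small linear system. All coefficient and signal values below are placeholders, not calibrated data.

      import numpy as np

      # "flow pathlength" weights: rows = (short, long) separations,
      # columns = (extracerebral, cerebral) compartments, calibrated via pressure modulation
      W = np.array([[1.00, 0.00],    # assumption: the short separation sees scalp only
                    [0.35, 0.65]])   # assumption: the long separation mixes scalp and brain

      delta_od_flow = np.array([0.12, 0.30])      # differential DCS signals (placeholders)
      delta_flow = np.linalg.solve(W, delta_od_flow)
      print(f"scalp flow change: {delta_flow[0]:+.2f}, cerebral flow change: {delta_flow[1]:+.2f}")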

  12. Phase retrieval by coherent modulation imaging.

    PubMed

    Zhang, Fucai; Chen, Bo; Morrison, Graeme R; Vila-Comamala, Joan; Guizar-Sicairos, Manuel; Robinson, Ian K

    2016-11-18

    Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single-diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit wave. This coherent modulation imaging method removes inherent ambiguities of coherent diffraction imaging and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence and relaxes dynamic range requirements on the detector. Coherent modulation imaging provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free-electron lasers.

  13. Fringe pattern information retrieval using wavelets

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Patimo, Caterina; Manicone, Pasquale D.; Lamberti, Luciano

    2005-08-01

    Two-dimensional phase modulation is currently the basic model used in the interpretation of fringe patterns that contain displacement information: moire, holographic interferometry, and speckle techniques. Another way to look at these two-dimensional signals is to consider them as frequency-modulated signals. This alternative interpretation has practical implications similar to those that exist in radio engineering for handling frequency-modulated signals. Utilizing this model, it is possible to obtain frequency information by using the energy approach introduced by Ville in 1944. A natural complementary tool of this process is the wavelet methodology. The use of wavelets makes it possible to obtain the local values of the frequency in a one- or two-dimensional domain without the need for previous phase retrieval and differentiation. Furthermore, from the properties of wavelets it is also possible to obtain the phase of the signal at the same time, with the advantage of better noise removal capabilities and the possibility of developing simpler phase unwrapping algorithms due to the availability of the derivative of the phase.
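
    The wavelet step described above can be sketched with a hand-rolled complex Morlet transform: the magnitude ridge gives the local spatial frequency and the coefficient argument gives the phase, with no prior phase retrieval or differentiation. The wavelet parameters and the synthetic chirped fringe signal are illustrative assumptions.

      import numpy as np

      def morlet(scale, width=6.0):
          n = int(10 * scale)
          t = np.arange(-n // 2, n // 2)
          return np.exp(1j * width * t / scale) * np.exp(-0.5 * (t / scale) ** 2)

      def cwt_ridge(signal, scales):
          coeffs = np.array([np.convolve(signal, morlet(s), mode="same") / np.sqrt(s)
                             for s in scales])
          ridge = np.abs(coeffs).argmax(axis=0)             # best scale at each sample
          best = coeffs[ridge, np.arange(signal.size)]
          local_freq = 6.0 / (2 * np.pi * scales[ridge])    # cycles per sample on the ridge
          return local_freq, np.angle(best)

      x = np.arange(2048)
      fringe = np.cos(2 * np.pi * (0.02 * x + 5e-6 * x ** 2))   # chirped fringe pattern
      freq, phase = cwt_ridge(fringe, scales=np.arange(10, 80))
      print(freq[200].round(4), freq[1800].round(4))            # local frequency grows along x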

  14. Optimal signal constellation design for ultra-high-speed optical transport in the presence of nonlinear phase noise.

    PubMed

    Liu, Tao; Djordjevic, Ivan B

    2014-12-29

    In this paper, we first describe an optimal signal constellation design algorithm suitable for coherent optical channels dominated by linear phase noise. Then, we modify this algorithm to be suitable for nonlinear phase noise dominated channels. In the optimization procedure, the proposed algorithm uses the cumulative log-likelihood function instead of the Euclidean distance. Further, an LDPC coded modulation scheme is proposed for use in combination with the signal constellations obtained by the proposed algorithm. Monte Carlo simulations indicate that the LDPC-coded modulation schemes employing the new constellation sets, obtained by our new signal constellation design algorithm, outperform corresponding QAM constellations significantly in terms of transmission distance and have better nonlinearity tolerance.

  15. Energy performance and savings potentials with skylights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arasteh, D.; Johnson, R.; Selkowitz, S.

    1984-12-01

    This study systematically explores the energy effects of skylight systems in a prototypical office building module and examines the savings from daylighting. For specific climates, roof/skylight characteristics are identified that minimize total energy or peak electrical demand. Simplified techniques for energy performance calculation are also presented based on a multiple regression analysis of our data base so that one may easily evaluate daylighting's effects on total and component energy loads and electrical peaks. This provides additional insights into the influence of skylight parameters on energy consumption and electrical peaks. We use the DOE-2.1B energy analysis program with newly incorporated daylighting algorithms to determine hourly, monthly, and annual impacts of daylighting strategies on electrical lighting consumption, cooling, heating, fan power, peak electrical demands, and total energy use. A data base of more than 2000 parametric simulations for 14 US climates has been generated. Parameters varied include skylight-to-roof ratio, shading coefficient, visible transmittance, skylight well light loss, electric lighting power density, roof heat transfer coefficient, and electric lighting control type. 14 references, 13 figures, 4 tables.

  16. Optimization-based scatter estimation using primary modulation for computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao

    Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. Simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.

  17. Network-Based Disease Module Discovery by a Novel Seed Connector Algorithm with Pathobiological Implications.

    PubMed

    Wang, Rui-Sheng; Loscalzo, Joseph

    2018-05-20

    Understanding the genetic basis of complex diseases is challenging. Prior work shows that disease-related proteins do not typically function in isolation. Rather, they often interact with each other to form a network module that underlies dysfunctional mechanistic pathways. Identifying such disease modules will provide insights into a systems-level understanding of molecular mechanisms of diseases. Owing to the incompleteness of our knowledge of disease proteins and limited information on the biological mediators of pathobiological processes, the key proteins (seed proteins) for many diseases appear scattered over the human protein-protein interactome and form a few small branches, rather than coherent network modules. In this paper, we develop a network-based algorithm, called the Seed Connector algorithm (SCA), to pinpoint disease modules by adding as few additional linking proteins (seed connectors) to the seed protein pool as possible. Such seed connectors are hidden disease module elements that are critical for interpreting the functional context of disease proteins. The SCA aims to connect seed disease proteins so that disease mechanisms and pathways can be decoded based on predicted coherent network modules. We validate the algorithm using a large corpus of 70 complex diseases and binding targets of over 200 drugs, and demonstrate the biological relevance of the seed connectors. Lastly, as a specific proof of concept, we apply the SCA to a set of seed proteins for coronary artery disease derived from a meta-analysis of large-scale genome-wide association studies and obtain a coronary artery disease module enriched with important disease-related signaling pathways and drug targets not previously recognized. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. SABRE: a method for assessing the stability of gene modules in complex tissues and subject populations.

    PubMed

    Shannon, Casey P; Chen, Virginia; Takhar, Mandeep; Hollander, Zsuzsanna; Balshaw, Robert; McManus, Bruce M; Tebbutt, Scott J; Sin, Don D; Ng, Raymond T

    2016-11-14

    Gene network inference (GNI) algorithms can be used to identify sets of coordinately expressed genes, termed network modules from whole transcriptome gene expression data. The identification of such modules has become a popular approach to systems biology, with important applications in translational research. Although diverse computational and statistical approaches have been devised to identify such modules, their performance behavior is still not fully understood, particularly in complex human tissues. Given human heterogeneity, one important question is how the outputs of these computational methods are sensitive to the input sample set, or stability. A related question is how this sensitivity depends on the size of the sample set. We describe here the SABRE (Similarity Across Bootstrap RE-sampling) procedure for assessing the stability of gene network modules using a re-sampling strategy, introduce a novel criterion for identifying stable modules, and demonstrate the utility of this approach in a clinically-relevant cohort, using two different gene network module discovery algorithms. The stability of modules increased as sample size increased and stable modules were more likely to be replicated in larger sets of samples. Random modules derived from permutated gene expression data were consistently unstable, as assessed by SABRE, and provide a useful baseline value for our proposed stability criterion. Gene module sets identified by different algorithms varied with respect to their stability, as assessed by SABRE. Finally, stable modules were more readily annotated in various curated gene set databases. The SABRE procedure and proposed stability criterion may provide guidance when designing systems biology studies in complex human disease and tissues.

  19. Digital stroboscopic holographic interferometry for power flow measurements in acoustically driven membranes

    NASA Astrophysics Data System (ADS)

    Keustermans, William; Pires, Felipe; De Greef, Daniël; Vanlanduit, Steve J. A.; Dirckx, Joris J. J.

    2016-06-01

    Despite the importance of the eardrum and the ossicles in the hearing chain, it remains an open question how acoustical energy is transmitted between them. Identifying the transmission path at different frequencies could provide valuable information for middle ear surgery. In this work, a setup for stroboscopic holography is combined with an algorithm for power flow calculations. With our method we were able to accurately locate the power sources and sinks in a membrane. The setup enabled us to make amplitude maps of the out-of-plane displacement of a vibrating rubber membrane at successive instants within the vibration period. From these, the amplitude maps of the moments of force and the velocities are calculated. The magnitude and phase maps are extracted from these amplitude data and form the input for the power flow calculations. We present the algorithm used for the measurements and for the power flow calculations. Finite element models of a circular plate with a local energy source and sink allowed us to test and optimize this algorithm in a controlled way and without the presence of noise, but these are not discussed below. In the setup, an earphone was connected to a thin tube placed very close to the membrane so that sound impinges locally on the membrane, thereby acting as a local energy source. The energy sink was a small piece of foam carefully placed against the membrane. The laser pulses are fired at selected instants within the vibration period using a 30 mW HeNe continuous wave laser (red light, 632.8 nm) in combination with an acousto-optic modulator. A function generator controls the phase of these illumination pulses and the holograms are recorded using a CCD camera. We present the magnitude and phase maps as well as the power flow measurements on the rubber membrane. Calculation of the divergence of this power flow map provides a simple and fast way of identifying and locating an energy source or sink. In conclusion, possible future improvements to the setup and the power flow algorithm are discussed.
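
    The final step described above, taking the divergence of the power flow map to flag sources and sinks, amounts to a few lines on a regular grid. The synthetic potential-flow field and grid spacing below are placeholders for the measured power flow data.

      import numpy as np

      def divergence(px, py, dx, dy):
          # divergence of a 2-D vector field (px, py) sampled on a regular grid
          return np.gradient(px, dx, axis=1) + np.gradient(py, dy, axis=0)

      h = 2.0 / 199                                   # grid spacing for 200 points on [-1, 1]
      y, x = np.mgrid[-1:1:200j, -1:1:200j]
      phi = (np.exp(-((x - 0.2) ** 2 + y ** 2) / 0.02)       # "source" bump at (+0.2, 0)
             - np.exp(-((x + 0.2) ** 2 + y ** 2) / 0.02))    # "sink" dip at (-0.2, 0)
      gy, gx = np.gradient(phi, h)                    # np.gradient returns d/dy, d/dx here
      px, py = -gx, -gy                               # synthetic flow runs from source to sink
      div = divergence(px, py, h, h)
      print("positive divergence at the source:", bool(div[100, 119] > 0))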

  20. ClusterViz: A Cytoscape APP for Cluster Analysis of Biological Network.

    PubMed

    Wang, Jianxin; Zhong, Jiancheng; Chen, Gang; Li, Min; Wu, Fang-xiang; Pan, Yi

    2015-01-01

    Cluster analysis of biological networks is one of the most important approaches for identifying functional modules and predicting protein functions. Furthermore, visualization of clustering results is crucial to uncover the structure of biological networks. In this paper, ClusterViz, an APP of Cytoscape 3 for cluster analysis and visualization, has been developed. In order to reduce complexity and enable extendibility for ClusterViz, we designed its architecture based on the Open Services Gateway Initiative framework. According to this architecture, the implementation of ClusterViz is partitioned into three modules: the ClusterViz interface, the clustering algorithms, and visualization and export. ClusterViz facilitates the comparison of the results of different algorithms for further related analysis. Three commonly used clustering algorithms, FAG-EC, EAGLE and MCODE, are included in the current version. Because the clustering-algorithm module adopts an abstract algorithm interface, more clustering algorithms can be included in the future. To illustrate the usability of ClusterViz, we provide three examples with detailed steps drawn from important scientific articles, which show that our tool has helped several research teams in their work on the mechanisms of biological networks.

  1. Laboratory for Engineering Man/Machine Systems (LEMS): System identification, model reduction and deconvolution filtering using Fourier based modulating signals and high order statistics

    NASA Technical Reports Server (NTRS)

    Pan, Jianqiang

    1992-01-01

    Several important problems in the fields of signal processing and model identification are addressed, such as system structure identification, frequency response determination, high order model reduction, high resolution frequency analysis, and deconvolution filtering. Each of these topics involves a wide range of applications and has received considerable attention. Using Fourier based sinusoidal modulating signals, it is shown that a discrete autoregressive model can be constructed for the least squares identification of continuous systems. Identification algorithms are presented for frequency response determination of both SISO and MIMO systems using only transient data. Several new schemes for model reduction were also developed. Based upon complex sinusoidal modulating signals, a parametric least squares algorithm for high resolution frequency estimation is proposed. Numerical examples show that the proposed algorithm performs better than the usual methods. The problem of deconvolution and parameter identification of a general noncausal nonminimum phase ARMA system driven by non-Gaussian stationary random processes was also studied. Algorithms are introduced for inverse cumulant estimation, both in the frequency domain via FFT algorithms and in the time domain via the least squares algorithm.

  2. Adaptive MCS selection and resource planning for energy-efficient communication in LTE-M based IoT sensing platform.

    PubMed

    Dao, Nhu-Ngoc; Park, Minho; Kim, Joongheon; Cho, Sungrae

    2017-01-01

    As an important part of IoTization trends, wireless sensing technologies have been involved in many fields of human life. In cellular network evolution, the long term evolution advanced (LTE-A) networks including machine-type communication (MTC) features (named LTE-M) provide a promising infrastructure for a proliferation of Internet of things (IoT) sensing platforms. However, LTE-M may not be optimally exploited for directly supporting such low-data-rate devices in terms of energy efficiency since it depends on core technologies of LTE that are originally designed for high-data-rate services. Focusing on this circumstance, we propose a novel adaptive modulation and coding selection (AMCS) algorithm to address the energy consumption problem in the LTE-M based IoT-sensing platform. The proposed algorithm determines the optimal pair of MCS and the number of primary resource blocks (#PRBs), at which the transport block size is sufficient to packetize the sensing data within the minimum transmit power. In addition, a quantity-oriented resource planning (QORP) technique that utilizes these optimal MCS levels as main criteria for spectrum allocation has been proposed for better adapting to the sensing node requirements. The simulation results reveal that the proposed approach significantly reduces the energy consumption of IoT sensing nodes and #PRBs up to 23.09% and 25.98%, respectively.

  3. Adaptive MCS selection and resource planning for energy-efficient communication in LTE-M based IoT sensing platform

    PubMed Central

    Dao, Nhu-Ngoc; Park, Minho; Kim, Joongheon

    2017-01-01

    As an important part of IoTization trends, wireless sensing technologies have been involved in many fields of human life. In cellular network evolution, the long term evolution advanced (LTE-A) networks including machine-type communication (MTC) features (named LTE-M) provide a promising infrastructure for a proliferation of Internet of things (IoT) sensing platforms. However, LTE-M may not be optimally exploited for directly supporting such low-data-rate devices in terms of energy efficiency since it depends on core technologies of LTE that are originally designed for high-data-rate services. Focusing on this circumstance, we propose a novel adaptive modulation and coding selection (AMCS) algorithm to address the energy consumption problem in the LTE-M based IoT-sensing platform. The proposed algorithm determines the optimal pair of MCS and the number of primary resource blocks (#PRBs), at which the transport block size is sufficient to packetize the sensing data within the minimum transmit power. In addition, a quantity-oriented resource planning (QORP) technique that utilizes these optimal MCS levels as main criteria for spectrum allocation has been proposed for better adapting to the sensing node requirements. The simulation results reveal that the proposed approach significantly reduces the energy consumption of IoT sensing nodes and #PRBs up to 23.09% and 25.98%, respectively. PMID:28796804

  4. A soft decoding algorithm and hardware implementation for the visual prosthesis based on high order soft demodulation.

    PubMed

    Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei

    2016-09-26

    High order modulation and demodulation technology can reconcile the frequency requirements of wireless energy transmission and data communication. To achieve reliable wireless data communication based on high order modulation technology for a visual prosthesis, this work proposes a Reed-Solomon (RS) error correcting code (ECC) circuit built on differential amplitude and phase shift keying (DAPSK) soft demodulation. First, recognizing that the traditional division-based DAPSK soft demodulation algorithm is complex to implement in hardware, an improved phase soft demodulation algorithm for the visual prosthesis is put forward to reduce hardware complexity. Based on this new algorithm, an improved RS soft decoding method is proposed, in which the Chase algorithm is combined with hard decoding to achieve soft decoding. To meet the requirements of an implantable visual prosthesis, a method to calculate symbol-level reliability from the product of bit reliabilities is derived, which reduces the number of test vectors in the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. In the MATLAB simulation, a biological channel attenuation model is added to the ECC circuit, and the data rate is 8 Mbps in both the MATLAB simulation and the FPGA experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experiments show that when data demodulation errors occur with the wireless coils 3 cm apart, the system can correct them; the greater the distance, the higher the BER. A bit error rate analyzer was then used to measure the BER of the demodulation circuit and the RS ECC circuit at different coil distances, and the results show that the RS ECC circuit has about an order of magnitude lower BER than the demodulation circuit at the same coil distance. Therefore, the RS ECC circuit provides more reliable communication in the system. The improved phase soft demodulation algorithm and soft decoding algorithm proposed in this paper enable data communication that is more reliable than other demodulation systems, and provide a useful reference for further study of the visual prosthesis system.

  5. Distributed Generation Market Demand Model (dGen): Documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sigrin, Benjamin; Gleason, Michael; Preus, Robert

    The Distributed Generation Market Demand model (dGen) is a geospatially rich, bottom-up, market-penetration model that simulates the potential adoption of distributed energy resources (DERs) for residential, commercial, and industrial entities in the continental United States through 2050. The National Renewable Energy Laboratory (NREL) developed dGen to analyze the key factors that will affect future market demand for distributed solar, wind, storage, and other DER technologies in the United States. The new model builds off, extends, and replaces NREL's SolarDS model (Denholm et al. 2009a), which simulates the market penetration of distributed PV only. Unlike the SolarDS model, dGen can model various DER technologies under one platform--it currently can simulate the adoption of distributed solar (the dSolar module) and distributed wind (the dWind module) and link with the ReEDS capacity expansion model (Appendix C). The underlying algorithms and datasets in dGen, which improve the representation of customer decision making as well as the spatial resolution of analyses (Figure ES-1), also are improvements over SolarDS.

  6. Development of visual peak selection system based on multi-ISs normalization algorithm to apply to methamphetamine impurity profiling.

    PubMed

    Lee, Hun Joo; Han, Eunyoung; Lee, Jaesin; Chung, Heesun; Min, Sung-Gi

    2016-11-01

    The aim of this study is to improve the resolution of impurity peaks using a newly devised normalization algorithm for multi-internal standards (ISs) and to describe a visual peak selection system (VPSS) for efficient support of impurity profiling. Drug trafficking routes, location of manufacture, or synthetic route can be identified from impurities in seized drugs. In the analysis of impurities, different chromatogram profiles are obtained from gas chromatography and used to examine similarities between drug samples. The data processing method using relative retention time (RRT) calculated from a single internal standard is not preferred when many internal standards are used and many chromatographic peaks are present, because of the risk of overlap between peaks and difficulty in classifying impurities. In this study, impurities in methamphetamine (MA) were extracted by a liquid-liquid extraction (LLE) method using ethylacetate containing 4 internal standards and analyzed by gas chromatography-flame ionization detection (GC-FID). The newly developed VPSS consists of an input module, a conversion module, and a detection module. The input module imports chromatograms collected from GC and performs preprocessing; the preprocessed data are converted with a normalization algorithm in the conversion module, and finally the detection module detects the impurities in MA samples using a visualized zoning user interface. The normalization algorithm in the conversion module was used to convert the raw data from GC-FID. The VPSS with the built-in normalization algorithm can effectively detect different impurities in samples even in complex matrices and has high resolution, keeping the time sequence of chromatographic peaks the same as that of the RRT method. The system can widen a full range of chromatograms so that the peaks of impurities are better aligned for easy separation and classification. The resolution, accuracy, and speed of impurity profiling showed remarkable improvement. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. CUFID-query: accurate network querying through random walk based network flow estimation.

    PubMed

    Jeong, Hyundoo; Qian, Xiaoning; Yoon, Byung-Jun

    2017-12-28

    Functional modules in biological networks consist of numerous biomolecules and their complicated interactions. Recent studies have shown that biomolecules in a functional module tend to have similar interaction patterns and that such modules are often conserved across biological networks of different species. As a result, such conserved functional modules can be identified through comparative analysis of biological networks. In this work, we propose a novel network querying algorithm based on the CUFID (Comparative network analysis Using the steady-state network Flow to IDentify orthologous proteins) framework combined with an efficient seed-and-extension approach. The proposed algorithm, CUFID-query, can accurately detect conserved functional modules as small subnetworks in the target network that are expected to perform similar functions to the given query functional module. The CUFID framework was recently developed for probabilistic pairwise global comparison of biological networks, and it has been applied to pairwise global network alignment, where the framework was shown to yield accurate network alignment results. In the proposed CUFID-query algorithm, we adopt the CUFID framework and extend it for local network alignment, specifically to solve network querying problems. First, in the seed selection phase, the proposed method utilizes the CUFID framework to compare the query and the target networks and to predict the probabilistic node-to-node correspondence between the networks. Next, the algorithm selects and greedily extends the seed in the target network by iteratively adding nodes that have frequent interactions with other nodes in the seed network, in a way that the conductance of the extended network is maximally reduced. Finally, CUFID-query removes irrelevant nodes from the querying results based on the personalized PageRank vector for the induced network that includes the fully extended network and its neighboring nodes. Through extensive performance evaluation based on biological networks with known functional modules, we show that CUFID-query outperforms the existing state-of-the-art algorithms in terms of prediction accuracy and biological significance of the predictions.
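
    The greedy extension step described above can be sketched with networkx: starting from a seed subnetwork, repeatedly add the neighboring node that most reduces the conductance of the cut around the growing module. The toy barbell graph and seed are illustrative; the flow-based node correspondence and PageRank pruning steps of CUFID-query are not reproduced here.

      import networkx as nx

      def greedy_extend(g, seed, max_nodes=10):
          module = set(seed)
          while len(module) < max_nodes:
              frontier = {nb for n in module for nb in g.neighbors(n)} - module
              if not frontier:
                  break
              best = min(frontier, key=lambda n: nx.conductance(g, module | {n}))
              if nx.conductance(g, module | {best}) >= nx.conductance(g, module):
                  break                    # stop once the cut no longer improves
              module.add(best)
          return module

      g = nx.barbell_graph(6, 2)           # two dense cliques joined by a short path
      print(sorted(greedy_extend(g, seed={0, 1})))   # grows around the seed clique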

  8. Concept development of X-ray mass thickness detection for irradiated items upon electron beam irradiation processing

    NASA Astrophysics Data System (ADS)

    Qin, Huaili; Yang, Guang; Kuang, Shan; Wang, Qiang; Liu, Jingjing; Zhang, Xiaomin; Li, Cancan; Han, Zhiwei; Li, Yuanjing

    2018-02-01

    The present project adopts the principle and technology of X-ray imaging to quickly measure the mass thickness (where the mass thickness of the item = density of the item × thickness of the item) of irradiated items and thus to determine whether the packaging size and the location of the item inside the package meet the thickness requirements for electron beam irradiation processing. The development of the X-ray mass thickness detector algorithm, as well as the prediction of dose distribution, has been completed. The algorithm is based on X-ray attenuation. Four standard modules (Al sheet, Al ladders, PMMA sheet and PMMA ladders) were selected for the algorithm development. The algorithm was optimized until the error between the tested mass thickness and the standard mass thickness was less than 5%. Dose distributions for all energies (1-10 MeV) and each mass thickness were obtained using the Monte Carlo method and used for the analysis of dose distribution, which indicates whether the item will be penetrated, as well as the maximum dose, minimum dose and DUR of the whole item.
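
    The retrieval behind the detector algorithm described above follows directly from Beer-Lambert attenuation: for an effectively monoenergetic beam, the measured transmission gives the mass thickness once the mass attenuation coefficient is calibrated. The coefficient and intensities below are illustrative placeholders, not values calibrated from the Al/PMMA standard modules.

      import numpy as np

      def mass_thickness(i_transmitted, i_incident, mu_over_rho):
          # Beer-Lambert: I = I0 * exp(-(mu/rho) * x), solved for the mass thickness x (g/cm^2)
          return -np.log(i_transmitted / i_incident) / mu_over_rho

      mu_over_rho_pmma = 0.17     # assumed effective mass attenuation coefficient, cm^2/g
      print(f"{mass_thickness(1200.0, 2500.0, mu_over_rho_pmma):.2f} g/cm^2")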

  9. Algorithm for the classification of multi-modulating signals on the electrocardiogram.

    PubMed

    Mita, Mitsuo

    2007-03-01

    This article discusses an algorithm that measures the electrocardiogram (ECG) and respiration simultaneously and has diagnostic potential for sleep apnoea from ECG recordings. The algorithm combines three particular scale transforms, a_j(t), u_j(t) and o_j(a_j), with the statistical Fourier transform (SFT). The time and magnitude scale transforms a_j(t) and u_j(t) change the source into a periodic signal, and tau_j = o_j(a_j) confines its harmonics to a few instantaneous components, tau_j being a common instant on the two scales t and tau_j. As a result, the multi-modulating source is decomposed by the SFT and reconstructed into ECG, respiration and the other signals by the inverse transform. The algorithm is expected to recover partial ventilation and heart rate variability from the scale transforms among a_j(t), a_(j+1)(t) and u_(j+1)(t) associated with each modulation. The algorithm has a high potential for clinical checkups for the diagnosis of sleep apnoea from ECG recordings.

  10. MINE: Module Identification in Networks

    PubMed Central

    2011-01-01

    Background Graphical models of network associations are useful for both visualizing and integrating multiple types of association data. Identifying modules, or groups of functionally related gene products, is an important challenge in analyzing biological networks. However, existing tools to identify modules are insufficient when applied to dense networks of experimentally derived interaction data. To address this problem, we have developed an agglomerative clustering method that is able to identify highly modular sets of gene products within highly interconnected molecular interaction networks. Results MINE outperforms MCODE, CFinder, NEMO, SPICi, and MCL in identifying non-exclusive, high modularity clusters when applied to the C. elegans protein-protein interaction network. The algorithm generally achieves superior geometric accuracy and modularity for annotated functional categories. In comparison with the most closely related algorithm, MCODE, the top clusters identified by MINE are consistently of higher density and MINE is less likely to designate overlapping modules as a single unit. MINE offers a high level of granularity with a small number of adjustable parameters, enabling users to fine-tune cluster results for input networks with differing topological properties. Conclusions MINE was created in response to the challenge of discovering high quality modules of gene products within highly interconnected biological networks. The algorithm allows a high degree of flexibility and user-customisation of results with few adjustable parameters. MINE outperforms several popular clustering algorithms in identifying modules with high modularity and obtains good overall recall and precision of functional annotations in protein-protein interaction networks from both S. cerevisiae and C. elegans. PMID:21605434

  11. Searching for statistically significant regulatory modules.

    PubMed

    Bailey, Timothy L; Noble, William Stafford

    2003-10-01

    The regulatory machinery controlling gene expression is complex, frequently requiring multiple, simultaneous DNA-protein interactions. The rate at which a gene is transcribed may depend upon the presence or absence of a collection of transcription factors bound to the DNA near the gene. Locating transcription factor binding sites in genomic DNA is difficult because the individual sites are small and tend to occur frequently by chance. True binding sites may be identified by their tendency to occur in clusters, sometimes known as regulatory modules. We describe an algorithm for detecting occurrences of regulatory modules in genomic DNA. The algorithm, called mcast, takes as input a DNA database and a collection of binding site motifs that are known to operate in concert. mcast uses a motif-based hidden Markov model with several novel features. The model incorporates motif-specific p-values, thereby allowing scores from motifs of different widths and specificities to be compared directly. The p-value scoring also allows mcast to only accept motif occurrences with significance below a user-specified threshold, while still assigning better scores to motif occurrences with lower p-values. mcast can search long DNA sequences, modeling length distributions between motifs within a regulatory module, but ignoring length distributions between modules. The algorithm produces a list of predicted regulatory modules, ranked by E-value. We validate the algorithm using simulated data as well as real data sets from fruitfly and human. http://meme.sdsc.edu/MCAST/paper

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plautz, Tia E.; Johnson, R. P.; Sadrozinski, H. F.-W.

    Purpose: To characterize the modulation transfer function (MTF) of the pre-clinical (phase II) head scanner developed for proton computed tomography (pCT) by the pCT collaboration, and to evaluate the spatial resolution achievable by this system. Methods: Our phase II proton CT scanner prototype consists of two silicon telescopes that track individual protons upstream and downstream from a phantom, and a 5-stage scintillation detector that measures a combination of the residual energy and range of the proton. Residual energy is converted to water equivalent path length (WEPL) of the protons in the scanned object. The set of WEPL values and associated paths of protons passing through the object over a 360° angular scan is processed by an iterative parallelizable reconstruction algorithm that runs on GP-GPU hardware. A custom edge phantom composed of water-equivalent polymer and tissue-equivalent material inserts was constructed. The phantom was first simulated in Geant4 and then built to perform experimental beam tests with 200 MeV protons at the Northwestern Medicine Chicago Proton Center. The oversampling method was used to construct radial and azimuthal edge spread functions and modulation transfer functions. The spatial resolution was defined by the 10% point of the modulation transfer function in units of lp/cm. Results: The spatial resolution of the image was found to be strongly correlated with the radial position of the insert but independent of the relative stopping power of the insert. The spatial resolution varies between roughly 4 and 6 lp/cm in both the radial and azimuthal directions depending on the radial displacement of the edge. Conclusion: The amount of image degradation due to our detector system is small compared with the effects of multiple Coulomb scattering, pixelation of the image and the reconstruction algorithm. Improvements in reconstruction will be made in order to achieve the theoretical limits of spatial resolution.
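
    The oversampling chain described above (edge spread function to line spread function to MTF, with the 10% point as the resolution metric) can be sketched as follows on a synthetic error-function edge; the edge blur and sampling step are assumed values, not scanner data.

```python
import numpy as np
from scipy.special import erf

# Sketch of the ESF -> LSF -> MTF chain on a synthetic edge rather than real
# pCT reconstructions. All parameters are illustrative assumptions.

dx = 0.05                          # cm per (oversampled) sample
x = np.arange(-2.0, 2.0, dx)       # distance from the edge
sigma = 0.1                        # cm, assumed edge blur
esf = 0.5 * (1.0 + erf(x / (sigma * np.sqrt(2.0))))  # edge spread function

lsf = np.gradient(esf, dx)         # line spread function
lsf /= lsf.sum() * dx              # normalize to unit area

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                      # MTF normalized to 1 at zero frequency
freqs = np.fft.rfftfreq(lsf.size, d=dx)   # cycles/cm, i.e. lp/cm

# spatial resolution metric: frequency where the MTF drops below 10%
f10 = freqs[np.argmax(mtf < 0.1)]
print(f"10% MTF frequency: {f10:.1f} lp/cm")
```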

  13. NASA Tech Briefs, December 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics include: A Deep Space Network Portable Radio Science Receiver; Detecting Phase Boundaries in Hard-Sphere Suspensions; Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery; Very-Long-Distance Remote Hearing and Vibrometry; Using GPS to Detect Imminent Tsunamis; Stream Flow Prediction by Remote Sensing and Genetic Programming; Pilotless Frame Synchronization Using LDPC Code Constraints; Radiometer on a Chip; Measuring Luminescence Lifetime With Help of a DSP; Modulation Based on Probability Density Functions; Ku Telemetry Modulator for Suborbital Vehicles; Photonic Links for High-Performance Arraying of Antennas; Reconfigurable, Bi-Directional Flexfet Level Shifter for Low-Power, Rad-Hard Integration; Hardware-Efficient Monitoring of I/O Signals; Video System for Viewing From a Remote or Windowless Cockpit; Spacesuit Data Display and Management System; IEEE 1394 Hub With Fault Containment; Compact, Miniature MMIC Receiver Modules for an MMIC Array Spectrograph; Waveguide Transition for Submillimeter-Wave MMICs; Magnetic-Field-Tunable Superconducting Rectifier; Bonded Invar Clip Removal Using Foil Heaters; Fabricating Radial Groove Gratings Using Projection Photolithography; Gratings Fabricated on Flat Surfaces and Reproduced on Non-Flat Substrates; Method for Measuring the Volume-Scattering Function of Water; Method of Heating a Foam-Based Catalyst Bed; Small Deflection Energy Analyzer for Energy and Angular Distributions; Polymeric Bladder for Storing Liquid Oxygen; Pyrotechnic Simulator/Stray-Voltage Detector; Inventions Utilizing Microfluidics and Colloidal Particles; RuO2 Thermometer for Ultra-Low Temperatures; Ultra-Compact, High-Resolution LADAR System for 3D Imaging; Dual-Channel Multi-Purpose Telescope; Objective Lens Optimized for Wavefront Delivery, Pupil Imaging, and Pupil Ghosting; CMOS Camera Array With Onboard Memory; Quickly Approximating the Distance Between Two Objects; Processing Images of Craters for Spacecraft Navigation; Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System; Rover Slip Validation and Prediction Algorithm; Safety and Quality Training Simulator; Supply-Chain Optimization Template; Algorithm for Computing Particle/Surface Interactions; Cryogenic Pupil Alignment Test Architecture for Aberrated Pupil Images; and Thermal Transport Model for Heat Sink Design.

  14. Designing of a self-adaptive digital filter using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Geng, Xuemei; Li, Hongguang; Xu, Chi

    2018-04-01

    This paper presents a novel methodology that applies a non-linear model to a closed-loop Sigma-Delta modulator based on a genetic algorithm, which simplifies the process of tuning parameters and further improves noise performance. The proposed approach makes it possible to quickly and efficiently design high-performance, high-order, closed-loop Sigma-Delta modulators that are robust to sensor fabrication tolerances. Simulation results for the proposed Sigma-Delta modulator show an SNR > 122 dB and a noise floor below -170 dB in the 5-150 Hz frequency range. Further simulations analyze the robustness of the proposed Sigma-Delta modulator.
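
    The following is a generic genetic-algorithm skeleton of the kind used for such parameter tuning, with tournament selection, blend crossover and Gaussian mutation; the cost function is a stand-in placeholder, not a Sigma-Delta modulator noise model.

```python
import random

# Generic GA skeleton for real-valued parameter tuning. The cost function is a
# placeholder objective to minimize, not a modulator noise model.

def cost(params):
    return sum((p - 0.5) ** 2 for p in params)

def evolve(n_params=5, pop_size=40, generations=100, mut_sigma=0.05):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # tournament selection of two parents
            a, b = (min(random.sample(pop, 3), key=cost) for _ in range(2))
            child = [random.uniform(x, y) for x, y in zip(a, b)]        # crossover
            child = [x + random.gauss(0.0, mut_sigma) for x in child]   # mutation
            new_pop.append(child)
        new_pop[0] = min(pop + new_pop, key=cost)   # elitism
        pop = new_pop
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))
```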

  15. Systems and methods for knowledge discovery in spatial data

    DOEpatents

    Obradovic, Zoran; Fiez, Timothy E.; Vucetic, Slobodan; Lazarevic, Aleksandar; Pokrajac, Dragoljub; Hoskinson, Reed L.

    2005-03-08

    Systems and methods are provided for knowledge discovery in spatial data as well as to systems and methods for optimizing recipes used in spatial environments such as may be found in precision agriculture. A spatial data analysis and modeling module is provided which allows users to interactively and flexibly analyze and mine spatial data. The spatial data analysis and modeling module applies spatial data mining algorithms through a number of steps. The data loading and generation module obtains or generates spatial data and allows for basic partitioning. The inspection module provides basic statistical analysis. The preprocessing module smoothes and cleans the data and allows for basic manipulation of the data. The partitioning module provides for more advanced data partitioning. The prediction module applies regression and classification algorithms on the spatial data. The integration module enhances prediction methods by combining and integrating models. The recommendation module provides the user with site-specific recommendations as to how to optimize a recipe for a spatial environment such as a fertilizer recipe for an agricultural field.

  16. Investigating ego modules and pathways in osteosarcoma by integrating the EgoNet algorithm and pathway analysis.

    PubMed

    Chen, X Y; Chen, Y H; Zhang, L J; Wang, Y; Tong, Z C

    2017-02-16

    Osteosarcoma (OS) is the most common primary bone malignancy, but current therapies are far from effective for all patients. A better understanding of the pathological mechanism of OS may help to achieve new treatments for this tumor. Hence, the objective of this study was to investigate ego modules and pathways in OS utilizing the EgoNet algorithm and pathway-related analysis, and reveal pathological mechanisms underlying OS. The EgoNet algorithm comprises four steps: constructing background protein-protein interaction (PPI) network (PPIN) based on gene expression data and PPI data; extracting differential expression network (DEN) from the background PPIN; identifying ego genes according to topological features of genes in reweighted DEN; and collecting ego modules using module search by ego gene expansion. Consequently, we obtained 5 ego modules (Modules 2, 3, 4, 5, and 6) in total. After applying the permutation test, all showed statistical significance between OS and normal controls. Finally, pathway enrichment analysis combined with the Reactome pathway database was performed to investigate pathways, and Fisher's exact test was conducted to capture ego pathways for OS. The ego pathway for Module 2 was the CLEC7A/inflammasome pathway, that for Module 3 was the pathway in which a tetrasaccharide linker sequence is required for glycosaminoglycan (GAG) synthesis, and that for Module 6 was the Rho GTPase cycle. Interestingly, genes in Modules 4 and 5 were enriched in the same pathway, the 2-LTR circle formation. In conclusion, the ego modules and pathways might be potential biomarkers for OS therapeutic index, and give great insight into the molecular mechanism underlying this tumor.

  17. Investigating ego modules and pathways in osteosarcoma by integrating the EgoNet algorithm and pathway analysis

    PubMed Central

    Chen, X.Y.; Chen, Y.H.; Zhang, L.J.; Wang, Y.; Tong, Z.C.

    2017-01-01

    Osteosarcoma (OS) is the most common primary bone malignancy, but current therapies are far from effective for all patients. A better understanding of the pathological mechanism of OS may help to achieve new treatments for this tumor. Hence, the objective of this study was to investigate ego modules and pathways in OS utilizing the EgoNet algorithm and pathway-related analysis, and reveal pathological mechanisms underlying OS. The EgoNet algorithm comprises four steps: constructing background protein-protein interaction (PPI) network (PPIN) based on gene expression data and PPI data; extracting differential expression network (DEN) from the background PPIN; identifying ego genes according to topological features of genes in reweighted DEN; and collecting ego modules using module search by ego gene expansion. Consequently, we obtained 5 ego modules (Modules 2, 3, 4, 5, and 6) in total. After applying the permutation test, all showed statistical significance between OS and normal controls. Finally, pathway enrichment analysis combined with the Reactome pathway database was performed to investigate pathways, and Fisher's exact test was conducted to capture ego pathways for OS. The ego pathway for Module 2 was the CLEC7A/inflammasome pathway, that for Module 3 was the pathway in which a tetrasaccharide linker sequence is required for glycosaminoglycan (GAG) synthesis, and that for Module 6 was the Rho GTPase cycle. Interestingly, genes in Modules 4 and 5 were enriched in the same pathway, the 2-LTR circle formation. In conclusion, the ego modules and pathways might be potential biomarkers for OS therapeutic index, and give great insight into the molecular mechanism underlying this tumor. PMID:28225867

  18. Beyond Gaussians: a study of single spot modeling for scanning proton dose calculation

    PubMed Central

    Li, Yupeng; Zhu, Ronald X.; Sahoo, Narayan; Anand, Aman; Zhang, Xiaodong

    2013-01-01

    Active spot scanning proton therapy is becoming increasingly adopted by proton therapy centers worldwide. Unlike passive-scattering proton therapy, active spot scanning proton therapy, especially intensity-modulated proton therapy, requires proper modeling of each scanning spot to ensure accurate computation of the total dose distribution contributed from a large number of spots. During commissioning of the spot scanning gantry at the Proton Therapy Center in Houston, it was observed that the long-range scattering protons in a medium may have been inadequately modeled for high-energy beams by a commercial treatment planning system, which could lead to incorrect prediction of field-size effects on dose output. In the present study, we developed a pencil-beam algorithm for scanning-proton dose calculation by focusing on properly modeling individual scanning spots. All modeling parameters required by the pencil-beam algorithm can be generated based solely on a few sets of measured data. We demonstrated that low-dose halos in single-spot profiles in the medium could be adequately modeled with the addition of a modified Cauchy-Lorentz distribution function to a double-Gaussian function. The field-size effects were accurately computed at all depths and field sizes for all energies, and good dose accuracy was also achieved for patient dose verification. The implementation of the proposed pencil beam algorithm also enabled us to study the importance of different modeling components and parameters at various beam energies. The results of this study may be helpful in improving dose calculation accuracy and simplifying beam commissioning and treatment planning processes for spot scanning proton therapy. PMID:22297324
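
    A hedged sketch of the single-spot lateral model discussed above, combining a double Gaussian with a heavy-tailed Cauchy-Lorentz-like halo term, is given below; all weights and widths are illustrative assumptions rather than commissioning data.

```python
import numpy as np

# Sketch of a single-spot lateral dose model: double Gaussian for the core
# plus a normalized heavy-tailed term for the long-range halo. All parameters
# are illustrative, not measured beam data.

def spot_profile(r, w1=0.85, sigma1=0.4, w2=0.12, sigma2=1.2, w3=0.03, gamma=2.5):
    """Radial dose profile D(r) in arbitrary units; r, sigma, gamma in cm."""
    g1 = np.exp(-0.5 * (r / sigma1) ** 2) / (2 * np.pi * sigma1 ** 2)
    g2 = np.exp(-0.5 * (r / sigma2) ** 2) / (2 * np.pi * sigma2 ** 2)
    halo = gamma / (2 * np.pi * (r ** 2 + gamma ** 2) ** 1.5)  # Cauchy-Lorentz-like
    return w1 * g1 + w2 * g2 + w3 * halo

r = np.linspace(0.0, 5.0, 200)
d = spot_profile(r)
print(d[0], d[-1])   # core dose vs. far-off-axis halo
```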

  19. Representing Reservoir Stratification in Land Surface and Earth System Models

    NASA Astrophysics Data System (ADS)

    Yigzaw, W.; Li, H. Y.; Leung, L. R.; Hejazi, M. I.; Voisin, N.; Payn, R. A.; Demissie, Y.

    2017-12-01

    A one-dimensional reservoir stratification model has been developed as part of the Model for Scale Adaptive River Transport (MOSART), which is the river transport model used in the Accelerated Climate Modeling for Energy (ACME) and Community Earth System Model (CESM). Reservoirs play an important role in modulating the dynamic water, energy and biogeochemical cycles in the riverine system through nutrient sequestration and stratification. However, most earth system models include lake models that assume a simplified geometry featuring a constant depth and a constant surface area. As reservoir geometry has important effects on thermal stratification, we developed a new algorithm for deriving generic, stratified area-elevation-storage relationships that are applicable at regional and global scales using data from the Global Reservoir and Dam database (GRanD). This new reservoir geometry dataset is then used to support the development of a reservoir stratification module within MOSART. The mixing of layers (energy and mass) in the reservoir is driven by eddy diffusion, vertical advection, and reservoir inflow and outflow. Upstream inflow into a reservoir is treated as an additional source/sink of energy, while downstream outflow represents a sink. Hourly atmospheric forcing from the North American Land Data Assimilation System (NLDAS) Phase II and simulated daily runoff from the ACME land component are used as inputs for the model over the contiguous United States for simulations spanning 2001-2010. The model is validated using selected observed temperature profile data in a number of reservoirs that are subject to various levels of regulation. The reservoir stratification module completes the representation of riverine mass and heat transfer in earth system models, which is a major step towards quantitative understanding of human influences on the terrestrial hydrological, ecological and biogeochemical cycles.

  20. An adaptive, object oriented strategy for base calling in DNA sequence analysis.

    PubMed Central

    Giddings, M C; Brumley, R L; Haker, M; Smith, L M

    1993-01-01

    An algorithm has been developed for the determination of nucleotide sequence from data produced in fluorescence-based automated DNA sequencing instruments employing the four-color strategy. This algorithm takes advantage of object oriented programming techniques for modularity and extensibility. The algorithm is adaptive in that data sets from a wide variety of instruments and sequencing conditions can be used with good results. Confidence values are provided on the base calls as an estimate of accuracy. The algorithm iteratively employs confidence determinations from several different modules, each of which examines a different feature of the data for accurate peak identification. Modules within this system can be added or removed for increased performance or for application to a different task. In comparisons with commercial software, the algorithm performed well. PMID:8233787

  1. Increased Energy Delivery for Parallel Battery Packs with No Regulated Bus

    NASA Astrophysics Data System (ADS)

    Hsu, Chung-Ti

    In this dissertation, a new approach to paralleling different battery types is presented. A method for controlling the charging/discharging of different battery packs by using low-cost bi-directional switches instead of DC-DC converters is proposed. The proposed system architecture, algorithms, and control techniques allow batteries with different chemistry, voltage, and state of charge (SOC) to be properly charged and discharged in parallel without causing safety problems. The physical design and cost of the energy management system are substantially reduced. Additionally, specific types of failures in maximum power point tracking (MPPT) in a photovoltaic (PV) system when tracking only the load current of a DC-DC converter are analyzed. A periodic nonlinear load current makes MPPT based on the conventional perturb and observe (P&O) algorithm problematic. A modified MPPT algorithm is proposed that still requires only the typically measured signals, yet is suitable for both linear and periodic nonlinear loads. Moreover, for a modular DC-DC converter using several converters in parallel, the input power from PV panels is processed and distributed at the module level. Methods for properly implementing distributed MPPT are studied. A new approach to efficient MPPT under partial shading conditions is presented. The power stage architecture achieves a fast input current change rate by combining a current-adjustable converter with a few converters operating at constant current.

  2. A generic implementation of replica exchange with solute tempering (REST2) algorithm in NAMD for complex biophysical simulations

    NASA Astrophysics Data System (ADS)

    Jo, Sunhwan; Jiang, Wei

    2015-12-01

    Replica Exchange with Solute Tempering (REST2) is a powerful sampling enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas yet achieves higher sampling efficiency relative to the standard temperature exchange algorithm. In this paper, we extend the applicability of REST2 to quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force field parameters controlling the REST2 "hot region" is implemented into NAMD at the source code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is exposed through the NAMD Tcl scripting interface, which enables on-the-fly simulation parameter changes. Our implementation of REST2 is contained within a communication-enabled Tcl script built on top of Charm++, so the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including a native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex and umbrella sampling/REST2 Hamiltonian exchange for free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.

  3. The routing, modulation level, and spectrum allocation algorithm in the virtual optical network mapping

    NASA Astrophysics Data System (ADS)

    Wang, Yunyun; Li, Hui; Liu, Yuze; Ji, Yuefeng; Li, Hongfa

    2017-10-01

    With the development of large-scale video services and cloud computing, the network is increasingly delivered in the form of services. In SDON, the SDN controller holds the underlying physical resource information and can therefore allocate the appropriate resources and bandwidth to each VON service. However, for services that require extremely strict QoT (quality of transmission), the shortest-distance path algorithm is often unable to meet the requirements because it does not take the link spectrum resources into account, and selecting the least-occupied links tends to create more spectrum fragments. We therefore propose a new RMLSA (routing, modulation level, and spectrum allocation) algorithm to reduce the blocking probability. The results show about 40% lower blocking probability than the shortest-distance algorithm and the minimum spectrum usage priority algorithm. This algorithm is intended to satisfy demands with strict QoT requirements.

  4. Comparison of cyclic correlation and the wavelet method for symbol rate detection

    NASA Astrophysics Data System (ADS)

    Carr, Richard; Whitney, James

    Software defined radio (SDR) is a relatively new technology that holds a great deal of promise in the communication field in general and, in particular, in the area of space communications. Traditional communication systems are comprised of a transmitter and a receiver, where through prior planning and scheduling the transmitter and receiver are pre-configured for a particular communication modality. For any particular modality the radio circuitry is configured to transmit, receive, and resolve one type of modulation at a certain data rate. Traditional radios are limited by the fact that the circuitry is fixed. Software defined radios, on the other hand, do not suffer from this limitation. SDRs are comprised mainly of software modules which allow them to be flexible, in that they can resolve various modulation types occurring at different data rates. This ability is of very high importance in space, where parameters of the communications link may need to be changed due to channel fading, reduced power, or other unforeseen events. In these cases the ability to autonomously change aspects of the radio's configuration becomes an absolute necessity in order to maintain communications. In order for the technology to work, the receiver has to be able to determine the modulation type and the data rate of the signal. The data rate of the signal is one of the first parameters to be resolved, as it is needed to find the other signal parameters such as the modulation type and the signal-to-noise ratio. There are a number of algorithms that have been developed to detect or estimate the data rate of a signal. This paper investigates two of these algorithms, namely the cyclic correlation algorithm and a wavelet-based detection algorithm. Both are feature-based algorithms, meaning that they make their estimations based on certain inherent features of the signals to which they are applied. The cyclic correlation algorithm takes advantage of the cyclostationary nature of MPSK signals, while the wavelet-based algorithm takes advantage of being able to detect transient changes in the signal, i.e., transitions from '1' to '0'. Both algorithms are tested under various signal-to-noise conditions to see which has the better performance, and the results are presented in this paper.
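
    A simplified illustration of the cyclostationarity idea behind the cyclic correlation approach is sketched below: a pulse-shaped BPSK signal produces a spectral line at the symbol rate in the spectrum of its squared envelope. The waveform parameters are arbitrary assumptions, and this is not the full algorithm evaluated in the paper.

```python
import numpy as np

# Cyclostationarity-based symbol-rate detection on a synthetic BPSK signal:
# with half-sine pulse shaping, the squared waveform is periodic at the symbol
# rate, so its spectrum has a line there even in noise.

rng = np.random.default_rng(0)
fs = 10_000.0            # sample rate, Hz (assumed)
rsym = 500.0             # true symbol rate, Hz (assumed)
sps = int(fs / rsym)     # samples per symbol
n_sym = 400

symbols = rng.choice([-1.0, 1.0], size=n_sym)
pulse = np.sin(np.pi * np.arange(sps) / sps)      # half-sine pulse shaping
baseband = np.kron(symbols, pulse)
noisy = baseband + 0.3 * rng.standard_normal(baseband.size)

env2 = noisy ** 2
env2 -= env2.mean()                                # remove the DC line
spectrum = np.abs(np.fft.rfft(env2))
freqs = np.fft.rfftfreq(env2.size, d=1.0 / fs)

mask = freqs > 50.0                                # skip residual low-frequency content
est = freqs[mask][np.argmax(spectrum[mask])]
print(f"estimated symbol rate: {est:.1f} Hz (true {rsym} Hz)")
```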

  5. Maximum power point tracking for photovoltaic applications by using two-level DC/DC boost converter

    NASA Astrophysics Data System (ADS)

    Moamaei, Parvin

    Recently, photovoltaic (PV) generation has become increasingly popular in industrial applications. As a renewable and alternative source of energy, it features superior characteristics such as clean and silent operation along with fewer maintenance problems compared with other energy sources. In PV generation, employing a Maximum Power Point Tracking (MPPT) method is essential to obtain the maximum available solar energy. Among several proposed MPPT techniques, the Perturbation and Observation (P&O) and Model Predictive Control (MPC) methods are adopted in this work. The components of the MPPT control system, namely the P&O and MPC algorithms, the PV module and the high-gain DC-DC boost converter, are simulated in MATLAB Simulink. They are evaluated theoretically under rapidly and slowly changing solar irradiation and temperature, and their performance is shown by the simulation results; finally, a comprehensive comparison is presented.
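
    A textbook perturb-and-observe step, in its simplest hill-climbing form, can be sketched as follows; the toy P-V curve is an illustrative assumption, not a real module characteristic or the implementation compared in this work.

```python
# Simplified hill-climbing form of perturb-and-observe (P&O) MPPT on a toy PV
# model. The curve below is a crude illustration with a maximum near 16 V.

def pv_power(v):
    """Toy PV P-V curve (illustrative only)."""
    i_sc, v_oc = 8.0, 21.0
    if v <= 0 or v >= v_oc:
        return 0.0
    current = i_sc * (1.0 - (v / v_oc) ** 8)   # crude single-diode-like shape
    return v * current

def perturb_and_observe(v, p_prev, step, direction):
    """One P&O iteration: keep the direction if power rose, reverse otherwise."""
    p = pv_power(v)
    if p < p_prev:
        direction = -direction
    return v + direction * step, p, direction

v, p_prev, direction = 10.0, 0.0, +1
for _ in range(100):
    v, p_prev, direction = perturb_and_observe(v, p_prev, 0.2, direction)
print(f"operating point ~{v:.1f} V, {p_prev:.1f} W")   # oscillates around the MPP
```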

  6. Interactive smart battery storage for a PV and wind hybrid energy management control based on conservative power theory

    NASA Astrophysics Data System (ADS)

    Godoy Simões, Marcelo; Davi Curi Busarello, Tiago; Saad Bubshait, Abdullah; Harirchi, Farnaz; Antenor Pomilio, José; Blaabjerg, Frede

    2016-04-01

    This paper presents interactive smart battery-based storage (BBS) for wind generator (WG) and photovoltaic (PV) systems. The BBS is composed of an asymmetric cascaded H-bridge multilevel inverter (ACMI) with staircase modulation. The structure is parallel to the WG and PV systems, allowing the ACMI to have a reduction in power losses compared to the usual solution for storage connected at the DC-link of the converter for WG or PV systems. Moreover, the BBS is embedded with a decision algorithm running real-time energy costs, plus a battery state-of-charge manager and power quality capabilities, making the described system in this paper very interactive, smart and multifunctional. The paper describes how BBS interacts with the WG and PV and how its performance is improved. Experimental results are presented showing the efficacy of this BBS for renewable energy applications.
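
    The staircase modulation mentioned above can be illustrated with nearest-level control, where each H-bridge cell switches in when the sine reference crosses the midpoint of its level; the number of cells below is an arbitrary assumption.

```python
import numpy as np

# Nearest-level (staircase) modulation sketch for a cascaded H-bridge
# multilevel inverter: cell i switches in at the angle where the reference
# sine first reaches the midpoint of its level.

def staircase_angles(n_cells):
    """Switching angles (radians, first quarter cycle) for nearest-level control."""
    i = np.arange(1, n_cells + 1)
    return np.arcsin((i - 0.5) / n_cells)

def staircase_wave(theta, angles):
    """Number of cells conducting at electrical angle theta (first quarter cycle)."""
    return np.sum(theta >= angles[:, None], axis=0)

angles = staircase_angles(4)                 # 4 cells assumed for illustration
theta = np.linspace(0.0, np.pi / 2, 10)
print(np.degrees(angles))
print(staircase_wave(theta, angles))         # staircase approximation of 4*sin(theta)
```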

  7. Automatic page layout using genetic algorithms for electronic albuming

    NASA Astrophysics Data System (ADS)

    Geigel, Joe; Loui, Alexander C. P.

    2000-12-01

    In this paper, we describe a flexible system for automatic page layout that makes use of genetic algorithms for albuming applications. The system is divided into two modules, a page creator module which is responsible for distributing images amongst various album pages, and an image placement module which positions images on individual pages. Final page layouts are specified in a textual form using XML for printing or viewing over the Internet. The system makes use of genetic algorithms, a class of search and optimization algorithms that are based on the concepts of biological evolution, for generating solutions with fitness based on graphic design preferences supplied by the user. The genetic page layout algorithm has been incorporated into a web-based prototype system for interactive page layout over the Internet. The prototype system is built using client-server architecture and is implemented in java. The system described in this paper has demonstrated the feasibility of using genetic algorithms for automated page layout in albuming and web-based imaging applications. We believe that the system adequately proves the validity of the concept, providing creative layouts in a reasonable number of iterations. By optimizing the layout parameters of the fitness function, we hope to further improve the quality of the final layout in terms of user preference and computation speed.

  8. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space.

    PubMed

    Kalathil, Shaeen; Elias, Elizabeth

    2015-11-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFB has an easy and efficient design approach: non-uniform decomposition can be obtained simply by merging the appropriate filters of a uniform filter bank, so only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least squares approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performance, which is then improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms modified and used in this paper are the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm; they result in filter banks with lower implementation complexity, power consumption and area requirements when compared with those of the conventional continuous-coefficient non-uniform CMFB.
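
    The CSD representation that these searches operate on can be illustrated with the standard conversion of an integer-quantized coefficient to canonic signed digit form (digits in {-1, 0, +1}, no two adjacent nonzeros); this sketch shows the encoding only, not the meta-heuristic optimization.

```python
# Standard CSD (canonic signed digit) conversion of an integer, e.g. a filter
# coefficient quantized to a fixed number of fractional bits.

def to_csd(n):
    """Return CSD digits of integer n, least-significant digit first."""
    digits = []
    while n != 0:
        if n % 2:                     # odd: emit +1 or -1 so the next bit is 0
            d = 2 - (n % 4)           # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
        else:
            d = 0
        digits.append(d)
        n = (n - d) // 2
    return digits

def from_csd(digits):
    return sum(d * (1 << k) for k, d in enumerate(digits))

coeff = 0.359375                       # example coefficient (assumed value)
n = round(coeff * 2 ** 8)              # quantize to 8 fractional bits -> 92
csd = to_csd(n)
print(csd, from_csd(csd) / 2 ** 8)     # 92 = 128 - 32 - 4, i.e. only 3 nonzero digits
```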

  9. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space

    PubMed Central

    Kalathil, Shaeen; Elias, Elizabeth

    2014-01-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFB has an easy and efficient design approach: non-uniform decomposition can be obtained simply by merging the appropriate filters of a uniform filter bank, so only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least squares approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performance, which is then improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms modified and used in this paper are the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm; they result in filter banks with lower implementation complexity, power consumption and area requirements when compared with those of the conventional continuous-coefficient non-uniform CMFB. PMID:26644921

  10. Design of intelligent vehicle control system based on single chip microcomputer

    NASA Astrophysics Data System (ADS)

    Zhang, Congwei

    2018-06-01

    The smart car microprocessor uses the KL25ZV128VLK4 in the Freescale series of single-chip microcomputers. The image sampling sensor uses the CMOS digital camera OV7725. The obtained track data is processed by the corresponding algorithm to obtain track sideline information. At the same time, the pulse width modulation control (PWM) is used to control the motor and servo movements, and based on the digital incremental PID algorithm, the motor speed control and servo steering control are realized. In the project design, IAR Embedded Workbench IDE is used as the software development platform to program and debug the micro-control module, camera image processing module, hardware power distribution module, motor drive and servo control module, and then complete the design of the intelligent car control system.
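
    The digital incremental (velocity-form) PID mentioned above can be sketched as follows; the gains and the crude first-order motor response are illustrative assumptions, not the parameters of the actual vehicle.

```python
# Incremental (velocity-form) PID: the controller outputs a change in the
# command, delta_u = Kp*(e[k]-e[k-1]) + Ki*e[k] + Kd*(e[k]-2e[k-1]+e[k-2]).
# Gains and the toy plant below are illustrative assumptions.

class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0   # e[k-1]
        self.e2 = 0.0   # e[k-2]

    def step(self, error):
        du = (self.kp * (error - self.e1)
              + self.ki * error
              + self.kd * (error - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, error
        return du

pid = IncrementalPID(kp=0.6, ki=0.2, kd=0.05)
speed, pwm, target = 0.0, 0.0, 100.0
for _ in range(50):
    pwm += pid.step(target - speed)
    speed += 0.1 * (pwm - speed)       # crude first-order motor response
print(round(speed, 1))                 # approaches the 100-unit target
```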

  11. Deep frequency modulation interferometry.

    PubMed

    Gerberding, Oliver

    2015-06-01

    Laser interferometry with pm/Hz precision and multi-fringe dynamic range at low frequencies is a core technology to measure the motion of various objects (test masses) in space and ground based experiments for gravitational wave detection and geodesy. Even though available interferometer schemes are well understood, their construction remains complex, often involving, for example, the need to build quasi-monolithic optical benches with dozens of components. In recent years techniques have been investigated that aim to reduce this complexity by combining phase modulation techniques with sophisticated digital readout algorithms. This article presents a new scheme that uses strong laser frequency modulations in combination with the deep phase modulation readout algorithm to construct simpler and easily scalable interferometers.

  12. Algorithms for a very high speed universal noiseless coding module

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Yeh, Pen-Shu

    1991-01-01

    The algorithmic definitions and performance characterizations are presented for a high performance adaptive coding module. Operation of at least one of these (single-chip) implementations is expected to exceed 500 Mbits/s under laboratory conditions, and a companion decoding module should operate at up to half the coder's rate. The module incorporates a powerful noiseless coder for Standard Form Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where the smaller integers are more likely than the larger ones). Performance close to the data entropy can be expected over a dynamic range of from 1.5 to 12-14 bits/sample (depending on the implementation).
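
    A minimal sketch of the Rice/Golomb-style mapping underlying such a noiseless coder is shown below, with the split parameter chosen per block by brute force; the adaptive option selection in the actual module is more elaborate.

```python
# Minimal Rice coding sketch for non-negative integers: unary quotient
# followed by k remainder bits. The per-block choice of k here is a simple
# brute-force search, not the module's adaptive selection logic.

def rice_encode(value, k):
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def rice_decode(bits, k):
    q = bits.index("0")
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

def best_k(block, k_max=14):
    return min(range(k_max + 1),
               key=lambda k: sum(len(rice_encode(v, k)) for v in block))

block = [3, 0, 7, 2, 12, 1, 5, 4]
k = best_k(block)
encoded = [rice_encode(v, k) for v in block]
assert [rice_decode(c, k) for c in encoded] == block
print(k, encoded)
```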

  13. [System design of small intellectualized ultrasound hyperthermia instrument in the LabVIEW environment].

    PubMed

    Jiang, Feng; Bai, Jingfeng; Chen, Yazhu

    2005-08-01

    Small-scale intellectualized medical instruments have attracted great attention in the field of biomedical engineering, and LabVIEW (Laboratory Virtual Instrument Engineering Workbench) provides a convenient environment for this application due to its inherent advantages. The principle and system structure of the hyperthermia instrument are presented. Type T thermocouples are employed as thermotransducers; their amplifier consists of two stages, providing built-in ice point compensation and thus improving stability over temperature. Control signals produced by a specially designed circuit drive the programmable counter/timer 8254 chip to generate a PWM (pulse width modulation) wave, which is used as the ultrasound radiation energy control signal. Subroutine design topics such as the inner-tissue real-time feedback temperature control algorithm and water temperature control in the ultrasound applicator are also described. In the cancer tissue temperature control subroutine, the authors make improvements to the PID (Proportional Integral Differential) algorithm according to the specific demands of the system and achieve strict temperature control of the target tissue region. The system design and PID algorithm improvement have been experimentally proved to be reliable, meeting the requirements of the hyperthermia system.

  14. A portable inspection system to estimate direct glare of various LED modules

    NASA Astrophysics Data System (ADS)

    Chen, Po-Li; Liao, Chun-Hsiang; Li, Hung-Chung; Jou, Shyh-Jye; Chen, Han-Ting; Lin, Yu-Hsin; Tang, Yu-Hsiang; Peng, Wei-Jei; Kuo, Hui-Jean; Sun, Pei-Li; Lee, Tsung-Xian

    2015-07-01

    Glare is caused by both direct and indirect light sources, and discomfort glare produces visual discomfort, annoyance, or loss in visual performance and visibility. Direct glare is caused by light sources in the field of view, whereas reflected glare is caused by bright reflections from polished or glossy surfaces that are directed toward the observer. To improve the visual comfort of our living environment, a portable inspection system to estimate the direct glare of various commercial LED modules with color temperatures ranging from 3100 K to 5300 K was developed in this study. The system utilizes HDR images to obtain the illumination distribution of the LED modules and was first calibrated for brightness and chromaticity and corrected for flat field, dark corners and curvature by the installed algorithm. The direct glare index is then estimated automatically after image capture, so the operator can judge the performance of an LED module and its possible effects on people once the index falls outside the expected range. In the future, we expect this quick-response smart inspection system to be applied in several new fields and markets, such as home energy diagnostics, environmental lighting and UGR monitoring.

  15. Interconnect fatigue design for terrestrial photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1982-01-01

    The results of a comprehensive investigation of interconnect fatigue, which has led to the definition of useful reliability-design and life-prediction algorithms, are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To address this shortcoming, the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter) which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous--i.e., quantitative--interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.
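
    The statistical strain-cycle idea can be sketched by pairing a Coffin-Manson mean fatigue curve with lognormal scatter so that a cumulative failure probability is attached to a given number of thermal cycles; all material constants below are invented placeholders, not measured interconnect data.

```python
import math

# Hedged sketch: Coffin-Manson median fatigue curve plus lognormal scatter.
# All constants are illustrative assumptions, not measured interconnect data.

EPS_F = 0.3       # hypothetical fatigue ductility coefficient
C_EXP = -0.6      # hypothetical fatigue ductility exponent
SIGMA_LN = 0.8    # hypothetical lognormal scatter in cycles-to-failure

def median_cycles_to_failure(strain_range):
    """Coffin-Manson: strain_range/2 = EPS_F * (2*N)^C_EXP, solved for N."""
    return 0.5 * (strain_range / (2.0 * EPS_F)) ** (1.0 / C_EXP)

def failure_probability(n_cycles, strain_range):
    """Lognormal CDF of cumulative failures after n_cycles at a given strain range."""
    n50 = median_cycles_to_failure(strain_range)
    z = (math.log(n_cycles) - math.log(n50)) / SIGMA_LN
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# e.g. 20 years of daily thermal cycles at an assumed 0.5% strain range
print(failure_probability(n_cycles=20 * 365, strain_range=0.005))
```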

  16. Interconnect fatigue design for terrestrial photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1982-03-01

    The results of comprehensive investigation of interconnect fatigue that has led to the definition of useful reliability-design and life-prediction algorithms are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To fill this shortcoming the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter) which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous--ie., quantitative--interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.

  17. Health-aware Model Predictive Control of Pasteurization Plant

    NASA Astrophysics Data System (ADS)

    Karimi Pour, Fatemeh; Puig, Vicenç; Ocampo-Martinez, Carlos

    2017-01-01

    In order to optimize the trade-off between component life and energy consumption, the integration of system health management and control modules is required. This paper proposes the integration of model predictive control (MPC) with a fatigue estimation approach that minimizes the damage to the components of a pasteurization plant. The fatigue estimation is assessed with the rainflow counting algorithm. Using data from this algorithm, a simplified model that characterizes the health of the system is developed and integrated with MPC. The MPC controller objective is modified by adding an extra criterion that takes the accumulated damage into account; however, this extra criterion creates a steady-state offset. Finally, by including an integral action in the MPC controller, the steady-state error for regulation purposes is eliminated. The proposed control scheme is validated in simulation using a simulator of a utility-scale pasteurization plant.

  18. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes high energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.

  19. Generation of oscillating gene regulatory network motifs

    NASA Astrophysics Data System (ADS)

    van Dorp, M.; Lannoo, B.; Carlon, E.

    2013-07-01

    Using an improved version of an evolutionary algorithm originally proposed by François and Hakim [Proc. Natl. Acad. Sci. USA 101, 580 (2004); doi:10.1073/pnas.0304532101], we generated small gene regulatory networks in which the concentration of a target protein oscillates in time. These networks may serve as candidates for oscillatory modules to be found in larger regulatory networks and protein interaction networks. The algorithm was run 10^5 times to produce a large set of oscillating modules, which were systematically classified and analyzed. The robustness of the oscillations against variations of the kinetic rates was also determined, to filter out the least robust cases. Furthermore, we show that the set of evolved networks can serve as a database of models whose behavior can be compared to experimentally observed oscillations. The algorithm found three smallest (core) oscillators in which nonlinearities and number of components are minimal. Two of those are two-gene modules: the mixed feedback loop, already discussed in the literature, and an autorepressed gene coupled with a heterodimer. The third one is a single gene module which is competitively regulated by a monomer and a dimer. The evolutionary algorithm also generated larger oscillating networks, which are in part extensions of the three core modules and in part genuinely new modules. The latter includes oscillators which do not rely on feedback induced by transcription factors, but are purely of post-transcriptional type. Analysis of post-transcriptional mechanisms of oscillation may provide useful information for circadian clock research, as recent experiments showed that circadian rhythms are maintained even in the absence of transcription.

  20. Prior knowledge guided active modules identification: an integrated multi-objective approach.

    PubMed

    Chen, Weiqi; Liu, Jing; He, Shan

    2017-03-14

    Active module, defined as an area in a biological network that shows striking changes in molecular activity or phenotypic signatures, is important to reveal dynamic and process-specific information that is correlated with cellular or disease states. A prior-information-guided active module identification approach is proposed to detect modules that are both active and enriched by prior knowledge. We formulate the active module identification problem as a multi-objective optimisation problem, which consists of two conflicting objective functions: maximising the coverage of known biological pathways and maximising the activity of the active module simultaneously. The network is constructed from a protein-protein interaction database. A beta-uniform-mixture model is used to estimate the distribution of p-values and generate scores for activity measurement from microarray data. A multi-objective evolutionary algorithm is used to search for Pareto optimal solutions. We also incorporate a novel constraint based on algebraic connectivity to ensure the connectedness of the identified active modules. Application of the proposed algorithm to a small yeast molecular network shows that it can identify modules with high activities and with more cross-talk nodes between related functional groups. The Pareto solutions generated by the algorithm provide different trade-offs between prior knowledge and novel information from the data. The approach is then applied to microarray data from diclofenac-treated yeast cells to build the network and identify modules that elucidate the molecular mechanisms of diclofenac toxicity and resistance. Gene ontology analysis is applied to the identified modules for biological interpretation. Integrating knowledge of functional groups into the identification of active modules is an effective method and provides flexible control of the balance between a purely data-driven method and prior information guidance.
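
    The algebraic-connectivity constraint mentioned above can be illustrated directly: a candidate module is connected exactly when the second-smallest eigenvalue of its graph Laplacian (the Fiedler value) is positive. The toy graph below is an assumption for illustration.

```python
import numpy as np
import networkx as nx

# Connectedness check via algebraic connectivity: the Fiedler value (second
# smallest Laplacian eigenvalue) of a candidate module is positive iff the
# induced subgraph is connected.

def algebraic_connectivity(graph, nodes):
    lap = nx.laplacian_matrix(graph.subgraph(nodes)).toarray().astype(float)
    return np.sort(np.linalg.eigvalsh(lap))[1]

g = nx.path_graph(6)                                    # toy "interaction network"
print(algebraic_connectivity(g, [0, 1, 2, 3]) > 1e-9)   # True: connected module
print(algebraic_connectivity(g, [0, 1, 4, 5]) > 1e-9)   # False: two components
```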

  1. TU-H-207A-09: An Automated Technique for Estimating Patient-Specific Regional Imparted Energy and Dose From TCM CT Exams Across 13 Protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, J; Tian, X; Segars, P

    2016-06-15

    Purpose: To develop an automated technique for estimating patient-specific regional imparted energy and dose from tube current modulated (TCM) computed tomography (CT) exams across a diverse set of head and body protocols. Methods: A library of 58 adult computational anthropomorphic extended cardiac-torso (XCAT) phantoms was used to model a patient population. A validated Monte Carlo program was used to simulate TCM CT exams on the entire library of phantoms for three head and 10 body protocols. The net imparted energy to the phantoms, normalized by dose length product (DLP), and the net tissue mass in each of the scan regions were computed. A knowledgebase containing relationships between normalized imparted energy and scanned mass was established. An automated computer algorithm was written to estimate the scanned mass from actual clinical CT exams. The scanned mass estimate, DLP of the exam, and knowledgebase were used to estimate the imparted energy to the patient. The algorithm was tested on 20 chest and 20 abdominopelvic TCM CT exams. Results: The normalized imparted energy increased with increasing kV for all protocols. However, the normalized imparted energy was relatively unaffected by the strength of the TCM. The average imparted energy was 681 ± 376 mJ for abdominopelvic exams and 274 ± 141 mJ for chest exams. Overall, the method was successful in providing patient-specific estimates of imparted energy for 98% of the cases tested. Conclusion: Imparted energy normalized by DLP increased with increasing tube potential. However, the strength of the TCM did not have a significant effect on the net amount of energy deposited to tissue. The automated program can be implemented into the clinical workflow to provide estimates of regional imparted energy and dose across a diverse set of clinical protocols.
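
    A hedged sketch of the final estimation step is shown below: the imparted energy is taken as the exam DLP times a normalized imparted-energy factor interpolated from a knowledgebase indexed by scanned mass. The table values are invented placeholders, not the study's Monte Carlo results.

```python
import numpy as np

# Hedged sketch of the lookup-and-scale step: interpolate a normalized
# imparted-energy factor from a (hypothetical) knowledgebase indexed by
# scanned mass, then multiply by the exam DLP.

scanned_mass_kg = np.array([5.0, 10.0, 20.0, 30.0, 40.0])   # hypothetical grid
energy_per_dlp = np.array([0.9, 1.2, 1.6, 1.9, 2.1])        # mJ per mGy*cm, hypothetical

def imparted_energy(dlp_mgy_cm, mass_kg):
    factor = np.interp(mass_kg, scanned_mass_kg, energy_per_dlp)
    return factor * dlp_mgy_cm

print(imparted_energy(dlp_mgy_cm=400.0, mass_kg=25.0))      # mJ, illustrative only
```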

  2. Underwater Acoustic Wireless Sensor Networks: Advances and Future Trends in Physical, MAC and Routing Layers

    PubMed Central

    Climent, Salvador; Sanchez, Antonio; Capella, Juan Vicente; Meratnia, Nirvana; Serrano, Juan Jose

    2014-01-01

    This survey aims to provide a comprehensive overview of the current research on underwater wireless sensor networks, focusing on the lower layers of the communication stack, and envisions future trends and challenges. It analyzes the current state-of-the-art on the physical, medium access control and routing layers. It summarizes their security threats and surveys the currently proposed studies. Current envisioned niches for further advances in underwater networks research range from efficient, low-power algorithms and modulations to intelligent, energy-aware routing and medium access control protocols. PMID:24399155

  3. Computing Interactions Of Free-Space Radiation With Matter

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.; Townsend, L. W.; Badavi, F. F.; Tripathi, R. K.; Silberberg, R.; Tsao, C. H.; Badwar, G. D.

    1995-01-01

    High Charge and Energy Transport (HZETRN) computer program is a computationally efficient, user-friendly package of software addressing the problem of transport of, and shielding against, radiation in free space. Designed as "black box" for design engineers not concerned with physics of underlying atomic and nuclear radiation processes in free-space environment, but rather primarily interested in obtaining fast and accurate dosimetric information for design and construction of modules and devices for use in free space. Computational efficiency achieved by unique algorithm based on deterministic approach to solution of Boltzmann equation rather than computationally intensive statistical Monte Carlo method. Written in FORTRAN.

  4. Reconfigurable Robust Routing for Mobile Outreach Network

    NASA Technical Reports Server (NTRS)

    Lin, Ching-Fang

    2010-01-01

    The Reconfigurable Robust Routing for Mobile Outreach Network (R3MOON) provides advanced communications networking technologies suitable for the lunar surface environment and applications. The R3MOON technology is based on a detailed concept of operations tailored for lunar surface networks, and includes intelligent routing algorithms and wireless mesh network implementation on AGNC's Coremicro Robots. The product's features include an integrated communication solution incorporating energy efficiency and disruption-tolerance in a mobile ad hoc network, and a real-time control module to provide researchers and engineers a convenient tool for reconfiguration, investigation, and management.

  5. A Novel Modulation Classification Approach Using Gabor Filter Network

    PubMed Central

    Ghauri, Sajjad Ahmed; Qureshi, Ijaz Mansoor; Cheema, Tanveer Ahmed; Malik, Aqdas Naveed

    2014-01-01

    A Gabor filter network based approach is used for feature extraction and classification of digitally modulated signals by adaptively tuning the parameters of the Gabor filter network. Modulation classification of digitally modulated signals is performed under the influence of additive white Gaussian noise (AWGN). The modulations considered for classification are PSK 2 to 64, FSK 2 to 64, and QAM 4 to 64. The Gabor filter network uses a two-layer structure; the first (input) layer constitutes the adaptive feature extraction part and the second layer constitutes the signal classification part. The Gabor atom parameters are tuned using the delta rule, and the weights of the Gabor filter are updated using the least mean square (LMS) algorithm. The simulation results show that the proposed modulation classification algorithm has high classification accuracy at low signal-to-noise ratio (SNR) on an AWGN channel. PMID:25126603

  6. Blind equalization and automatic modulation classification based on subspace for subcarrier MPSK optical communications

    NASA Astrophysics Data System (ADS)

    Chen, Dan; Guo, Lin-yuan; Wang, Chen-hao; Ke, Xi-zheng

    2017-07-01

    Equalization can compensate for channel distortion caused by multipath effects and effectively improve the convergence of the modulation constellation diagram in an optical wireless system. In this paper, the subspace blind equalization algorithm is used to preprocess the M-ary phase shift keying (MPSK) subcarrier modulation signal in the receiver. Mountain clustering is adopted to obtain the clustering centers of the MPSK modulation constellation diagram, and the modulation order is automatically identified through the k-nearest neighbor (KNN) classifier. The experiment was carried out under four different weather conditions. Experimental results show that the convergence of the constellation diagram is improved effectively after using the subspace blind equalization algorithm, which increases the accuracy of modulation recognition. The correct recognition rate for 16PSK reaches up to 85% under every weather condition mentioned in the paper, and the recognition rate is highest in cloudy conditions and lowest in heavy rain.

  7. Controlling laser driven protons acceleration using a deformable mirror at a high repetition rate

    NASA Astrophysics Data System (ADS)

    Noaman-ul-Haq, M.; Sokollik, T.; Ahmed, H.; Braenzel, J.; Ehrentraut, L.; Mirzaie, M.; Yu, L.-L.; Sheng, Z. M.; Chen, L. M.; Schnürer, M.; Zhang, J.

    2018-03-01

    We present results from a proof-of-principle experiment to optimize laser-driven proton acceleration by directly feeding the proton spectral information back to a deformable mirror (DM) controlled by evolutionary algorithms (EAs). By irradiating a stable high-repetition-rate tape-driven target with ultra-intense pulses of intensities ∼10^20 W/cm2, we optimize the maximum energy of the accelerated protons with a stability of less than ∼5% fluctuation around the optimum value. Moreover, due to the spatio-temporal development of the sheath field, modulations in the spectrum are also observed. In particular, a prominent narrow peak with a spread of ∼15% (FWHM) is observed in the low-energy part of the spectrum. These results are helpful for developing the high-repetition-rate optimization techniques required for laser-driven ion accelerators.

  8. MPPT Algorithm Development for Laser Powered Surveillance Camera Power Supply Unit

    NASA Astrophysics Data System (ADS)

    Zhang, Yungui; Dushantha Chaminda, P. R.; Zhao, Kun; Cheng, Lin; Jiang, Yi; Peng, Kai

    2018-03-01

    Photovoltaic (PV) cells and modules, which are made of semiconducting materials, convert light energy into electricity. Operation of a PV cell requires three basic steps: absorption of light, generating electron-hole pairs or excitons; separation of charge carriers of opposite types; and extraction of those carriers to an external circuit, irrespective of whether the source is sunlight or laser light. PV cells exhibit the photovoltaic effect and are characterized by electrical quantities such as current, voltage and resistance, which vary when the cell is exposed to light, so the power output depends on the incident laser light. This paper addresses the direct conversion of laser light to electricity with PV cells and the related concepts of band gap energy, series resistance, conversion efficiency and Maximum Power Point Tracking (MPPT) methods [1].

  9. Monte Carlo evaluation of Acuros XB dose calculation Algorithm for intensity modulated radiation therapy of nasopharyngeal carcinoma

    NASA Astrophysics Data System (ADS)

    Yeh, Peter C. Y.; Lee, C. C.; Chao, T. C.; Tung, C. J.

    2017-11-01

    Intensity-modulated radiation therapy is an effective treatment modality for nasopharyngeal carcinoma. One important aspect of this cancer treatment is the need for an accurate dose algorithm that handles the complex air/bone/tissue interfaces in the head-neck region to achieve cure without radiation-induced toxicities. The Acuros XB algorithm explicitly solves the linear Boltzmann transport equation in voxelized volumes to account for tissue heterogeneities such as lungs, bone, air, and soft tissues in the treatment field receiving radiotherapy. With single-beam setups in phantoms, this algorithm has already been demonstrated to achieve accuracy comparable with Monte Carlo simulations. In the present study, five nasopharyngeal carcinoma patients treated with intensity-modulated radiation therapy were examined for their dose distributions calculated using the Acuros XB in the planning target volume and the organs-at-risk. Corresponding results of Monte Carlo simulations were computed from the electronic portal image data and the BEAMnrc/DOSXYZnrc code. Analysis of the dose distributions in terms of clinical indices indicated that the Acuros XB achieved accuracy comparable with Monte Carlo simulations and better than the anisotropic analytical algorithm for dose calculations in real patients.

  10. Routing and Scheduling Algorithms for WirelessHART Networks: A Survey

    PubMed Central

    Nobre, Marcelo; Silva, Ivanovitch; Guedes, Luiz Affonso

    2015-01-01

    Wireless communication is a trend nowadays in the industrial environment. A number of different technologies have emerged as solutions satisfying strict industrial requirements (e.g., WirelessHART, ISA100.11a, WIA-PA). As the industrial environment presents a vast range of applications, adopting an adequate solution for each case is vital to obtain good system performance. In this context, the routing and scheduling schemes associated with these technologies have a direct impact on important features, like latency and energy consumption. This situation has led to the development of a vast number of routing and scheduling schemes. In the present paper, we focus on the WirelessHART technology, emphasizing its most important routing and scheduling aspects in order to guide both end users and the developers of new algorithms. Furthermore, we provide a detailed literature review of the newest routing and scheduling techniques for WirelessHART, discussing each of their features. The routing algorithms are evaluated in terms of their objectives, metrics, usage of the WirelessHART structures, and validation method. The scheduling algorithms are likewise evaluated by metrics, validation, and objectives, as well as by multiple-superframe support and the redundancy method used. Moreover, this paper briefly presents some insights into the main WirelessHART simulation modules available, in order to provide viable test platforms for the routing and scheduling algorithms. Finally, some open issues in WirelessHART routing and scheduling algorithms are discussed. PMID:25919371

  11. 10 CFR 431.222 - Definitions concerning traffic signal modules and pedestrian modules.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... (or hydraulic) characteristics that affect energy consumption, energy efficiency, water consumption... 10 Energy 3 2013-01-01 2013-01-01 false Definitions concerning traffic signal modules and pedestrian modules. 431.222 Section 431.222 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY...

  12. 10 CFR 431.222 - Definitions concerning traffic signal modules and pedestrian modules.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... (or hydraulic) characteristics that affect energy consumption, energy efficiency, water consumption... 10 Energy 3 2014-01-01 2014-01-01 false Definitions concerning traffic signal modules and pedestrian modules. 431.222 Section 431.222 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY...

  13. 10 CFR 431.222 - Definitions concerning traffic signal modules and pedestrian modules.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... (or hydraulic) characteristics that affect energy consumption, energy efficiency, water consumption... 10 Energy 3 2012-01-01 2012-01-01 false Definitions concerning traffic signal modules and pedestrian modules. 431.222 Section 431.222 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY...

  14. Design of a composite filter realizable on practical spatial light modulators

    NASA Technical Reports Server (NTRS)

    Rajan, P. K.; Ramakrishnan, Ramachandran

    1994-01-01

    Hybrid optical correlator systems use two spatial light modulators (SLMs), one at the input plane and the other at the filter plane. Currently available SLMs, such as the deformable mirror device (DMD) and liquid crystal television (LCTV) SLMs, exhibit arbitrarily constrained operating characteristics. Pattern recognition filters designed under the assumption that the SLMs have ideal operating characteristics may not behave as expected when implemented on DMD or LCTV SLMs. Therefore, it is necessary to incorporate the SLM constraints into the design of the filters. In this report, an iterative method is developed for the design of an unconstrained minimum average correlation energy (MACE) filter. Then, using this algorithm, a new approach is developed for the design of an SLM-constrained distortion-invariant filter in the presence of the input SLM. Two different optimization algorithms are used to maximize the objective function during filter synthesis, one based on the simplex method and the other based on the Hooke and Jeeves method. Also, the simulated annealing based filter design algorithm proposed by Khan and Rajan is refined and improved. The performance of the filter is evaluated in terms of its recognition/discrimination capabilities using computer simulations, and the results are compared with a simulated annealing optimization based MACE filter. The filters are designed for different LCTV SLM operating characteristics and the correlation responses are compared. The distortion tolerance and false-class image discrimination qualities of the filter are comparable to those of the simulated annealing based filter, but the new filter design takes about 1/6 of the computer time taken by the simulated annealing filter design.

  15. Programmable bandwidth management in software-defined EPON architecture

    NASA Astrophysics Data System (ADS)

    Li, Chengjun; Guo, Wei; Wang, Wei; Hu, Weisheng; Xia, Ming

    2016-07-01

    This paper proposes a software-defined EPON architecture which replaces the hardware-implemented DBA module with a reprogrammable DBA module. The DBA module allows pluggable bandwidth allocation algorithms among multiple ONUs, adaptive to traffic profiles and network states. We also introduce a bandwidth management scheme executed at the controller to manage the customized DBA algorithms for all data queues of the ONUs. Our performance investigation verifies the effectiveness of this new EPON architecture, and numerical results show that software-defined EPONs can achieve lower traffic delay and provide better support for service differentiation in comparison with traditional EPONs.

  16. Image recombination transform algorithm for superresolution structured illumination microscopy

    PubMed Central

    Zhou, Xing; Lei, Ming; Dan, Dan; Yao, Baoli; Yang, Yanlong; Qian, Jia; Chen, Guangde; Bianco, Piero R.

    2016-01-01

    Structured illumination microscopy (SIM) is an attractive choice for fast superresolution imaging. The generation of structured illumination patterns made by interference of laser beams is broadly employed to obtain high modulation depth of patterns, while the polarizations of the laser beams must be elaborately controlled to guarantee the high contrast of interference intensity, which brings a more complex configuration for the polarization control. The emerging pattern projection strategy is much more compact, but the modulation depth of patterns is deteriorated by the optical transfer function of the optical system, especially in high spatial frequency near the diffraction limit. Therefore, the traditional superresolution reconstruction algorithm for interference-based SIM will suffer from many artifacts in the case of projection-based SIM that possesses a low modulation depth. Here, we propose an alternative reconstruction algorithm based on image recombination transform, which provides an alternative solution to address this problem even in a weak modulation depth. We demonstrated the effectiveness of this algorithm in the multicolor superresolution imaging of bovine pulmonary arterial endothelial cells in our developed projection-based SIM system, which applies a computer controlled digital micromirror device for fast fringe generation and multicolor light-emitting diodes for illumination. The merit of the system incorporated with the proposed algorithm allows for a low excitation intensity fluorescence imaging even less than 1 W/cm2, which is beneficial for the long-term, in vivo superresolved imaging of live cells and tissues. PMID:27653935

  17. Implementation of accelerometer sensor module and fall detection monitoring system based on wireless sensor network.

    PubMed

    Lee, Youngbum; Kim, Jinkwon; Son, Muntak; Lee, Myoungho

    2007-01-01

    This research implements a wireless accelerometer sensor module and an algorithm to determine the wearer's posture, activity, and falls. The wireless accelerometer sensor module uses the ADXL202, a 2-axis accelerometer sensor (Analog Devices). Using a wireless RF module, the sensor module transmits the accelerometer signal, which is displayed in the 'Acceloger' viewer program on a PC. The ADL algorithm determines posture, activity, and falls: activity is determined from the AC component of the accelerometer signal and posture from the DC component. The recognized activities and postures include standing, sitting, lying, walking, and running. In an experiment with 30 subjects, the performance of the implemented algorithm was assessed, and the detection rate for postures and motions was calculated for each subject. Finally, a monitoring system for the subjects' postures, motions, and falls was implemented using a wireless sensor network in an experimental space, and the fall detection rate was calculated in a simulation experiment with 30 subjects, each performing 4 kinds of activity 3 times. In conclusion, this system can be applied to activity monitoring and fall detection for patients and the elderly, as well as to exercise measurement and pattern analysis for athletes, and it can also be used for exercise training and entertainment for the general public.
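
    As a rough illustration of the AC/DC decomposition described above, the following sketch separates a 2-axis accelerometer window into a gravity (posture) component and a motion (activity) component and applies simple thresholds; the threshold values and the posture mapping are assumptions, not the values used in the study.

        import numpy as np

        def classify_window(acc, act_thresh=0.3, fall_thresh=2.5):
            # acc: array of shape (n_samples, 2), accelerations in g.
            dc = acc.mean(axis=0)                    # DC component: gravity / posture
            ac = acc - dc                            # AC component: motion / activity
            activity = np.linalg.norm(ac, axis=1).std()
            magnitude = np.linalg.norm(acc, axis=1)
            if magnitude.max() > fall_thresh:        # large transient: possible fall
                return "fall"
            if activity > act_thresh:                # strong AC content: walking/running
                return "moving"
            # posture from the orientation of the gravity vector (axis mapping assumed)
            return "upright" if abs(dc[0]) > abs(dc[1]) else "lying"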

  18. Simulation and optimization of an experimental membrane wastewater treatment plant using computational intelligence methods.

    PubMed

    Ludwig, T; Kern, P; Bongards, M; Wolf, C

    2011-01-01

    The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.

  19. Simultaneous integrated boost to intraprostatic lesions using different energy levels of intensity-modulated radiotherapy and volumetric-arc therapy

    PubMed Central

    Sonmez, S; Erbay, G; Guler, O C; Arslan, G

    2014-01-01

    Objective: This study compared the dosimetry of volumetric-arc therapy (VMAT) and intensity-modulated radiotherapy (IMRT) with a dynamic multileaf collimator using the Monte Carlo algorithm in the treatment of prostate cancer with and without simultaneous integrated boost (SIB) at different energy levels. Methods: The data of 15 biopsy-proven prostate cancer patients were evaluated. The prescribed dose was 78 Gy to the planning target volume (PTV78) including the prostate and seminal vesicles and 86 Gy (PTV86) in 39 fractions to the intraprostatic lesion, which was delineated by MRI or MR-spectroscopy. Results: PTV dose homogeneity was better for IMRT than VMAT at all energy levels for both PTV78 and PTV86. Lower rectum doses (V30–V50) were significantly higher with SIB compared with PTV78 plans in both IMRT and VMAT plans at all energy levels. The bladder doses at high dose level (V60–V80) were significantly higher in IMRT plans with SIB at all energy levels compared with PTV78 plans, but no significant difference was observed in VMAT plans. VMAT plans resulted in a significant decrease in the mean monitor units (MUs) for 6, 10, and 15 MV energy levels both in plans with and those without SIB. Conclusion: Dose escalation to intraprostatic lesions with 86 Gy is safe without causing serious increase in organs at risk (OARs) doses. VMAT is advantageous in sparing OARs and requiring less MU than IMRT. Advances in knowledge: VMAT with SIB to intraprostatic lesion is a feasible method in treating prostate cancer. Additionally, no dosimetric advantage of higher energy is observed. PMID:24319009

  20. Stokes space modulation format classification based on non-iterative clustering algorithm for coherent optical receivers.

    PubMed

    Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui

    2017-02-06

    A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Proof-of-concept experiments are finally implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
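
    The two clustering parameters are not spelled out in the abstract; they resemble the local density and separation distance of density-peak clustering, so a hedged sketch of that computation over Stokes-space samples might look as follows (the cutoff distance d_c and the use of a Euclidean metric are assumptions).

        import numpy as np

        def density_peak_parameters(points, d_c):
            # points: (n, 3) array of Stokes-space samples.
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            rho = (d < d_c).sum(axis=1) - 1          # local density within cutoff d_c
            delta = np.empty(len(points))
            for i in range(len(points)):
                denser = np.where(rho > rho[i])[0]   # points with higher density
                delta[i] = d[i, denser].min() if denser.size else d[i].max()
            return rho, delta

        # Samples with both large rho and large delta act as cluster centres; counting
        # them gives the number of constellation clusters, which indicates the format.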

  1. 160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA)

    PubMed Central

    Li, Isaac TS; Shum, Warren; Truong, Kevin

    2007-01-01

    Background: To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. Results: In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. Conclusion: This design of FPGA-accelerated hardware offers a promising new direction for seeking computational improvements in genomic database searching. PMID:17555593

  2. 160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA).

    PubMed

    Li, Isaac T S; Shum, Warren; Truong, Kevin

    2007-06-07

    To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. This design of FPGA-accelerated hardware offers a promising new direction for seeking computational improvements in genomic database searching.
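
    For reference against the hardware cell module described above, a plain-software version of the Smith-Waterman recurrence with a linear gap penalty is sketched below; the scoring values are illustrative and affine gap penalties are omitted.

        def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
            rows, cols = len(a) + 1, len(b) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    H[i][j] = max(0,                      # local alignment floor
                                  H[i - 1][j - 1] + s,    # diagonal: (mis)match
                                  H[i - 1][j] + gap,      # gap in sequence b
                                  H[i][j - 1] + gap)      # gap in sequence a
                    best = max(best, H[i][j])
            return best

        # Example: smith_waterman_score("ACACACTA", "AGCACACA") returns the optimal
        # local alignment score; the FPGA grid evaluates these cells in parallel.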

  3. Energy modulated electron therapy: Design, implementation, and evaluation of a novel method of treatment planning and delivery

    NASA Astrophysics Data System (ADS)

    Al-Yahya, Khalid

    Energy modulated electron therapy (EMET) is a promising treatment modality that has the fundamental capabilities to enhance the treatment planning and delivery for superficially located targets. Although it offers advantages over x-ray intensity modulated radiation therapy (IMRT), EMET has not been implemented to the same level of accuracy, automation, and clinical routine as its x-ray counterpart. This lack of implementation is attributed to the absence of a remotely automated beam shaping system as well as the limited dosimetric accuracy of clinical electron pencil beam algorithms in the presence of beam modifiers and tissue heterogeneities. In this study, we present a novel technique for treatment planning and delivery of EMET. The delivery is achieved using a prototype of an automated "few leaf electron collimator" (FLEC). It consists of four copper leaves driven by stepper motors which are synchronized with the x-ray jaws in order to form a series of collimated rectangular openings or "fieldlets". Based on Monte Carlo studies, the FLEC has been designed to serve as an accessory tool to the current accelerator equipment. The FLEC was constructed and its operation was fully automated and integrated with the accelerator through an in-house assembled control unit. The control unit is a portable computer system accompanied by customized software that delivers EMET plans after acquiring them from the optimization station. EMET plans are produced based on dose-volume constraints that employ Monte Carlo pre-generated, patient-specific kernels, which are utilized by an in-house developed optimization algorithm. The structure of the optimization software is demonstrated. Using Monte Carlo techniques to calculate dose allows for accurate modeling of the collimation system as well as the patient's heterogeneous geometry, and takes their impact on the optimization into account. The Monte Carlo calculations were validated by comparing them against output measurements with an ionization chamber. Comparisons with measurements using nearly energy-independent radiochromic films were performed to confirm the Monte Carlo calculation accuracy for 1-D and 2-D dose distributions. We investigated the clinical significance of EMET on cancer sites that are inherently difficult to plan with IMRT. Several parameters were used to analyze the treatment plans, and they show that EMET provides significant overall improvements over IMRT.

  4. Unsupervised, Robust Estimation-based Clustering for Multispectral Images

    NASA Technical Reports Server (NTRS)

    Netanyahu, Nathan S.

    1997-01-01

    To prepare for the challenge of handling the archiving and querying of terabyte-sized scientific spatial databases, the NASA Goddard Space Flight Center's Applied Information Sciences Branch (AISB, Code 935) developed a number of characterization algorithms that rely on supervised clustering techniques. The research reported upon here has been aimed at continuing the evolution of some of these supervised techniques, namely the neural network and decision tree-based classifiers, plus extending the approach to incorporate unsupervised clustering algorithms, such as those based on robust estimation (RE) techniques. The algorithms developed under this task should be suited for use by the Intelligent Information Fusion System (IIFS) metadata extraction modules, and as such these algorithms must be fast, robust, and anytime in nature. Finally, so that the planner/scheduler module of the IIFS can oversee the use and execution of these algorithms, all information required by the planner/scheduler must be provided to the IIFS development team to ensure the timely integration of these algorithms into the overall system.

  5. CytoCluster: A Cytoscape Plugin for Cluster Analysis and Visualization of Biological Networks.

    PubMed

    Li, Min; Li, Dongyan; Tang, Yu; Wu, Fangxiang; Wang, Jianxin

    2017-08-31

    Nowadays, cluster analysis of biological networks has become one of the most important approaches to identifying functional modules as well as predicting protein complexes and network biomarkers. Furthermore, the visualization of clustering results is crucial to display the structure of biological networks. Here we present CytoCluster, a cytoscape plugin integrating six clustering algorithms, HC-PIN (Hierarchical Clustering algorithm in Protein Interaction Networks), OH-PIN (identifying Overlapping and Hierarchical modules in Protein Interaction Networks), IPCA (Identifying Protein Complex Algorithm), ClusterONE (Clustering with Overlapping Neighborhood Expansion), DCU (Detecting Complexes based on Uncertain graph model), IPC-MCE (Identifying Protein Complexes based on Maximal Complex Extension), and BinGO (the Biological networks Gene Ontology) function. Users can select different clustering algorithms according to their requirements. The main function of these six clustering algorithms is to detect protein complexes or functional modules. In addition, BinGO is used to determine which Gene Ontology (GO) categories are statistically overrepresented in a set of genes or a subgraph of a biological network. CytoCluster can be easily expanded, so that more clustering algorithms and functions can be added to this plugin. Since it was created in July 2013, CytoCluster has been downloaded more than 9700 times in the Cytoscape App store and has already been applied to the analysis of different biological networks. CytoCluster is available from http://apps.cytoscape.org/apps/cytocluster.

  6. CytoCluster: A Cytoscape Plugin for Cluster Analysis and Visualization of Biological Networks

    PubMed Central

    Li, Min; Li, Dongyan; Tang, Yu; Wang, Jianxin

    2017-01-01

    Nowadays, cluster analysis of biological networks has become one of the most important approaches to identifying functional modules as well as predicting protein complexes and network biomarkers. Furthermore, the visualization of clustering results is crucial to display the structure of biological networks. Here we present CytoCluster, a cytoscape plugin integrating six clustering algorithms, HC-PIN (Hierarchical Clustering algorithm in Protein Interaction Networks), OH-PIN (identifying Overlapping and Hierarchical modules in Protein Interaction Networks), IPCA (Identifying Protein Complex Algorithm), ClusterONE (Clustering with Overlapping Neighborhood Expansion), DCU (Detecting Complexes based on Uncertain graph model), IPC-MCE (Identifying Protein Complexes based on Maximal Complex Extension), and BinGO (the Biological networks Gene Ontology) function. Users can select different clustering algorithms according to their requirements. The main function of these six clustering algorithms is to detect protein complexes or functional modules. In addition, BinGO is used to determine which Gene Ontology (GO) categories are statistically overrepresented in a set of genes or a subgraph of a biological network. CytoCluster can be easily expanded, so that more clustering algorithms and functions can be added to this plugin. Since it was created in July 2013, CytoCluster has been downloaded more than 9700 times in the Cytoscape App store and has already been applied to the analysis of different biological networks. CytoCluster is available from http://apps.cytoscape.org/apps/cytocluster. PMID:28858211

  7. Automatic Layout Design for Power Module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ning, Puqi; Wang, Fei; Ngo, Khai

    The layout of power modules is one of the key points in power module design, especially at high power densities, where couplings are increased. In this paper, an automatic design process using a genetic algorithm is presented along with a design example. Some practical considerations and implementations in the optimization of module layout design are introduced.
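
    The record does not detail the genetic operators, so the following is only a generic genetic-algorithm loop of the kind such a layout optimizer could use; fitness, random_layout, crossover, and mutate are hypothetical problem-specific callables (e.g., fitness might penalize parasitic coupling and footprint).

        import random

        def genetic_layout_search(fitness, random_layout, crossover, mutate,
                                  pop_size=50, generations=200, elite=2):
            population = [random_layout() for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=fitness)             # lower fitness = better layout
                next_gen = population[:elite]            # keep the best layouts unchanged
                while len(next_gen) < pop_size:
                    p1, p2 = random.sample(population[:pop_size // 2], 2)
                    next_gen.append(mutate(crossover(p1, p2)))
                population = next_gen
            return min(population, key=fitness)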

  8. Focusing light through scattering media by polarization modulation based generalized digital optical phase conjugation

    NASA Astrophysics Data System (ADS)

    Yang, Jiamiao; Shen, Yuecheng; Liu, Yan; Hemphill, Ashton S.; Wang, Lihong V.

    2017-11-01

    Optical scattering prevents light from being focused through thick biological tissue at depths greater than ∼1 mm. To break this optical diffusion limit, digital optical phase conjugation (DOPC) based wavefront shaping techniques are being actively developed. Previous DOPC systems employed spatial light modulators that modulated either the phase or the amplitude of the conjugate light field. Here, we achieve optical focusing through scattering media by using polarization modulation based generalized DOPC. First, we describe an algorithm to extract the polarization map from the measured scattered field. Then, we validate the algorithm through numerical simulations and find that the focusing contrast achieved by polarization modulation is similar to that achieved by phase modulation. Finally, we build a system using an inexpensive twisted nematic liquid crystal based spatial light modulator (SLM) and experimentally demonstrate light focusing through 3-mm-thick chicken breast tissue. Since polarization modulation based SLMs are widely used in displays and offer ever higher pixel counts with the prevalence of 4K displays, they are inexpensive and valuable devices for wavefront shaping.

  9. 10 CFR 431.224 - Uniform test method for the measurement of energy consumption for traffic signal modules and...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Uniform test method for the measurement of energy consumption for traffic signal modules and pedestrian modules. 431.224 Section 431.224 Energy DEPARTMENT OF... measurement of energy consumption for traffic signal modules and pedestrian modules. (a) Scope. This section...

  10. A lateral guidance algorithm to reduce the post-aerobraking burn requirements for a lift-modulated orbital transfer vehicle. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Herman, G. C.

    1986-01-01

    A lateral guidance algorithm which controls the location of the line of intersection between the actual and desired orbital planes (the hinge line) is developed for the aerobraking phase of a lift-modulated orbital transfer vehicle. The on-board targeting algorithm associated with this lateral guidance algorithm is simple and concise which is very desirable since computation time and space are limited on an on-board flight computer. A variational equation which describes the movement of the hinge line is derived. Simple relationships between the plane error, the desired hinge line position, the position out-of-plane error, and the velocity out-of-plane error are found. A computer simulation is developed to test the lateral guidance algorithm for a variety of operating conditions. The algorithm does reduce the total burn magnitude needed to achieve the desired orbit by allowing the plane correction and perigee-raising burn to be combined in a single maneuver. The algorithm performs well under vacuum perigee dispersions, pot-hole density disturbance, and thick atmospheres. The results for many different operating conditions are presented.

  11. Design and Implementation of a Smart Home System Using Multisensor Data Fusion Technology.

    PubMed

    Hsu, Yu-Liang; Chou, Po-Huan; Chang, Hsing-Cheng; Lin, Shyan-Lung; Yang, Shih-Chin; Su, Heng-Yi; Chang, Chih-Chien; Cheng, Yuan-Sheng; Kuo, Yu-Chen

    2017-07-15

    This paper aims to develop a multisensor data fusion technology-based smart home system by integrating wearable intelligent technology, artificial intelligence, and sensor fusion technology. We have developed the following three systems to create an intelligent smart home environment: (1) a wearable motion sensing device to be placed on residents' wrists and its corresponding 3D gesture recognition algorithm to implement a convenient automated household appliance control system; (2) a wearable motion sensing device mounted on a resident's feet and its indoor positioning algorithm to realize an effective indoor pedestrian navigation system for smart energy management; (3) a multisensor circuit module and an intelligent fire detection and alarm algorithm to realize a home safety and fire detection system. In addition, an intelligent monitoring interface is developed to provide in real-time information about the smart home system, such as environmental temperatures, CO concentrations, communicative environmental alarms, household appliance status, human motion signals, and the results of gesture recognition and indoor positioning. Furthermore, an experimental testbed for validating the effectiveness and feasibility of the smart home system was built and verified experimentally. The results showed that the 3D gesture recognition algorithm could achieve recognition rates for automated household appliance control of 92.0%, 94.8%, 95.3%, and 87.7% by the 2-fold cross-validation, 5-fold cross-validation, 10-fold cross-validation, and leave-one-subject-out cross-validation strategies. For indoor positioning and smart energy management, the distance accuracy and positioning accuracy were around 0.22% and 3.36% of the total traveled distance in the indoor environment. For home safety and fire detection, the classification rate achieved 98.81% accuracy for determining the conditions of the indoor living environment.

  12. Design and Implementation of a Smart Home System Using Multisensor Data Fusion Technology

    PubMed Central

    Chou, Po-Huan; Chang, Hsing-Cheng; Lin, Shyan-Lung; Yang, Shih-Chin; Su, Heng-Yi; Chang, Chih-Chien; Cheng, Yuan-Sheng; Kuo, Yu-Chen

    2017-01-01

    This paper aims to develop a multisensor data fusion technology-based smart home system by integrating wearable intelligent technology, artificial intelligence, and sensor fusion technology. We have developed the following three systems to create an intelligent smart home environment: (1) a wearable motion sensing device to be placed on residents’ wrists and its corresponding 3D gesture recognition algorithm to implement a convenient automated household appliance control system; (2) a wearable motion sensing device mounted on a resident’s feet and its indoor positioning algorithm to realize an effective indoor pedestrian navigation system for smart energy management; (3) a multisensor circuit module and an intelligent fire detection and alarm algorithm to realize a home safety and fire detection system. In addition, an intelligent monitoring interface is developed to provide in real-time information about the smart home system, such as environmental temperatures, CO concentrations, communicative environmental alarms, household appliance status, human motion signals, and the results of gesture recognition and indoor positioning. Furthermore, an experimental testbed for validating the effectiveness and feasibility of the smart home system was built and verified experimentally. The results showed that the 3D gesture recognition algorithm could achieve recognition rates for automated household appliance control of 92.0%, 94.8%, 95.3%, and 87.7% by the 2-fold cross-validation, 5-fold cross-validation, 10-fold cross-validation, and leave-one-subject-out cross-validation strategies. For indoor positioning and smart energy management, the distance accuracy and positioning accuracy were around 0.22% and 3.36% of the total traveled distance in the indoor environment. For home safety and fire detection, the classification rate achieved 98.81% accuracy for determining the conditions of the indoor living environment. PMID:28714884

  13. Research on virtual network load balancing based on OpenFlow

    NASA Astrophysics Data System (ADS)

    Peng, Rong; Ding, Lei

    2017-08-01

    Networks based on OpenFlow technology separate the control module from the data forwarding module. Deploying a load balancing strategy globally through the network view of the control plane is fast and highly efficient. This paper proposes a Weighted Round-Robin Scheduling algorithm for virtual networks and an OpenFlow-based load balancing plan for server load, taking into account the load of the service nodes and the distribution of load balancing tasks.
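
    As a minimal sketch of the scheduling idea (not the paper's controller code), a weighted round-robin cycle over service nodes can be built as follows; node names and weights are placeholders.

        from itertools import cycle

        def weighted_round_robin(nodes):
            # nodes: list of (name, weight); heavier nodes appear more often per cycle.
            return cycle([name for name, weight in nodes for _ in range(weight)])

        schedule = weighted_round_robin([("server_a", 3), ("server_b", 1)])
        assignments = [next(schedule) for _ in range(8)]
        # ['server_a', 'server_a', 'server_a', 'server_b',
        #  'server_a', 'server_a', 'server_a', 'server_b']
        # An OpenFlow controller could install each new flow towards next(schedule).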

  14. Aerodynamic parameter estimation via Fourier modulating function techniques

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1995-01-01

    Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear-time-varying differential system models.

  15. Phase retrieval by coherent modulation imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Fucai; Chen, Bo; Morrison, Graeme R.

    Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging (CDI) is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit-wave. This coherent modulation imaging (CMI) method removes inherent ambiguities of CDI and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. CMI provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free electron lasers.

  16. Phase retrieval by coherent modulation imaging

    DOE PAGES

    Zhang, Fucai; Chen, Bo; Morrison, Graeme R.; ...

    2016-11-18

    Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging (CDI) is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit-wave. This coherent modulation imaging (CMI) method removes inherent ambiguities of CDI and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. CMI provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free electron lasers.

  17. A Flexible VHDL Floating Point Module for Control Algorithm Implementation in Space Applications

    NASA Astrophysics Data System (ADS)

    Padierna, A.; Nicoleau, C.; Sanchez, J.; Hidalgo, I.; Elvira, S.

    2012-08-01

    The implementation of control loops for space applications is an area with great potential. However, the characteristics of such systems, such as their wide dynamic range of numeric values, make fixed-point algorithms inadequate. Because the generic chips available for processing floating-point data are, in general, not qualified to operate in space environments, and using an IP module in a space-qualified FPGA/ASIC is not viable due to the small number of logic cells available in these devices, it is necessary to find an alternative. For these reasons, this paper presents a VHDL Floating Point Module. This proposal allows floating-point algorithms to be designed and executed with acceptable occupancy in FPGAs/ASICs qualified for space environments.

  18. Method and algorithm for efficient calibration of compressive hyperspectral imaging system based on a liquid crystal retarder

    NASA Astrophysics Data System (ADS)

    Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian

    2017-09-01

    Recently we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI) [1]. This system consists of a single Liquid Crystal (LC) phase retarder as a spectral modulator and a gray scale sensor array to capture a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with the Compressive Sensing (CS) guidelines and applying appropriate algorithms we demonstrated reconstruction of spectral (hyper/ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers, therefore it is prone to imperfections and spatial nonuniformity. In this work, we present the study of this nonuniformity and present a mathematical algorithm that allows the inference of the spectral transmission over the entire cell area from only a few calibration measurements.

  19. A fast algorithm for solving a linear feasibility problem with application to Intensity-Modulated Radiation Therapy.

    PubMed

    Herman, Gabor T; Chen, Wei

    2008-03-01

    The goal of Intensity-Modulated Radiation Therapy (IMRT) is to deliver sufficient doses to tumors to kill them, but without causing irreparable damage to critical organs. This requirement can be formulated as a linear feasibility problem. The sequential (i.e., iteratively treating the constraints one after another in a cyclic fashion) algorithm ART3 is known to find a solution to such problems in a finite number of steps, provided that the feasible region is full dimensional. We present a faster algorithm called ART3+. The idea of ART3+ is to avoid unnecessary checks on constraints that are likely to be satisfied. The superior performance of the new algorithm is demonstrated by mathematical experiments inspired by the IMRT application.
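
    ART3 and ART3+ themselves involve reflections and constraint-skipping logic that the abstract does not reproduce, but the underlying idea of treating interval (dose) constraints one after another can be illustrated by a simplified cyclic projection onto hyperslabs; this sketch is an assumption-laden stand-in, not the published algorithm.

        import numpy as np

        def cyclic_slab_projection(A, lower, upper, x0, sweeps=100):
            # Feasibility problem: lower[i] <= A[i] @ x <= upper[i] for every i.
            x = np.asarray(x0, dtype=float).copy()
            for _ in range(sweeps):
                for a, lo, hi in zip(A, lower, upper):
                    v = a @ x
                    nrm2 = a @ a
                    if v < lo:                       # below the slab: project upward
                        x += (lo - v) / nrm2 * a
                    elif v > hi:                     # above the slab: project downward
                        x += (hi - v) / nrm2 * a
            return x                                 # approximately feasible beam weights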

  20. Design and evaluation of basic standard encryption algorithm modules using nanosized complementary metal oxide semiconductor molecular circuits

    NASA Astrophysics Data System (ADS)

    Masoumi, Massoud; Raissi, Farshid; Ahmadian, Mahmoud; Keshavarzi, Parviz

    2006-01-01

    We propose that the recently introduced semiconductor-nanowire-molecular architecture (CMOL) is an optimum platform for realizing encryption algorithms. The basic modules for the advanced encryption standard algorithm (Rijndael) have been designed using the CMOL architecture. The performance of this design has been evaluated with respect to chip area and speed. It is observed that CMOL provides considerable improvement over implementation with regular CMOS architecture even with a 20% defect rate. Pseudo-optimum gate placement and routing are provided for the Rijndael building blocks, and the possibility of designing high-speed, attack-tolerant, long-key encryption is discussed.

  1. Proposal of an Algorithm to Synthesize Music Suitable for Dance

    NASA Astrophysics Data System (ADS)

    Morioka, Hirofumi; Nakatani, Mie; Nishida, Shogo

    This paper proposes an algorithm for synthesizing music suitable for the emotions in moving pictures. Our goal is to support multimedia content creation: web page design, animation films, and so on. Here we adopt a human dance as the moving picture to examine the availability of our method, because we think a dance image has a high affinity with music. The algorithm is composed of three modules. The first is the module for computing emotions from an input dance image, the second computes emotions from music in the database, and the last selects music suitable for the input dance via an interface of emotion.

  2. A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node

    PubMed Central

    Cai, Zhipeng; Zou, Fumin; Zhang, Xiangyu

    2018-01-01

    Energy efficiency is still the obstacle to long-term, real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with this challenge in wireless ECG applications. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using ECG signals sampled by the proposed node during the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially without distortion, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption. PMID:29599945

  3. A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node.

    PubMed

    Luo, Kan; Cai, Zhipeng; Du, Keqin; Zou, Fumin; Zhang, Xiangyu; Li, Jianqing

    2018-01-01

    Energy efficiency is still the obstacle to long-term, real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with this challenge in wireless ECG applications. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using ECG signals sampled by the proposed node during the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially without distortion, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.
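
    To make the compression step concrete, a sparse binary sensing matrix of the kind mentioned above can be generated and applied as in the sketch below; the frame length, number of measurements, and column sparsity are assumed values, and the BSBL/DCT recovery on the receiver side is not shown.

        import numpy as np

        def sparse_binary_matrix(m, n, ones_per_column=2, seed=0):
            # Each column holds a few randomly placed ones, so y = phi @ x
            # reduces to additions, which is cheap on the sensor node.
            rng = np.random.default_rng(seed)
            phi = np.zeros((m, n))
            for j in range(n):
                rows = rng.choice(m, size=ones_per_column, replace=False)
                phi[rows, j] = 1.0
            return phi

        phi = sparse_binary_matrix(256, 512)     # compress 512 samples to 256 values
        # y = phi @ ecg_frame                    # transmitted over Bluetooth; recovery
        #                                        # (e.g., BSBL with a DCT dictionary)
        #                                        # runs on the receiving device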

  4. Prior knowledge based mining functional modules from Yeast PPI networks with gene ontology

    PubMed Central

    2010-01-01

    Background: In the literature, there are fruitful algorithmic approaches for identifying functional modules in protein-protein interaction (PPI) networks. Because large-scale interaction data are accumulating for multiple organisms while many interactions are not yet recorded in existing PPI databases, there is still a need to design novel computational techniques that can correctly and scalably analyze interaction data sets. Indeed, a number of large-scale biological data sets provide indirect evidence for protein-protein interaction relationships. Results: The main aim of this paper is to present a prior knowledge based mining strategy to identify functional modules from PPI networks with the aid of Gene Ontology. A higher similarity value in Gene Ontology means that two gene products are more functionally related to each other, so it is better to group such gene products into one functional module. We study (i) how to encode the functional pairs into the existing PPI networks; and (ii) how to use these functional pairs as pairwise constraints to supervise the existing functional module identification algorithms. A topology-based modularity metric and the complex annotations in MIPS are used to evaluate the functional modules identified by these two approaches. Conclusions: The experimental results on Yeast PPI networks and GO have shown that the prior knowledge based learning methods perform better than the existing algorithms. PMID:21172053

  5. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform

    DTIC Science & Technology

    2018-01-01

    ARL-TR-8270, January 2018, US Army Research Laboratory: "An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform," by Kwok F. Tom, Sensors and Electron Devices Directorate; reporting period 1 October 2016-30 September 2017.

  6. Control algorithms for dynamic windows for residential buildings

    DOE PAGES

    Firlag, Szymon; Yazdanian, Mehrangiz; Curcija, Charlie; ...

    2015-09-30

    This study analyzes the influence of control algorithms for dynamic windows on energy consumption, the number of hours of retracted shades during daylight, and shade operations. Five different control algorithms - heating/cooling, simple rules, perfect citizen, heat flow, and predictive weather - were developed and compared. The performance of a typical residential building was modeled with EnergyPlus. The program Window was used to generate a Bi-Directional Scattering Distribution Function (BSDF) for two window configurations. The BSDF was exported to EnergyPlus using the IDF file format. The EMS feature in EnergyPlus was used to develop custom control algorithms. The calculations were made for four locations with diverse climates. The results showed that: (a) use of automated shading with the proposed control algorithms can reduce site energy by 11.6-13.0% and source (primary) energy by 20.1-21.6%; (b) the differences between algorithms in regard to energy savings are not large; (c) the differences between algorithms in regard to the number of hours of retracted shades are visible; (d) the control algorithms have a strong influence on shade operation, and oscillation of the shade can occur; (e) the additional energy consumption caused by the motor, sensors, and a small microprocessor in the analyzed case is very small.
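
    The individual control algorithms are only named in the abstract; as an illustrative stand-in for the "simple rules" family, a toy shade controller might look like the following, where the setpoints and irradiance threshold are placeholder values rather than those used in the EnergyPlus/EMS study.

        def shade_position(season, indoor_temp_c, solar_irradiance_w_m2,
                           heat_setpoint=20.0, cool_setpoint=24.0, irradiance_thresh=300.0):
            # Admit solar gains when heating is useful, block them when cooling.
            if season == "heating" or indoor_temp_c < heat_setpoint:
                return "retracted"                   # let solar gains in
            if indoor_temp_c > cool_setpoint and solar_irradiance_w_m2 > irradiance_thresh:
                return "deployed"                    # block solar gains
            return "retracted"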

  7. Intra-pulse modulation recognition using short-time ramanujan Fourier transform spectrogram

    NASA Astrophysics Data System (ADS)

    Ma, Xiurong; Liu, Dan; Shan, Yunlong

    2017-12-01

    Intra-pulse modulation recognition under negative signal-to-noise ratio (SNR) environment is a research challenge. This article presents a robust algorithm for the recognition of 5 types of radar signals with large variation range in the signal parameters in low SNR using the combination of the Short-time Ramanujan Fourier transform (ST-RFT) and pseudo-Zernike moments invariant features. The ST-RFT provides the time-frequency distribution features for 5 modulations. The pseudo-Zernike moments provide invariance properties that are able to recognize different modulation schemes on different parameter variation conditions from the ST-RFT spectrograms. Simulation results demonstrate that the proposed algorithm achieves the probability of successful recognition (PSR) of over 90% when SNR is above -5 dB with large variation range in the signal parameters: carrier frequency (CF) for all considered signals, hop size (HS) for frequency shift keying (FSK) signals, and the time-bandwidth product for Linear Frequency Modulation (LFM) signals.

  8. Electronics design of the airborne stabilized platform attitude acquisition module

    NASA Astrophysics Data System (ADS)

    Xu, Jiang; Wei, Guiling; Cheng, Yong; Li, Baolin; Bu, Hongyi; Wang, Hao; Zhang, Zhanwei; Li, Xingni

    2014-02-01

    We present the electronics design of an attitude acquisition module for an airborne stabilized platform. The design scheme, which is based on the integrated MEMS sensor ADIS16405, develops the attitude information processing algorithms and the hardware circuit. The hardware circuit, with a small volume of only 44.9 x 43.6 x 24.6 mm3, is lightweight, modular, and digital. The interface of the PC software combines a plane chart with a track line to receive and display the attitude information. The attitude calculation uses a Kalman filtering algorithm to improve the measurement accuracy of the module in dynamic environments.

  9. SSL: A software specification language

    NASA Technical Reports Server (NTRS)

    Austin, S. L.; Buckles, B. P.; Ryan, J. P.

    1976-01-01

    SSL (Software Specification Language) is a new formalism for the definition of specifications for software systems. The language provides a linear format for the representation of the information normally displayed in a two-dimensional module inter-dependency diagram. In comparing SSL to FORTRAN or ALGOL, it is found to be largely complementary to the algorithmic (procedural) languages. SSL is capable of representing explicitly module interconnections and global data flow, information which is deeply imbedded in the algorithmic languages. On the other hand, SSL is not designed to depict the control flow within modules. The SSL level of software design explicitly depicts intermodule data flow as a functional specification.

  10. FPGA Vision Data Architecture

    NASA Technical Reports Server (NTRS)

    Morfopoulos, Arin C.; Pham, Thang D.

    2013-01-01

    JPL has produced a series of FPGA (field programmable gate array) vision algorithms that were written with custom interfaces to get data in and out of each vision module. Each module has unique requirements on the data interface, and further vision modules are continually being developed, each with their own custom interfaces. Each memory module had also been designed for direct access to memory or to another memory module.

  11. ISLE (Image and Signal Processing LISP Environment) reference manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherwood, R.J.; Searfus, R.M.

    1990-01-01

    ISLE is a rapid prototyping system for performing image and signal processing. It is designed to meet the needs of a person developing image and signal processing algorithms in a research environment. The image and signal processing modules in ISLE form a very capable package in themselves. They also provide a rich environment for quickly and easily integrating user-written software modules into the package. ISLE is well suited to applications in which there is a need to develop a processing algorithm in an interactive manner. It is straightforward to develop an algorithm, load it into ISLE, apply the algorithm to an image or signal, display the results, then modify the algorithm and repeat the develop-load-apply-display cycle. ISLE consists of a collection of image and signal processing modules integrated into a cohesive package through a standard command interpreter. The ISLE developers elected to concentrate their effort on developing image and signal processing software rather than developing a command interpreter. A COMMON LISP interpreter was selected for the command interpreter because it already has the features desired in a command interpreter, it supports dynamic loading of modules for customization purposes, it supports run-time parameter and argument type checking, it is very well documented, and it is a commercially supported product. This manual is intended to be a reference manual for the ISLE functions. The functions are grouped into a number of categories and briefly discussed in the Function Summary chapter. The full descriptions of the functions and all their arguments are given in the Function Descriptions chapter. 6 refs.

  12. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1982-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  13. An improved algorithm for evaluating trellis phase codes

    NASA Technical Reports Server (NTRS)

    Mulligan, M. G.; Wilson, S. G.

    1984-01-01

    A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.

  14. Increasing signal processing sophistication in the calculation of the respiratory modulation of the photoplethysmogram (DPOP).

    PubMed

    Addison, Paul S; Wang, Rui; Uribe, Alberto A; Bergese, Sergio D

    2015-06-01

    DPOP (∆POP or Delta-POP) is a non-invasive parameter which measures the strength of respiratory modulations present in the pulse oximetry photoplethysmogram (pleth) waveform. It has been proposed as a non-invasive surrogate parameter for pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. Many groups have reported on the DPOP parameter and its correlation with PPV using various semi-automated algorithmic implementations. The study reported here demonstrates the performance gains made by adding increasingly sophisticated signal processing components to a fully automated DPOP algorithm. A DPOP algorithm was coded and its performance systematically enhanced through a series of code module alterations and additions. Each algorithm iteration was tested on data from 20 mechanically ventilated OR patients. Correlation coefficients and ROC curve statistics were computed at each stage. For the purposes of the analysis we split the data into a manually selected 'stable' region subset of the data containing relatively noise free segments and a 'global' set incorporating the whole data record. Performance gains were measured in terms of correlation against PPV measurements in OR patients undergoing controlled mechanical ventilation. Through increasingly advanced pre-processing and post-processing enhancements to the algorithm, the correlation coefficient between DPOP and PPV improved from a baseline value of R = 0.347 to R = 0.852 for the stable data set, and, correspondingly, R = 0.225 to R = 0.728 for the more challenging global data set. Marked gains in algorithm performance are achievable for manually selected stable regions of the signals using relatively simple algorithm enhancements. Significant additional algorithm enhancements, including a correction for low perfusion values, were required before similar gains were realised for the more challenging global data set.
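
    The core quantity behind the parameter is usually written as DPOP = (POPmax - POPmin) / ((POPmax + POPmin) / 2), evaluated over a respiratory cycle; the sketch below computes only this ratio from per-beat pleth pulse amplitudes and omits all of the pre- and post-processing enhancements that the study shows are needed on real data.

        import numpy as np

        def dpop(pulse_amplitudes):
            # pulse_amplitudes: per-beat pleth pulse amplitudes (POP) spanning at
            # least one respiratory cycle; returns DPOP in percent.
            pop = np.asarray(pulse_amplitudes, dtype=float)
            pop_max, pop_min = pop.max(), pop.min()
            return 100.0 * (pop_max - pop_min) / ((pop_max + pop_min) / 2.0)

        print(dpop([1.00, 0.95, 0.85, 0.80, 0.90, 1.02]))  # example beat series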

  15. Architecture and Implementation of OpenPET Firmware and Embedded Software

    PubMed Central

    Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; Peng, Qiyu; Choong, Woon-Seng

    2016-01-01

    OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the flexibility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics – a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method of the platform is adopted for the hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called the Support Board, where one Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from the eight Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules) to be processed mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration. PMID:27110034

  16. Expectation maximization for hard X-ray count modulation profiles

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.

    2013-07-01

    Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
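
    As a rough illustration of the approach described above, the following sketch implements a generic maximum-likelihood EM (multiplicative, positivity-preserving) update for count data together with the Cash statistic that the authors monitor as a stopping rule. The response matrix, the variable names, and the simplistic handling of the stopping criterion are assumptions; RHESSI's actual modulation-profile model is not reproduced.

```python
import numpy as np

def em_reconstruct(H, counts, n_iter=100):
    """Maximum-likelihood EM for count modulation profiles.

    H      : (n_counts, n_pixels) response matrix mapping image pixels to
             expected modulation-profile counts
    counts : observed counts, shape (n_counts,)
    The multiplicative update keeps the image non-negative at every step.
    """
    x = np.ones(H.shape[1])          # flat, strictly positive starting image
    norm = H.sum(axis=0)             # sensitivity of each pixel
    c_stat = np.inf
    for _ in range(n_iter):
        expected = H @ x             # predicted counts for the current image
        ratio = counts / np.maximum(expected, 1e-12)
        x *= (H.T @ ratio) / np.maximum(norm, 1e-12)

        # Cash statistic, often monitored as a regularizing stopping criterion
        c_stat = 2.0 * np.sum(expected - counts
                              + counts * np.log(np.maximum(counts, 1e-12)
                                                / np.maximum(expected, 1e-12)))
    return x, c_stat
```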

  17. Task-discriminative space-by-time factorization of muscle activity

    PubMed Central

    Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien

    2015-01-01

    Movement generation has been hypothesized to rely on a modular organization of muscle activity. Crucial to this hypothesis is the ability to perform reliably a variety of motor tasks by recruiting a limited set of modules and combining them in a task-dependent manner. Thus far, existing algorithms that extract putative modules of muscle activations, such as Non-negative Matrix Factorization (NMF), identify modular decompositions that maximize the reconstruction of the recorded EMG data. Typically, the functional role of the decompositions, i.e., task accomplishment, is only assessed a posteriori. However, as motor actions are defined in task space, we suggest that motor modules should be computed in task space too. In this study, we propose a new module extraction algorithm, named DsNM3F, that uses task information during the module identification process. DsNM3F extends our previous space-by-time decomposition method (the so-called sNM3F algorithm, which could assess task performance only after having computed modules) to identify modules gauging between two complementary objectives: reconstruction of the original data and reliable discrimination of the performed tasks. We show that DsNM3F recovers the task dependence of module activations more accurately than sNM3F. We also apply it to electromyographic signals recorded during performance of a variety of arm pointing tasks and identify spatial and temporal modules of muscle activity that are highly consistent with previous studies. DsNM3F achieves perfect task categorization without significant loss in data approximation when task information is available and generalizes as well as sNM3F when applied to new data. These findings suggest that the space-by-time decomposition of muscle activity finds robust task-discriminating modular representations of muscle activity and that the insertion of task discrimination objectives is useful for describing the task modulation of module recruitment. PMID:26217213

  18. Task-discriminative space-by-time factorization of muscle activity.

    PubMed

    Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien

    2015-01-01

    Movement generation has been hypothesized to rely on a modular organization of muscle activity. Crucial to this hypothesis is the ability to perform reliably a variety of motor tasks by recruiting a limited set of modules and combining them in a task-dependent manner. Thus far, existing algorithms that extract putative modules of muscle activations, such as Non-negative Matrix Factorization (NMF), identify modular decompositions that maximize the reconstruction of the recorded EMG data. Typically, the functional role of the decompositions, i.e., task accomplishment, is only assessed a posteriori. However, as motor actions are defined in task space, we suggest that motor modules should be computed in task space too. In this study, we propose a new module extraction algorithm, named DsNM3F, that uses task information during the module identification process. DsNM3F extends our previous space-by-time decomposition method (the so-called sNM3F algorithm, which could assess task performance only after having computed modules) to identify modules gauging between two complementary objectives: reconstruction of the original data and reliable discrimination of the performed tasks. We show that DsNM3F recovers the task dependence of module activations more accurately than sNM3F. We also apply it to electromyographic signals recorded during performance of a variety of arm pointing tasks and identify spatial and temporal modules of muscle activity that are highly consistent with previous studies. DsNM3F achieves perfect task categorization without significant loss in data approximation when task information is available and generalizes as well as sNM3F when applied to new data. These findings suggest that the space-by-time decomposition of muscle activity finds robust task-discriminating modular representations of muscle activity and that the insertion of task discrimination objectives is useful for describing the task modulation of module recruitment.
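
    For readers unfamiliar with the baseline that sNM3F and DsNM3F extend, the sketch below shows plain multiplicative-update NMF applied to an EMG matrix; it captures only the reconstruction objective, without the space-by-time structure or the task-discrimination term that distinguishes DsNM3F. All names and dimensions are illustrative.

```python
import numpy as np

def nmf_modules(emg, n_modules, n_iter=500, eps=1e-9):
    """Plain multiplicative-update NMF: emg (muscles x time) ~= W @ H.

    W : spatial modules (muscles x n_modules)
    H : temporal activations (n_modules x time)
    """
    rng = np.random.default_rng(0)
    n_muscles, n_time = emg.shape
    W = rng.random((n_muscles, n_modules))
    H = rng.random((n_modules, n_time))
    for _ in range(n_iter):
        H *= (W.T @ emg) / (W.T @ W @ H + eps)   # update activations
        W *= (emg @ H.T) / (W @ H @ H.T + eps)   # update modules
    return W, H
```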

  19. A Novel Dynamic Physical Layer Impairment-Aware Routing and Wavelength Assignment (PLI-RWA) Algorithm for Mixed Line Rate (MLR) Wavelength Division Multiplexed (WDM) Optical Networks

    NASA Astrophysics Data System (ADS)

    Iyer, Sridhar

    2016-12-01

    The ever-increasing global Internet traffic will inevitably lead to a serious upgrade of the current optical networks' capacity. The legacy infrastructure can be enhanced not only by increasing the capacity but also by adopting advanced modulation formats with increased spectral efficiency at higher data rates. In a transparent mixed-line-rate (MLR) optical network, different line rates, on different wavelengths, can coexist on the same fiber. Migration to data rates higher than 10 Gbps requires the implementation of phase modulation schemes. However, the co-existing on-off keying (OOK) channels cause critical physical layer impairments (PLIs) to the phase modulated channels, mainly due to cross-phase modulation (XPM), which in turn limits the network's performance. In order to mitigate this effect, a more sophisticated PLI-Routing and Wavelength Assignment (PLI-RWA) scheme needs to be adopted. In this paper, we investigate the critical impairment for each data rate and the way it affects the quality of transmission (QoT). In view of the aforementioned, we present a novel dynamic PLI-RWA algorithm for MLR optical networks. The proposed algorithm is compared through simulations with the shortest path and minimum hop routing schemes. The simulation results show that the performance of the proposed algorithm is better than that of the existing schemes.

  20. High-dynamic range imaging techniques based on both color-separation algorithms used in conventional graphic arts and the human visual perception modeling

    NASA Astrophysics Data System (ADS)

    Lo, Mei-Chun; Hsieh, Tsung-Hsien; Perng, Ruey-Kuen; Chen, Jiong-Qiao

    2010-01-01

    The aim of this research is to derive illuminant-independent HDR imaging modules that can optimally reconstruct, multispectrally, every color of concern in high-dynamic-range original images for preferable cross-media color reproduction applications. Each module, based on either a broadband or a multispectral approach, incorporates models of perceptual HDR tone mapping and device characterization. In this study, an xvYCC-format HDR digital camera was used to capture HDR scene images for testing. A tone-mapping module was derived based on a multiscale representation of the human visual system and used equations similar to the Michaelis-Menten photoreceptor adaptation equation. Additionally, an adaptive bilateral gamut-mapping algorithm, using a previously derived multiple converging-points approach, was incorporated with or without adaptive Un-sharp Masking (USM) to carry out the optimization of HDR image rendering. An LCD with the standard Adobe RGB (D65) color space was used as a soft-proofing platform to display/represent the HDR original RGB images and to evaluate both the rendition quality and the prediction performance of the derived modules. Another LCD with the standard sRGB color space was used to test the gamut-mapping algorithms integrated with the derived tone-mapping module.

  1. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications such as home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noise. Here, a novel algorithm for wireless ECG, the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm, is introduced. This algorithm first removes errors in the bit pattern of the received data, if they occurred during wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototype Bluetooth-based ECG module is used to capture ECG at different sampling rates and with the patient in different positions. This module transmits the ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and Savitzky-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any parameter where peaks are important for diagnostic purposes.
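
    A minimal sketch of the core idea of a peak-rejecting moving average is given below: smooth the signal everywhere except in a window around each detected R peak. The window sizes, the function name, and the assumption that baseline drift has already been removed are illustrative choices, not the published PRASMMA parameters.

```python
import numpy as np

def peak_rejecting_moving_average(ecg, peak_idx, half_width=20, window=9):
    """Smooth an ECG with a moving average everywhere except near QRS peaks.

    ecg        : 1-D array of ECG samples (baseline drift already removed)
    peak_idx   : indices of detected R peaks
    half_width : samples around each peak left untouched
    window     : moving-average length (odd)
    """
    protect = np.zeros(len(ecg), dtype=bool)
    for p in peak_idx:
        protect[max(0, p - half_width):p + half_width + 1] = True

    kernel = np.ones(window) / window
    smoothed = np.convolve(ecg, kernel, mode="same")

    # Keep the raw samples inside QRS regions, smoothed samples elsewhere.
    return np.where(protect, ecg, smoothed)
```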

  2. Container-code recognition system based on computer vision and deep neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    Automatic container-code recognition has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules: a detection module and a recognition module. The detection module applies both algorithms based on computer vision and neural networks, and generates a better detection result through their combination, avoiding the drawbacks of either method alone. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result that passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism can improve the performance of the neural networks at runtime. The proposed system achieves 93% overall recognition accuracy.

  3. Energy Conservation Curriculum for Secondary and Post-Secondary Students. Module 9: Human Comfort and Energy Conservation.

    ERIC Educational Resources Information Center

    Navarro Coll., Corsicana, TX.

    This module is the ninth in a series of eleven modules in an energy conservation curriculum for secondary and postsecondary vocational students. It is designed for use by itself or as part of a sequence of four modules on energy conservation in building construction and operation (see also modules 8, 10, and 11). The objective of this module is to…

  4. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  5. Compensation for electrical converter nonlinearities

    DOEpatents

    Perisic, Milun; Ransom, Ray M; Kajouke, Lateef A

    2013-11-19

    Systems and methods are provided for delivering energy from an input interface to an output interface. An electrical system includes an input interface, an output interface, an energy conversion module between the input interface and the output interface, an inductive element between the input interface and the energy conversion module, and a control module. The control module determines a compensated duty cycle control value for operating the energy conversion module to produce a desired voltage at the output interface and operates the energy conversion module to deliver energy to the output interface with a duty cycle that is influenced by the compensated duty cycle control value. The compensated duty cycle control value is influenced by the current through the inductive element and accounts for voltage across the switching elements of the energy conversion module.

  6. Energy-saving EPON Bandwidth Allocation Algorithm Supporting ONU's Sleep Mode

    NASA Astrophysics Data System (ADS)

    Zhang, Yinfa; Ren, Shuai; Liao, Xiaomin; Fang, Yuanyuan

    2014-09-01

    A new bandwidth allocation algorithm is presented that combines the merits of the IPACT algorithm and the cyclic DBA algorithm, building on a DBA algorithm that supports the ONU's sleep mode. Simulation results indicate that, compared with an ONU in normal mode, the ONU's sleep mode can save about 74% of the energy. The new algorithm has a smaller average packet delay and queue length in the upstream direction, while in the downstream direction its average packet delay is less than the polling cycle Tcycle and its average queue length is less than the product of Tcycle and the maximum link rate. The new algorithm achieves a better compromise between saving energy and ensuring quality of service.

  7. Intelligent Predictor of Energy Expenditure with the Use of Patch-Type Sensor Module

    PubMed Central

    Li, Meina; Kwak, Keun-Chang; Kim, Youn-Tae

    2012-01-01

    This paper is concerned with an intelligent predictor of energy expenditure (EE) using a developed patch-type sensor module for wireless monitoring of heart rate (HR) and movement index (MI). For this purpose, an intelligent predictor is designed by an advanced linguistic model (LM) with interval prediction based on fuzzy granulation, which can be realized by context-based fuzzy c-means (CFCM) clustering. The system components consist of a sensor board, the rubber case, and the communication module with a built-in analysis algorithm. This sensor is patched onto the user's chest to obtain physiological data in indoor and outdoor environments. Prediction performance was quantified by the root mean square error (RMSE) and was evaluated as the number of contexts and clusters increased from 2 to 6, respectively. Thirty participants were recruited from Chosun University to take part in this study. The data sets were recorded during normal walking, brisk walking, slow running, and jogging in an outdoor environment and treadmill running in an indoor environment, respectively. We randomly divided the data into a training set (60%) and a test set (40%) in the normalized space over 10 iterations. The training set is used for model construction, while the test set is used for model validation. The experimental results revealed that the prediction error on the treadmill running simulation was improved by about 51% and 12% in comparison to the conventional LM for the training and checking data sets, respectively. PMID:23202166

  8. Model-based optimization of near-field binary-pixelated beam shapers

    DOE PAGES

    Dorrer, C.; Hassett, J.

    2017-01-23

    The optimization of components that rely on spatially dithered distributions of transparent or opaque pixels and an imaging system with far-field filtering for transmission control is demonstrated. The binary-pixel distribution can be iteratively optimized to lower an error function that takes into account the design transmission and the characteristics of the required far-field filter. Simulations using a design transmission chosen in the context of high-energy lasers show that the beam-fluence modulation at an image plane can be reduced by a factor of 2, leading to performance similar to using a non-optimized spatial-dithering algorithm with pixels of size reduced by a factor of 2, without the additional fabrication complexity or cost. The optimization process preserves the statistical properties of the pixel distribution. Analysis shows that the optimized pixel distribution starting from a high-noise distribution defined by a random-draw algorithm should be more resilient to fabrication errors than the optimized pixel distributions starting from a low-noise, error-diffusion algorithm, while leading to similar beam-shaping performance. This is confirmed by experimental results obtained with various pixel distributions and induced fabrication errors.

  9. Reference-free automatic quality assessment of tracheoesophageal speech.

    PubMed

    Huang, Andy; Falk, Tiago H; Chan, Wai-Yip; Parsa, Vijay; Doyle, Philip

    2009-01-01

    Evaluation of the quality of tracheoesophageal (TE) speech using machines instead of human experts can enhance the voice rehabilitation process for patients who have undergone total laryngectomy and voice restoration. Towards the goal of devising a reference-free TE speech quality estimation algorithm, we investigate the efficacy of speech signal features that are used in standard telephone-speech quality assessment algorithms, in conjunction with a recently introduced speech modulation spectrum measure. Tests performed on two TE speech databases demonstrate that the modulation spectral measure and a subset of features in the standard ITU-T P.563 algorithm estimate TE speech quality with better correlation (up to 0.9) than previously proposed features.

  10. NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface.

    PubMed

    Ingebrigtsen, Trond S; Toxvaerd, Søren; Heilmann, Ole J; Schrøder, Thomas B; Dyre, Jeppe C

    2011-09-14

    An algorithm is derived for computer simulation of geodesics on the constant-potential-energy hypersurface of a system of N classical particles. First, a basic time-reversible geodesic algorithm is derived by discretizing the geodesic stationarity condition and implementing the constant-potential-energy constraint via standard Lagrangian multipliers. The basic NVU algorithm is tested by single-precision computer simulations of the Lennard-Jones liquid. Excellent numerical stability is obtained if the force cutoff is smoothed and the two initial configurations have identical potential energy within machine precision. Nevertheless, just as for NVE algorithms, stabilizers are needed for very long runs in order to compensate for the accumulation of numerical errors that eventually lead to "entropic drift" of the potential energy towards higher values. A modification of the basic NVU algorithm is introduced that ensures potential-energy and step-length conservation; center-of-mass drift is also eliminated. Analytical arguments confirmed by simulations demonstrate that the modified NVU algorithm is absolutely stable. Finally, we present simulations showing that the NVU algorithm and the standard leap-frog NVE algorithm have identical radial distribution functions for the Lennard-Jones liquid. © 2011 American Institute of Physics

  11. Towards an Improved Representation of Reservoirs and Water Management in a Land Surface-Hydrology Model

    NASA Astrophysics Data System (ADS)

    Yassin, F.; Anis, M. R.; Razavi, S.; Wheater, H. S.

    2017-12-01

    Water management through reservoirs, diversions, and irrigation has significantly changed river flow regimes and basin-wide energy and water balance cycles. Failure to represent these effects limits the performance of land surface-hydrology models not only for streamflow prediction but also for the estimation of soil moisture, evapotranspiration, and feedbacks to the atmosphere. Despite recent research to improve the representation of water management in land surface models, there remains a need to develop improved modeling approaches that work in complex and highly regulated basins such as the 406,000 km2 Saskatchewan River Basin (SaskRB). A particular challenge for regional and global application is a lack of local information on reservoir operational management. To this end, we implemented a reservoir operation, water abstraction, and irrigation algorithm in the MESH land surface-hydrology model and tested it over the SaskRB. MESH is Environment Canada's land surface-hydrology modeling system that couples the Canadian Land Surface Scheme (CLASS) with a hydrological routing model. The implemented reservoir algorithm uses an inflow-outflow relationship that accounts for the physical characteristics of reservoirs (e.g., storage-area-elevation relationships) and includes simplified operational characteristics based on local information (e.g., monthly target volume and release under limited, normal, and flood storage zones). The irrigation algorithm uses the difference between actual and potential evapotranspiration to estimate irrigation water demand. This irrigation demand is supplied from the neighboring reservoirs/diversions in the river system. We calibrated the model enabled with the new reservoir and irrigation modules in a multi-objective optimization setting. Results showed that the reservoir and irrigation modules significantly improved the MESH model performance in generating streamflow and evapotranspiration across the SaskRB, and that this approach provides a basis for improved large-scale hydrological modelling.

  12. Ultra-low power sensor for autonomous non-invasive voltage measurement in IoT solutions for energy efficiency

    NASA Astrophysics Data System (ADS)

    Villani, Clemente; Balsamo, Domenico; Brunelli, Davide; Benini, Luca

    2015-05-01

    Monitoring current and voltage waveforms is fundamental to assess the power consumption of a system and to improve its energy efficiency. In this paper we present a smart meter for power consumption which does not need any electrical contact with the load or its conductors, and which can measure both current and voltage. Power metering becomes easier and safer and it is also self-sustainable because an energy harvesting module based on inductive coupling powers the entire device from the output of the current sensor. A low cost 32-bit wireless CPU architecture is used for data filtering and processing, while a wireless transceiver sends data via the IEEE 802.15.4 standard. We describe in detail the innovative contact-less voltage measurement system, which is based on capacitive coupling and on an algorithm that exploits two pre-processing channels. The system self-calibrates to perform precise measurements regardless the cable type. Experimental results demonstrate accuracy in comparison with commercial high-cost instruments, showing negligible deviations.

  13. The ranking algorithm of the Coach browser for the UMLS metathesaurus.

    PubMed Central

    Harbourt, A. M.; Syed, E. J.; Hole, W. T.; Kingsland, L. C.

    1993-01-01

    This paper presents the novel ranking algorithm of the Coach Metathesaurus browser which is a major module of the Coach expert search refinement program. An example shows how the ranking algorithm can assist in creating a list of candidate terms useful in augmenting a suboptimal Grateful Med search of MEDLINE. PMID:8130570

  14. Inherent smoothness of intensity patterns for intensity modulated radiation therapy generated by simultaneous projection algorithms

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.

    2004-07-01

    The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions, which meet given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. Including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property from our experimental observation. The smoothness properties are compared with those from other optimization algorithms which include simulated annealing and the gradient descent method. The simultaneous property of these algorithms is ideally suited to parallel computing technologies.
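
    For reference, a minimal sketch of Cimmino's simultaneous projection method for a system of linear inequalities (e.g., dose constraints A x ≤ b with non-negative beamlet intensities) is shown below. The weights, relaxation parameter, and stopping test are illustrative choices; the nonlinear DVH terms discussed in the paper are not included.

```python
import numpy as np

def cimmino(A, b, n_iter=200, lam=1.0):
    """Cimmino's simultaneous projection method for A @ x <= b, x >= 0.

    Each iteration averages the orthogonal projections of the current point
    onto all violated half-spaces, which makes the sweep fully parallel.
    """
    m, n = A.shape
    x = np.zeros(n)
    w = np.full(m, 1.0 / m)                  # equal weights summing to 1
    row_norm2 = np.einsum("ij,ij->i", A, A)  # ||a_i||^2 for each constraint row
    for _ in range(n_iter):
        residual = A @ x - b                 # > 0 where a constraint is violated
        violated = residual > 0
        if not np.any(violated):
            break                            # all inequalities satisfied
        coeff = w[violated] * residual[violated] / row_norm2[violated]
        step = -(coeff[:, None] * A[violated]).sum(axis=0)
        x = np.maximum(x + lam * step, 0.0)  # enforce non-negative intensities
    return x
```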

  15. An improved immune algorithm for optimizing the pulse width modulation control sequence of inverters

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Qian, S. Q.; Ye, Y. Q.; Wu, Y. H.

    2017-09-01

    In this article, an improved immune algorithm (IIA), based on the fundamental principles of the biological immune system, is proposed for optimizing the pulse width modulation (PWM) control sequence of a single-phase full-bridge inverter. The IIA takes advantage of the receptor editing and adaptive mutation mechanisms of the immune system to develop two operations that enhance the population diversity and convergence of the proposed algorithm. To verify the effectiveness and examine the performance of the IIA, 17 cases are considered, including fixed and disturbed resistances. Simulation results show that the IIA is able to obtain an effective PWM control sequence. Furthermore, when compared with existing immune algorithms (IAs), genetic algorithms (GAs), a non-traditional GA, simplified simulated annealing, and a generalized Hopfield neural network method, the IIA can achieve small total harmonic distortion (THD) and large magnitude. Meanwhile, a non-parametric test indicates that the IIA is significantly better than most comparison algorithms. Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/0305215X.2016.1250894.

  16. Piezoelectric energy harvester under parquet floor

    NASA Astrophysics Data System (ADS)

    Bischur, E.; Schwesinger, N.

    2011-03-01

    The design, fabrication and testing of piezoelectric energy harvesting modules for floors is described. These modules are used beneath a parquet floor to harvest the energy of people walking over it. The harvesting modules consist of monoaxial stretched PVDF-foils. Multilayer modules are built up as roller-type capacitors. The fabrication process of the harvesting modules is simple and very suitable for mass production. Due to the use of organic polymers, the modules are characterized by a great flexibility and the possibility to create them in almost any geometrical size. The energy yield was determined depending on the dynamic loading force, the thickness of piezoelectric active material, the size of the piezoelectric modules, their alignment in the walking direction and their position on the floor. An increase of the energy yield at higher loading forces and higher thicknesses of the modules was observed. It was possible to generate up to 2.1 mWs of electric energy with dynamic loads of 70 kg using a specific module design. Furthermore a test floor was assembled to determine the influence of the size, alignment and position of the modules on the energy yield.

  17. The research of automatic speed control algorithm based on Green CBTC

    NASA Astrophysics Data System (ADS)

    Lin, Ying; Xiong, Hui; Wang, Xiaoliang; Wu, Youyou; Zhang, Chuanqi

    2017-06-01

    The automatic speed control algorithm is one of the core technologies of a train operation control system. It is a typical multi-objective optimization control algorithm, which controls train speed for timing, comfort, energy saving and precise parking. At present, automatic train speed control technology is widely used in metro and inter-city railways. It has been found that automatic speed control can effectively reduce the driver's workload and improve the operation quality. However, the algorithms currently used are poor at energy saving, sometimes not even as good as manual driving. In order to solve this energy-saving problem, this paper proposes an automatic speed control algorithm based on the Green CBTC system. Based on the Green CBTC system, the algorithm adjusts the operation status of the train to improve the utilization rate of regenerative braking feedback energy while ensuring the timing, comfort and precise parking targets. For this reason, the energy use of the Green CBTC system is lower than that of the traditional CBTC system. The simulation results show that the algorithm based on the Green CBTC system can effectively reduce energy use by improving the utilization rate of regenerative braking feedback energy.

  18. Design and performance evaluation of a high resolution IRI-microPET preclinical scanner

    NASA Astrophysics Data System (ADS)

    Islami rad, S. Z.; Peyvandi, R. Gholipour; lehdarboni, M. Askari; Ghafari, A. A.

    2015-05-01

    A small-animal PET scanner, IRI-microPET, was designed and built at the NSTRI. The scanner is made of four detectors positioned on a rotating gantry at a distance of 50 mm from the center. Each detector consists of a 10×10 crystal matrix of 2×2×10 mm3 elements directly coupled to a PS-PMT. A position encoding circuit for the specific PS-PMT has been designed, built and tested with a PD-MFS-2MS/s-8/14 data acquisition board. After implementing reconstruction algorithms (FBP, MLEM and SART) on sinograms, image quality and system performance were evaluated through energy resolution, timing resolution, spatial resolution, scatter fraction, sensitivity, RMS contrast and SNR parameters. The energy spectra were obtained for the crystals with an energy window of 300-700 keV. The energy resolution at 511 keV, averaged over all modules, detectors, and crystals, was 23.5%. A timing resolution of 2.4 ns FWHM was measured from the coincidence timing spectrum with the LYSO crystals. The radial and tangential resolutions for 18F (1.15-mm inner diameter) at the center of the field of view were 1.81 mm and 1.90 mm, respectively. At a radial offset of 5 mm, the FWHM values were 1.96 and 2.06 mm. The system scatter fraction was 7.1% for the mouse phantom. The sensitivity was measured for different energy windows, leading to a sensitivity of 1.74% at the center of the FOV. Image quality was also evaluated by RMS contrast and SNR, and the results show that the images reconstructed by the MLEM algorithm have the best RMS contrast and SNR. The IRI-microPET presents high image resolution, low scatter fraction values and improved SNR for animal studies.

  19. Intelligent Visual Input: A Graphical Method for Rapid Entry of Patient-Specific Data

    PubMed Central

    Bergeron, Bryan P.; Greenes, Robert A.

    1987-01-01

    Intelligent Visual Input (IVI) provides a rapid, graphical method of data entry for both expert system interaction and medical record keeping purposes. Key components of IVI include: a high-resolution graphic display; an interface supportive of rapid selection, i.e., one utilizing a mouse or light pen; algorithm simplification modules; and intelligent graphic algorithm expansion modules. A prototype IVI system, designed to facilitate entry of physical exam findings, is used to illustrate the potential advantages of this approach.

  20. SU-E-T-405: Evaluation of the Raystation Electron Monte Carlo Algorithm for Varian Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sansourekidou, P; Allen, C

    2015-06-15

    Purpose: To evaluate the Raystation v4.51 Electron Monte Carlo algorithm for Varian Trilogy, IX and 2100 series linear accelerators and commission it for clinical use. Methods: Seventy-two water and forty air scans were acquired with a water tank in the form of profiles and depth doses, as requested by the vendor. Data was imported into the Rayphysics beam modeling module. The energy spectrum was modeled using seven parameters. Contamination photons were modeled using five parameters. The source phase space was modeled using six parameters. Calculations were performed in clinical version 4.51, and percent depth dose curves and profiles were extracted to be compared to water tank measurements. Sensitivity tests were performed for all parameters. Grid size and particle histories were evaluated per energy for statistical uncertainty performance. Results: Model accuracy for air profiles is poor in the shoulder and penumbra region. However, model accuracy for water scans is acceptable. All energies and cones are within 2%/2mm for 90% of the points evaluated. Source phase space parameters have a cumulative effect. To achieve distributions with a satisfactory smoothness level, a 0.1 cm grid and 3,000,000 particle histories were used for commissioning calculations. Calculation time was approximately 3 hours per energy. Conclusion: Raystation electron Monte Carlo is acceptable for clinical use for the Varian accelerators listed. Results are inferior to Elekta Electron Monte Carlo modeling. Known issues were reported to Raysearch and will be resolved in upcoming releases. Auto-modeling is limited to open cone depth dose curves and needs expansion.

  1. NMR implementation of adiabatic SAT algorithm using strongly modulated pulses.

    PubMed

    Mitra, Avik; Mahesh, T S; Kumar, Anil

    2008-03-28

    NMR implementation of adiabatic algorithms face severe problems in homonuclear spin systems since the qubit selective pulses are long and during this period, evolution under the Hamiltonian and decoherence cause errors. The decoherence destroys the answer as it causes the final state to evolve to mixed state and in homonuclear systems, evolution under the internal Hamiltonian causes phase errors preventing the initial state to converge to the solution state. The resolution of these issues is necessary before one can proceed to implement an adiabatic algorithm in a large system where homonuclear coupled spins will become a necessity. In the present work, we demonstrate that by using "strongly modulated pulses" (SMPs) for the creation of interpolating Hamiltonian, one can circumvent both the problems and successfully implement the adiabatic SAT algorithm in a homonuclear three qubit system. This work also demonstrates that the SMPs tremendously reduce the time taken for the implementation of the algorithm, can overcome problems associated with decoherence, and will be the modality in future implementation of quantum information processing by NMR.

  2. Effects of image compression and degradation on an automatic diabetic retinopathy screening algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Barriga, S.; Murray, V.; Pattichis, M.; Soliz, P.

    2010-03-01

    Diabetic retinopathy (DR) is one of the leading causes of blindness among adult Americans. Automatic methods for detection of the disease have been developed in recent years, most of them addressing the segmentation of bright and red lesions. In this paper we present an automatic DR screening system that does not approach the problem through the segmentation of lesions. The algorithm distinguishes non-diseased retinal images from those with pathology based on textural features obtained using multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions. The decomposition is represented as features that are the inputs to a classifier. The algorithm achieves 0.88 area under the ROC curve (AROC) for a set of 280 images from the MESSIDOR database. The algorithm is then used to analyze the effects of image compression and degradation, which will be present in most actual clinical or screening environments. Results show that the algorithm is insensitive to illumination variations, but high rates of compression and large blurring effects degrade its performance.
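
    The multiscale AM-FM features referred to above are built from instantaneous amplitude and frequency estimates. The following 1-D sketch, using the analytic signal, is only meant to illustrate those two quantities; the paper's method operates on 2-D retinal images with multiscale filterbanks, which this example does not reproduce.

```python
import numpy as np
from scipy.signal import hilbert

def am_fm_features(signal, fs):
    """1-D AM-FM demodulation via the analytic signal (illustration only).

    Returns the instantaneous amplitude (AM) and instantaneous frequency (FM)
    of a band-limited signal; texture descriptors can be built from their
    histograms across scales.
    """
    analytic = hilbert(signal)
    ia = np.abs(analytic)                          # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency in Hz
    return ia, inst_freq
```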

  3. Robust phase retrieval of complex-valued object in phase modulation by hybrid Wirtinger flow method

    NASA Astrophysics Data System (ADS)

    Wei, Zhun; Chen, Wen; Yin, Tiantian; Chen, Xudong

    2017-09-01

    This paper presents a robust iterative algorithm, known as hybrid Wirtinger flow (HWF), for phase retrieval (PR) of complex objects from noisy diffraction intensities. Numerical simulations indicate that the HWF method consistently outperforms conventional PR methods in terms of both accuracy and convergence rate in multiple phase modulations. The proposed algorithm is also more robust to low oversampling ratios, loose constraints, and noisy environments. Furthermore, compared with traditional Wirtinger flow, sample complexity is largely reduced. It is expected that the proposed HWF method will find applications in the rapidly growing coherent diffractive imaging field for high-quality image reconstruction with multiple modulations, as well as other disciplines where PR is needed.
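
    As background, the sketch below shows a plain Wirtinger-flow gradient step for the intensity-only least-squares loss; the hybrid modifications that define HWF, as well as spectral initialization and step-size scheduling, are not reproduced, and the initialization shown is a crude assumption.

```python
import numpy as np

def wirtinger_flow(A, y, n_iter=500, mu=0.2):
    """Plain Wirtinger-flow gradient descent for phase retrieval.

    A : (m, n) complex sensing matrix (rows are the measurement vectors a_i^H)
    y : (m,) measured intensities |a_i^H z|^2
    Minimizes f(z) = (1/2m) * sum((|A z|^2 - y)^2) by Wirtinger gradient steps.
    """
    m, n = A.shape
    rng = np.random.default_rng(0)
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    z *= np.sqrt(y.mean()) / np.linalg.norm(z)   # crude scaling; real WF uses a spectral init
    for _ in range(n_iter):
        Az = A @ z
        grad = A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m
        z = z - mu * grad
    return z
```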

  4. A Software Architecture for Adaptive Modular Sensing Systems

    PubMed Central

    Lyle, Andrew C.; Naish, Michael D.

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614

  5. A software architecture for adaptive modular sensing systems.

    PubMed

    Lyle, Andrew C; Naish, Michael D

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration.

  6. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.

    PubMed

    Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T

    2010-09-01

    To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.

  7. Energy-efficient constellations design and fast decoding for space-collaborative MIMO visible light communications

    NASA Astrophysics Data System (ADS)

    Zhu, Yi-Jun; Liang, Wang-Feng; Wang, Chao; Wang, Wen-Ya

    2017-01-01

    In this paper, space-collaborative constellations (SCCs) for indoor multiple-input multiple-output (MIMO) visible light communication (VLC) systems are considered. Compared with traditional VLC MIMO techniques, such as repetition coding (RC), spatial modulation (SM) and spatial multiplexing (SMP), SCC achieves the minimum average optical power for a fixed minimum Euclidean distance. We have presented a unified SCC structure for 2×2 MIMO VLC systems and extended it to larger MIMO VLC systems with more transceivers. Specifically for 2×2 MIMO VLC, a fast decoding algorithm is developed with decoding complexity almost linear in terms of the square root of the cardinality of SCC, and the expressions of symbol error rate of SCC are presented. In addition, bit mappings similar to Gray mapping are proposed for SCC. Computer simulations are performed to verify the fast decoding algorithm and the performance of SCC, and the results demonstrate that the performance of SCC is better than those of RC, SM and SMP for indoor channels in general.

  8. Control chart pattern recognition using RBF neural network with new training algorithm and practical features.

    PubMed

    Addeh, Abdoljalil; Khormali, Aminollah; Golilarz, Noorbakhsh Amiri

    2018-05-04

    Control chart patterns are the most commonly used statistical process control (SPC) tools to monitor process changes. When a control chart produces an out-of-control signal, this means that the process has changed. In this study, a new method based on an optimized radial basis function neural network (RBFNN) is proposed for control chart pattern (CCP) recognition. The proposed method consists of four main modules: feature extraction, feature selection, classification and the learning algorithm. In the feature extraction module, shape and statistical features are used; various shape and statistical features have recently been presented for CCP recognition. In the feature selection module, the association rules (AR) method is employed to select the best set of shape and statistical features. In the classification module, an RBFNN is used; finally, the learning algorithm has a high impact on network performance, so a new learning algorithm based on the bees algorithm is used in the learning module. Most studies have considered only six patterns: Normal, Cyclic, Increasing Trend, Decreasing Trend, Upward Shift and Downward Shift. Since the three patterns Normal, Stratification, and Systematic are very similar to each other and distinguishing them is very difficult, in most studies Stratification and Systematic have not been considered. Given the need for continuous monitoring and control of the production process and for exact identification of the type of problem encountered, eight patterns are investigated in this study. The proposed method is tested on a dataset containing 1600 samples (200 samples from each pattern) and the results show that the proposed method performs very well. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
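
    A much-simplified RBFNN classifier is sketched below to make the module structure concrete: centers drawn from the training patterns, a shared width from the center spacing, and output weights from least squares. The paper instead tunes the network with a bees-algorithm-based learning module, which is not shown; all names and defaults here are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def train_rbf_classifier(X, Y, n_centers=20, seed=0):
    """Simplified RBF network: sampled centers + least-squares output weights.

    X : (n_samples, n_features) pattern features
    Y : (n_samples, n_classes) one-hot class labels
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    d = cdist(centers, centers)
    sigma = d[d > 0].mean()                       # common width from center spacing
    Phi = np.exp(-cdist(X, centers) ** 2 / (2 * sigma ** 2))
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)   # output-layer weights
    return centers, sigma, W

def predict_rbf(X, centers, sigma, W):
    Phi = np.exp(-cdist(X, centers) ** 2 / (2 * sigma ** 2))
    return Phi @ W                                # argmax over columns gives the CCP class
```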

  9. Strawberry: Fast and accurate genome-guided transcript reconstruction and quantification from RNA-Seq.

    PubMed

    Liu, Ruolin; Dickerson, Julie

    2017-11-01

    We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracies. In an evaluation on a real data set, the transcript expression estimated by Strawberry has the highest correlation with Nanostring probe counts, an independent experimental measure of transcript expression. Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.
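
    The EM step used for quantification can be illustrated, in highly simplified form, by the classic EM for assigning ambiguous reads to transcripts shown below. The compatibility-matrix formulation, the length correction and the variable names are assumptions made for illustration; Strawberry's latent class model and bias correction are more elaborate.

```python
import numpy as np

def em_abundance(compat, lengths, n_iter=100):
    """EM for transcript abundance from ambiguous read assignments.

    compat  : (n_reads, n_transcripts) 0/1 matrix; 1 if the read is compatible
              with the transcript (e.g. it lies on a path of the splicing graph)
    lengths : effective transcript lengths
    Returns the expected read count assigned to each transcript.
    """
    n_reads, n_tx = compat.shape
    theta = np.full(n_tx, 1.0 / n_tx)                 # relative abundances
    weights = compat / np.maximum(compat.sum(axis=1, keepdims=True), 1e-12)
    for _ in range(n_iter):
        # E-step: fractional assignment of each read, proportional to theta
        weights = compat * theta
        weights /= np.maximum(weights.sum(axis=1, keepdims=True), 1e-12)
        counts = weights.sum(axis=0)
        # M-step: re-estimate abundances, correcting for transcript length
        theta = counts / lengths
        theta /= theta.sum()
    return weights.sum(axis=0)
```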

  10. Service-Aware Clustering: An Energy-Efficient Model for the Internet-of-Things

    PubMed Central

    Bagula, Antoine; Abidoye, Ademola Philip; Zodi, Guy-Alain Lusilao

    2015-01-01

    Current generation wireless sensor routing algorithms and protocols have been designed based on a myopic routing approach, where the motes are assumed to have the same sensing and communication capabilities. Myopic routing is not a natural fit for the IoT, as it may lead to energy imbalance and subsequent short-lived sensor networks, routing the sensor readings over the most service-intensive sensor nodes, while leaving the least active nodes idle. This paper revisits the issue of energy efficiency in sensor networks to propose a clustering model where sensor devices’ service delivery is mapped into an energy awareness model, used to design a clustering algorithm that finds service-aware clustering (SAC) configurations in IoT settings. The performance evaluation reveals the relative energy efficiency of the proposed SAC algorithm compared to related routing algorithms in terms of energy consumption, the sensor nodes’ life span and its traffic engineering efficiency in terms of throughput and delay. These include the well-known low energy adaptive clustering hierarchy (LEACH) and LEACH-centralized (LEACH-C) algorithms, as well as the most recent algorithms, such as DECSA and MOCRN. PMID:26703619

  11. Service-Aware Clustering: An Energy-Efficient Model for the Internet-of-Things.

    PubMed

    Bagula, Antoine; Abidoye, Ademola Philip; Zodi, Guy-Alain Lusilao

    2015-12-23

    Current generation wireless sensor routing algorithms and protocols have been designed based on a myopic routing approach, where the motes are assumed to have the same sensing and communication capabilities. Myopic routing is not a natural fit for the IoT, as it may lead to energy imbalance and subsequent short-lived sensor networks, routing the sensor readings over the most service-intensive sensor nodes, while leaving the least active nodes idle. This paper revisits the issue of energy efficiency in sensor networks to propose a clustering model where sensor devices' service delivery is mapped into an energy awareness model, used to design a clustering algorithm that finds service-aware clustering (SAC) configurations in IoT settings. The performance evaluation reveals the relative energy efficiency of the proposed SAC algorithm compared to related routing algorithms in terms of energy consumption, the sensor nodes' life span and its traffic engineering efficiency in terms of throughput and delay. These include the well-known low energy adaptive clustering hierarchy (LEACH) and LEACH-centralized (LEACH-C) algorithms, as well as the most recent algorithms, such as DECSA and MOCRN.

  12. Peak reduction for commercial buildings using energy storage

    NASA Astrophysics Data System (ADS)

    Chua, K. H.; Lim, Y. S.; Morris, S.

    2017-11-01

    Battery-based energy storage has emerged as a cost-effective solution for peak reduction due to the decline in battery prices. In this study, a battery-based energy storage system is developed and implemented to achieve optimal peak reduction for commercial customers given the limited energy capacity of the storage. The energy storage system is formed by three bi-directional power converters, each rated at 5 kVA, and a battery bank with a capacity of 64 kWh. Three control algorithms, namely fixed-threshold, adaptive-threshold, and fuzzy-based control algorithms, have been developed and implemented in the energy storage system in a campus building. The control algorithms are evaluated and compared under different load conditions. The overall experimental results show that the fuzzy-based controller is the most effective of the three in peak reduction. The fuzzy-based control algorithm is capable of incorporating a priori qualitative knowledge and expertise about the load characteristics of the buildings as well as the usable energy, without over-discharging the batteries.
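
    Of the three controllers, the fixed-threshold algorithm is the simplest to sketch: discharge whenever demand exceeds a threshold, limited by the converter rating and the remaining stored energy. The numbers and names below are illustrative assumptions (e.g., a 15 kW aggregate rating for three 5 kVA converters), not the deployed settings.

```python
def fixed_threshold_controller(demand_kw, threshold_kw=100.0,
                               rating_kw=15.0, soc_kwh=64.0, dt_h=0.25):
    """Fixed-threshold peak shaving: discharge the battery whenever the
    building demand exceeds the threshold, limited by the converter rating
    and by the remaining stored energy.

    demand_kw : sequence of building demand samples [kW]
    dt_h      : sample interval [h]
    Returns the grid import profile after shaving.
    """
    grid = []
    for p in demand_kw:
        excess = max(p - threshold_kw, 0.0)
        discharge = min(excess, rating_kw, soc_kwh / dt_h)  # kW the battery can supply
        soc_kwh -= discharge * dt_h
        grid.append(p - discharge)
    return grid
```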

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khachatryan, Vardan

    The performance of missing transverse energy reconstruction algorithms is presented using √s = 8 TeV proton-proton (pp) data collected with the CMS detector. Events with anomalous missing transverse energy are studied, and the performance of algorithms used to identify and remove these events is presented. The scale and resolution for missing transverse energy, including the effects of multiple pp interactions (pileup), are measured using events with an identified Z boson or isolated photon, and are found to be well described by the simulation. Novel missing transverse energy reconstruction algorithms developed specifically to mitigate the effects of large numbers of pileup interactions on the missing transverse energy resolution are presented. These algorithms significantly reduce the dependence of the missing transverse energy resolution on pileup interactions. Furthermore, an algorithm that provides an estimate of the significance of the missing transverse energy is presented, which is used to estimate the compatibility of the reconstructed missing transverse energy with a zero nominal value.

  14. Automated Method of Frequency Determination in Software Metric Data Through the Use of the Multiple Signal Classification (MUSIC) Algorithm

    DTIC Science & Technology

    1998-06-26

    This document describes an automated method of frequency determination in software metric data through the use of the Multiple Signal Classification (MUSIC) algorithm. The estimated power spectral density (PSD) is generated by the MUSIC algorithm from the data set; while other spectral estimators could be implemented in this module, the MUSIC algorithm is preferred.

  15. System identification and model reduction using modulating function techniques

    NASA Technical Reports Server (NTRS)

    Shen, Yan

    1993-01-01

    Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are initiated for continuous-time system identification using Fourier type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended to MIMO transfer function system identification problems. The impact due to the discrepancy in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high-noise rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated and the AWLS is found to be strongly favored in almost all facets.

  16. High-speed architecture for the decoding of trellis-coded modulation

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been an interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence, however these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
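
    To make the trellis search concrete, the following is a minimal sketch of a hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 binary convolutional code (octal generators 7 and 5). It only illustrates the add-compare-select recursion that the report's TCM decoder extends to non-binary signal constellations; the generators, message, and error pattern are illustrative assumptions.

```python
# Minimal hard-decision Viterbi decoder for the (7,5) rate-1/2 convolutional code.
# Shows the add-compare-select recursion only, not the TCM decoder architecture above.

def parity(x):
    return bin(x).count("1") & 1

def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    state, out = 0, []
    for b in bits:
        reg = (b << (k - 1)) | state
        out += [parity(reg & g1), parity(reg & g2)]
        state = reg >> 1
    return out

def viterbi_decode(received, g1=0b111, g2=0b101, k=3):
    n_states, INF = 1 << (k - 1), float("inf")
    metric = [0.0] + [INF] * (n_states - 1)          # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric, new_paths = [INF] * n_states, [None] * n_states
        for state in range(n_states):
            if metric[state] == INF:
                continue
            for bit in (0, 1):                       # hypothesize the input bit
                reg = (bit << (k - 1)) | state
                out = (parity(reg & g1), parity(reg & g2))
                branch = (out[0] != r[0]) + (out[1] != r[1])   # Hamming branch metric
                nxt, cand = reg >> 1, metric[state] + branch
                if cand < new_metric[nxt]:           # compare-select
                    new_metric[nxt], new_paths[nxt] = cand, paths[state] + [bit]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg + [0, 0])                    # two tail bits terminate the trellis
coded[5] ^= 1                                        # flip one channel bit
print(viterbi_decode(coded)[:len(msg)] == msg)       # the single error is corrected -> True
```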

  17. Markov-modulated Markov chains and the covarion process of molecular evolution.

    PubMed

    Galtier, N; Jean-Marie, A

    2004-01-01

    The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.
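
    One standard way to build a covarion-style Markov-modulated generator is as a Kronecker combination of a substitution generator Q (scaled by the current rate class) and a rate-switching generator S. The sketch below constructs such a generator and diagonalizes it with a generic dense eigendecomposition; the matrices and rates are illustrative placeholders, and this is not the specialized fast diagonalization algorithm of the paper.

```python
import numpy as np

# Covarion-style Markov-modulated Markov chain on the product space (rate class, nucleotide):
# within a rate class, substitutions follow rate_i * Q; rate classes switch according to S.
# Generic numpy eigendecomposition is used, not the paper's specialized algorithm.

Q = np.array([[-3.0, 1.0, 1.0, 1.0],      # Jukes-Cantor-like nucleotide generator
              [1.0, -3.0, 1.0, 1.0],
              [1.0, 1.0, -3.0, 1.0],
              [1.0, 1.0, 1.0, -3.0]]) / 3.0
rates = np.array([0.1, 1.0, 2.9])         # rate classes (slow, medium, fast)
S = np.array([[-0.2, 0.1, 0.1],           # rate-class switching generator
              [0.1, -0.2, 0.1],
              [0.1, 0.1, -0.2]])

n_states = Q.shape[0]
G = np.kron(np.diag(rates), Q) + np.kron(S, np.eye(n_states))

# Diagonalize once; then P(t) = V exp(D t) V^{-1} for any branch length t.
eigvals, V = np.linalg.eig(G)
V_inv = np.linalg.inv(V)

def transition_matrix(t):
    return (V @ np.diag(np.exp(eigvals * t)) @ V_inv).real

P = transition_matrix(0.5)
assert np.allclose(P.sum(axis=1), 1.0)    # rows of a transition matrix sum to one
```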

  18. Fiber optic sensor for continuous health monitoring in CFRP composite materials

    NASA Astrophysics Data System (ADS)

    Rippert, Laurent; Papy, Jean-Michel; Wevers, Martine; Van Huffel, Sabine

    2002-07-01

    An intensity modulated sensor, based on the microbending concept, has been incorporated in laminates produced from a C/epoxy prepreg. Pencil lead break tests (Hsu-Neilsen sources) and tensile tests have been performed on this material. In this research study, fibre optic sensors are shown to offer an alternative to the robust piezoelectric transducers used for Acoustic Emission (AE) monitoring. The main emphasis has been put on the use of advanced signal processing techniques based on time-frequency analysis. The signal Short Time Fourier Transform (STFT) has been computed and several robust noise reduction algorithms, such as Wiener adaptive filtering, improved spectral subtraction filtering, and Singular Value Decomposition (SVD) -based filtering, have been applied. An energy- and frequency-based detection criterion is put forward to detect transient signals that can be correlated with Modal Acoustic Emission (MAE) results and thus damage in the composite material. There is a strong indication that time-frequency analysis and the Hankel Total Least Squares (HTLS) method can also be used for damage characterization. This study shows that the signal from a quite simple microbend optical sensor contains information on the elastic energy released whenever damage is being introduced in the host material by mechanical loading. Robust algorithms can be used to retrieve and analyze this information.

  19. Random Forest-Based Approach for Maximum Power Point Tracking of Photovoltaic Systems Operating under Actual Environmental Conditions.

    PubMed

    Shareef, Hussain; Mutlag, Ammar Hussein; Mohamed, Azah

    2017-01-01

    Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are not sufficiently robust with respect to fast-changing environmental conditions, efficiency, accuracy at steady state, and tracking dynamics. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine the accurate maximum power point. An RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with a peak capacity of 3 kW, using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate the accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neurofuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives a significant improvement compared with the other techniques. In addition, the RF model passes the Bland-Altman test, with more than 95 percent acceptability.
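
    A minimal sketch of the idea, assuming a random forest regressor that maps (irradiance, temperature) to the voltage at the maximum power point and then drives the converter reference from the prediction. The training data below are synthetic placeholders; in the paper they come from a modelled and measured PV array.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Random-forest MPPT sketch: learn the mapping from (irradiance, module temperature)
# to the maximum-power-point voltage, then query it at run time. Data are synthetic.

rng = np.random.default_rng(0)
irradiance = rng.uniform(100, 1000, 5000)            # W/m^2
temperature = rng.uniform(10, 60, 5000)              # deg C
# toy relation: Vmpp drops with temperature, rises weakly with irradiance
v_mpp = 30.0 - 0.12 * (temperature - 25.0) + 1.5 * np.log(irradiance / 1000.0)

X = np.column_stack([irradiance, temperature])
model = RandomForestRegressor(n_estimators=200, min_samples_leaf=3, random_state=0)
model.fit(X, v_mpp)

def mppt_reference(g_now, t_now):
    """Return the voltage reference for the converter at the current conditions."""
    return float(model.predict([[g_now, t_now]])[0])

print(mppt_reference(800.0, 35.0))
```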

  20. Estimating the entropy and quantifying the impurity of a swarm of surface-hopping trajectories: A new perspective on decoherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouyang, Wenjun; Subotnik, Joseph E., E-mail: subotnik@sas.upenn.edu

    2014-05-28

    In this article, we consider the intrinsic entropy of Tully's fewest switches surface hopping (FSSH) algorithm (as estimated by the impurity of the density matrix) [J. Chem. Phys. 93, 1061 (1990)]. We show that, even for a closed system, the total impurity of a FSSH calculation increases in time (rather than stays constant). This apparent failure of the FSSH algorithm can be traced back to an incorrect, approximate treatment of the electronic coherence between wavepackets moving along different potential energy surfaces. This incorrect treatment of electronic coherence also prevents the FSSH algorithm from correctly describing wavepacket recoherences (which is a well established limitation of the FSSH method). Nevertheless, despite these limitations, the FSSH algorithm often predicts accurate observables because the electronic coherence density is modulated by a phase factor which varies rapidly in phase space and which often integrates to almost zero. Adding “decoherence” events on top of a FSSH calculation completely destroys the incorrect FSSH electronic coherence and effectively sets the Poincaré recurrence time for wavepacket recoherence to infinity; this modification usually increases FSSH accuracy (assuming there are no recoherences) while also offering long-time stability for trajectories. In practice, we show that introducing “decoherence” events does not change the total FSSH impurity significantly, but does lead to more accurate evaluations of the impurity of the electronic subsystem.

  1. Random Forest-Based Approach for Maximum Power Point Tracking of Photovoltaic Systems Operating under Actual Environmental Conditions

    PubMed Central

    Shareef, Hussain; Mohamed, Azah

    2017-01-01

    Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are not sufficiently robust with respect to fast-changing environmental conditions, efficiency, accuracy at steady state, and tracking dynamics. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine the accurate maximum power point. An RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with a peak capacity of 3 kW, using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate the accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neurofuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives a significant improvement compared with the other techniques. In addition, the RF model passes the Bland–Altman test, with more than 95 percent acceptability. PMID:28702051

  2. A Distance-based Energy Aware Routing algorithm for wireless sensor networks.

    PubMed

    Wang, Jin; Kim, Jeong-Uk; Shu, Lei; Niu, Yu; Lee, Sungyoung

    2010-01-01

    Energy efficiency and balancing is one of the primary challenges for wireless sensor networks (WSNs) since the tiny sensor nodes cannot be easily recharged once they are deployed. Up to now, many energy efficient routing algorithms or protocols have been proposed with techniques like clustering, data aggregation and location tracking etc. However, many of them aim to minimize parameters like total energy consumption, latency etc., which cause hotspot nodes and partitioned network due to the overuse of certain nodes. In this paper, a Distance-based Energy Aware Routing (DEAR) algorithm is proposed to ensure energy efficiency and energy balancing based on theoretical analysis of different energy and traffic models. During the routing process, we consider individual distance as the primary parameter in order to adjust and equalize the energy consumption among involved sensors. The residual energy is also considered as a secondary factor. In this way, all the intermediate nodes will consume their energy at similar rate, which maximizes network lifetime. Simulation results show that the DEAR algorithm can reduce and balance the energy consumption for all sensor nodes so network lifetime is greatly prolonged compared to other routing algorithms.
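
    The next-hop choice can be sketched as distance-primary, residual-energy-secondary selection, so that traffic spreads toward nodes with more remaining energy. The weighting below is an illustrative assumption, not the exact cost function derived in the paper.

```python
import math

# Illustrative next-hop selection in the spirit of distance-based energy-aware routing:
# geometric progress toward the sink is the primary criterion, residual energy a
# secondary tie-breaker. The cost weighting is an assumption for demonstration only.

def pick_next_hop(current, sink, neighbors, alpha=0.8):
    """neighbors: list of dicts with 'id', 'pos' (x, y), and 'energy' in [0, 1]."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best, best_cost = None, float("inf")
    for nb in neighbors:
        progress = dist(current, sink) - dist(nb["pos"], sink)
        if progress <= 0:                 # only consider neighbors closer to the sink
            continue
        # lower cost = more progress per unit hop distance and more residual energy
        cost = alpha * dist(current, nb["pos"]) / progress + (1 - alpha) * (1 - nb["energy"])
        if cost < best_cost:
            best, best_cost = nb["id"], cost
    return best

neighbors = [{"id": 1, "pos": (2, 1), "energy": 0.9},
             {"id": 2, "pos": (3, 0), "energy": 0.4}]
print(pick_next_hop((0, 0), (10, 0), neighbors))
```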

  3. The algorithm for automatic detection of the calibration object

    NASA Astrophysics Data System (ADS)

    Artem, Kruglov; Irina, Ugfeld

    2017-06-01

    The problem of automatic image calibration is considered in this paper. The most challenging task in automatic calibration is proper detection of the calibration object. Solving this problem requires digital image processing methods and algorithms such as morphology, filtering, edge detection, and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the production conditions of a logging enterprise. In these tests, the calibration object was isolated automatically in 86.1% of cases on average, with no type 1 errors. The algorithm was implemented in the automatic calibration module within mobile software for log deck volume measurement.

  4. A novel pulse compression algorithm for frequency modulated active thermography using band-pass filter

    NASA Astrophysics Data System (ADS)

    Chatterjee, Krishnendu; Roy, Deboshree; Tuli, Suneet

    2017-05-01

    This paper proposes a novel pulse compression algorithm, in the context of frequency modulated thermal wave imaging. The compression filter is derived from a predefined reference pixel in a recorded video, which contains direct measurement of the excitation signal alongside the thermal image of a test piece. The filter causes all the phases of the constituent frequencies to be adjusted to nearly zero value, so that on reconstruction a pulse is obtained. Further, due to band-limited nature of the excitation, signal-to-noise ratio is improved by suppressing out-of-band noise. The result is similar to that of a pulsed thermography experiment, although the peak power is drastically reduced. The algorithm is successfully demonstrated on mild steel and carbon fibre reference samples. Objective comparisons of the proposed pulse compression algorithm with the existing techniques are presented.
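
    A minimal sketch of the compression step, assuming a recorded reference signal: build a filter that cancels the phase of the reference's in-band frequency components (so they add up as a pulse on reconstruction) and zeroes everything outside the excitation band. The sampling rate, chirp band, and signals are placeholders, not values from the paper.

```python
import numpy as np

# Pulse compression from a reference signal: phase-only correction inside the excitation
# band, zero outside it; out-of-band noise is thereby suppressed. Parameters are assumed.

fs = 25.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 100, 1 / fs)
f0, f1 = 0.01, 0.1                          # chirp band, Hz (assumed)
ref = np.cos(2 * np.pi * (f0 + (f1 - f0) * t / t[-1] / 2) * t)   # reference pixel signal

R = np.fft.rfft(ref)
freqs = np.fft.rfftfreq(len(ref), d=1 / fs)
band = (freqs >= f0) & (freqs <= f1)

H = np.zeros_like(R)
H[band] = np.exp(-1j * np.angle(R[band]))   # set in-band phases to zero after filtering

def compress(pixel_signal):
    """Apply the compression filter to one pixel's thermal signal."""
    return np.fft.irfft(np.fft.rfft(pixel_signal) * H, n=len(pixel_signal))

pulse = compress(ref)                       # the reference itself compresses to a pulse
print(np.argmax(pulse), round(pulse.max(), 2))
```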

  5. Design and evaluation of a SiPM-based large-area detector module for positron emission imaging

    NASA Astrophysics Data System (ADS)

    Alva-Sánchez, H.; Murrieta-Rodríguez, T.; Calva-Coraza, E.; Martínez-Dávalos, A.; Rodríguez-Villafuerte, M.

    2018-03-01

    The design and evaluation of a large-area detector module for positron emission imaging applications is presented. The module features a SensL ArrayC-60035-64P-PCB solid state detector (8×8 array of tileable silicon photomultipliers by SensL, 7.2 mm pitch) covering a total area of 57.4×57.4 mm2. The detector module was formed using a pixelated array of 40×40 lutetium-yttrium oxyorthosilicate (LYSO) scintillator crystal elements with 1.43 mm pitch. A 7 mm thick coupling light guide was used to allow light sharing between adjacent SiPMs. A 16-channel symmetric charge division (SCD) readout board was designed to multiplex the number of signals from 64 to 16 (8 columns and 8 rows), and a center-of-gravity algorithm was used to identify the position. Data acquisition and digitization were accomplished using a custom-made system based on FPGA boards. Crystal maps were obtained using 18F positron sources and Voronoi diagrams were used to correct for geometric distortions and to generate a non-uniformity correction matrix. All measurements were taken at a controlled room temperature of 22 °C. The crystal maps showed minor distortion, 90% of the 1600 total crystal elements could be identified, a mean peak-to-valley ratio of 4.3 was obtained and a 10.8% mean energy resolution for 511 keV annihilation photons was determined. The performance of the detector using our own readout board was compared to that using two different commercially available readout boards with the same detector module arrangement. We show that these large-area SiPM arrays, combined with a 16-channel SCD readout board, can offer high spatial resolution, excellent energy resolution and detector uniformity, and thus can be used for positron emission imaging applications.
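
    The positioning step can be sketched as a center-of-gravity (Anger-logic) estimate from the 8 row and 8 column signals of the charge-division readout; the pitch and signal values below are placeholders, and the full calibration chain (Voronoi correction, crystal lookup) is omitted.

```python
import numpy as np

# Center-of-gravity position estimate from 8 row and 8 column signals of a symmetric
# charge division readout. Illustrative only; calibration and crystal lookup omitted.

PITCH_MM = 7.2                                    # SiPM pixel pitch

def cog_position(rows, cols):
    rows, cols = np.asarray(rows, float), np.asarray(cols, float)
    idx = np.arange(8)
    y = PITCH_MM * np.sum(idx * rows) / np.sum(rows)
    x = PITCH_MM * np.sum(idx * cols) / np.sum(cols)
    energy = rows.sum() + cols.sum()              # proportional to deposited energy
    return x, y, energy

# light spread mostly on columns 2-4 and rows 4-7
rows = [0, 0, 0, 0, 1, 6, 5, 1]
cols = [0, 0, 3, 8, 2, 0, 0, 0]
print(cog_position(rows, cols))
```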

  6. Tidal current and tidal energy changes imposed by a dynamic tidal power system in the Taiwan Strait, China

    NASA Astrophysics Data System (ADS)

    Dai, Peng; Zhang, Jisheng; Zheng, Jinhai

    2017-12-01

    The Taiwan Strait has recently been proposed as a promising site for dynamic tidal power systems because of its shallow depth and strong tides. Dynamic tidal power is a new concept for extracting tidal potential energy in which a coast-perpendicular dike is used to create water head and generate electricity via turbines inserted in the dike. Before starting such a project, the potential power output and hydrodynamic impacts of the dike must be assessed. In this study, a two-dimensional numerical model based on the Delft3D-FLOW module is established to simulate tides in China. A dike module is developed to account for turbine processes and estimate power output by integrating a special algorithm into the model. The domain decomposition technique is used to divide the computational zone into two subdomains with grid refinement near the dike. The hydrodynamic processes predicted by the model, both with and without the proposed construction, are examined in detail, including tidal currents and tidal energy flux. The predicted time-averaged power yields with various opening ratios are presented. The results show that time-averaged power yield peaks at an 8% opening ratio. For semidiurnal tides, the flow velocity increases in front of the head of the dike and decreases on either side. For diurnal tides, these changes are complicated by the oblique incidence of tidal currents with respect to the dike as well as by bathymetric features. The dike itself blocks the propagation of tidal energy flux.

  7. Improved Diagnostic Validity of the ADOS Revised Algorithms: A Replication Study in an Independent Sample

    ERIC Educational Resources Information Center

    Oosterling, Iris; Roos, Sascha; de Bildt, Annelies; Rommelse, Nanda; de Jonge, Maretha; Visser, Janne; Lappenschaar, Martijn; Swinkels, Sophie; van der Gaag, Rutger Jan; Buitelaar, Jan

    2010-01-01

    Recently, Gotham et al. ("2007") proposed revised algorithms for the Autism Diagnostic Observation Schedule (ADOS) with improved diagnostic validity. The aim of the current study was to replicate predictive validity, factor structure, and correlations with age and verbal and nonverbal IQ of the ADOS revised algorithms for Modules 1 and 2…

  8. Joint OSNR monitoring and modulation format identification in digital coherent receivers using deep neural networks.

    PubMed

    Khan, Faisal Nadeem; Zhong, Kangping; Zhou, Xian; Al-Arashi, Waled Hussein; Yu, Changyuan; Lu, Chao; Lau, Alan Pak Tao

    2017-07-24

    We experimentally demonstrate the use of deep neural networks (DNNs) in combination with signals' amplitude histograms (AHs) for simultaneous optical signal-to-noise ratio (OSNR) monitoring and modulation format identification (MFI) in digital coherent receivers. The proposed technique automatically extracts OSNR and modulation format dependent features of AHs, obtained after constant modulus algorithm (CMA) equalization, and exploits them for the joint estimation of these parameters. Experimental results for 112 Gbps polarization-multiplexed (PM) quadrature phase-shift keying (QPSK), 112 Gbps PM 16 quadrature amplitude modulation (16-QAM), and 240 Gbps PM 64-QAM signals demonstrate OSNR monitoring with mean estimation errors of 1.2 dB, 0.4 dB, and 1 dB, respectively. Similarly, the results for MFI show 100% identification accuracy for all three modulation formats. The proposed technique applies deep machine learning algorithms inside standard digital coherent receiver and does not require any additional hardware. Therefore, it is attractive for cost-effective multi-parameter estimation in next-generation elastic optical networks (EONs).

  9. Solar energy modulator

    NASA Technical Reports Server (NTRS)

    Hale, R. R. (Inventor); Mcdougal, A. R.

    1984-01-01

    A module is described with a receiver having a solar energy acceptance opening and supported by a mounting ring along the optic axis of a parabolic mirror in coaxial alignment for receiving solar energy from the mirror, and a solar flux modulator plate for varying the quantity of solar energy flux received by the acceptance opening of the module. The modulator plate is characterized by an annular, plate-like body, the internal diameter of which is equal to or slightly greater than the diameter of the solar energy acceptance opening of the receiver. Slave cylinders are connected to the modulator plate for supporting the plate for axial displacement along the axis of the mirror, thereby shading the opening with respect to solar energy flux reflected from the surface of the mirror to the solar energy acceptance opening.

  10. Testing activities at the National Battery Test Laboratory

    NASA Astrophysics Data System (ADS)

    Hornstra, F.; Deluca, W. H.; Mulcahey, T. P.

    The National Battery Test Laboratory (NBTL) is an Argonne National Laboratory facility for testing, evaluating, and studying advanced electric storage batteries. The facility tests batteries developed under Department of Energy programs and from private industry. These include batteries intended for future electric vehicle (EV) propulsion, electric utility load leveling (LL), and solar energy storage. Since becoming operational, the NBTL has evaluated well over 1400 cells (generally in the form of three- to six-cell modules, but up to 140-cell batteries) of various technologies. Performance characterization assessments are conducted under a series of charge/discharge cycles with constant current, constant power, peak power, and computer simulated dynamic load profile conditions. Flexible charging algorithms are provided to accommodate the specific needs of each battery under test. Special studies are conducted to explore and optimize charge procedures, to investigate the impact of unique load demands on battery performance, and to analyze the thermal management requirements of battery systems.

  11. Integrated tests of a high speed VXS switch card and 250 MSPS flash ADCs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    H. Dong, C. Cuevas, D. Curry, E. Jastrzembski, F. Barbosa, J. Wilson, M. Taylor, B. Raydo

    2008-01-01

    High trigger rate nuclear physics experiments proposed for the 12 GeV upgrade at the Thomas Jefferson National Accelerator Facility create a need for new high speed digital systems for energy summing. Signals from electronic detectors will be captured with the Jefferson Lab FADC module, which collects and processes data from 16 charged particle sensors with 10 or 12 bit resolution at 250 MHz sample rate. Up to sixteen FADC modules transfer energy information to a central energy summing module for each readout crate. The sums from the crates are combined to form a global energy sum that is used to trigger data readout for all modules. The Energy Sum module and FADC modules have been designed using the VITA-41 VME64 switched serial (VXS) standard. The VITA-41 standard defines payload and switch slot module functions, and offers an elegant engineered solution for Multi-Gigabit serial transmission on a standard VITA-41 backplane. The Jefferson Lab Energy Sum module receives data serially at a rate of up to 6 Giga-bits per second from the FADC modules. Both FADC and Energy Sum modules have been designed and assembled, and this paper describes the integrated tests using both high speed modules in unison.

  12. A Novel Energy Saving Algorithm with Frame Response Delay Constraint in IEEE 802.16e

    NASA Astrophysics Data System (ADS)

    Nga, Dinh Thi Thuy; Kim, Mingon; Kang, Minho

    Sleep-mode operation of a Mobile Subscriber Station (MSS) in IEEE 802.16e effectively saves energy consumption; however, it induces frame response delay. In this letter, we propose an algorithm to quickly find the optimal value of the final sleep interval in sleep mode in order to minimize energy consumption with respect to a given frame response delay constraint. The validation of our proposed algorithm through analytical and simulation results suggests that it provides potential guidance for energy saving.
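
    The trade-off can be illustrated with a Monte Carlo sketch of an 802.16e-style sleep mode in which sleep windows double from an initial value up to a final sleep interval, separated by one-frame listening windows. The arrival model, energy costs, and delay definition below are simplifying assumptions and do not reproduce the analytical model of the letter.

```python
import random

# Brute-force search for the final sleep interval: larger final windows save energy
# but increase the frame response delay. All model parameters are assumptions.

T_MIN, FRAME = 1, 1            # initial sleep window and listening window (frames)
E_SLEEP, E_LISTEN = 1.0, 10.0  # relative energy per frame in each state

def simulate(t_fin, arrival_rate=0.02, n_packets=2000, seed=1):
    rng = random.Random(seed)
    delays, energy, clock = [], 0.0, 0.0
    t_next = rng.expovariate(arrival_rate)
    while len(delays) < n_packets:
        win = T_MIN
        while True:                           # one sleep-mode episode until traffic arrives
            end = clock + win
            energy += win * E_SLEEP + FRAME * E_LISTEN
            if t_next < end:                  # packet arrived during this sleep window:
                delays.append(end - t_next)   # it is served at the next listening window
                clock = end + FRAME
                t_next = clock + rng.expovariate(arrival_rate)
                break
            clock = end + FRAME
            win = min(2 * win, t_fin)
    return energy / len(delays), sum(delays) / len(delays)

D_MAX = 40.0                                  # frame response delay constraint (frames)
best = min((t for t in [2, 4, 8, 16, 32, 64, 128] if simulate(t)[1] <= D_MAX),
           key=lambda t: simulate(t)[0], default=None)
print("chosen final sleep interval:", best)
```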

  13. Solving Energy-Aware Real-Time Tasks Scheduling Problem with Shuffled Frog Leaping Algorithm on Heterogeneous Platforms

    PubMed Central

    Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M.K.

    2015-01-01

    Reducing energy consumption is becoming very important in order to keep battery life and lower overall operational costs for heterogeneous real-time multiprocessor systems. In this paper, we first formulate this as a combinatorial optimization problem. Then, a successful meta-heuristic, called Shuffled Frog Leaping Algorithm (SFLA) is proposed to reduce the energy consumption. Precocity remission and local optimal avoidance techniques are proposed to avoid the precocity and improve the solution quality. Convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times less than ACO and GA, respectively, for finding the optimal solution. PMID:26110406

  14. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors.

    PubMed

    Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun

    2016-07-08

    Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction.

  15. Dual energy CT with one full scan and a second sparse-view scan using structure preserving iterative reconstruction (SPIR)

    NASA Astrophysics Data System (ADS)

    Wang, Tonghe; Zhu, Lei

    2016-09-01

    Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme which requires one full scan and a second sparse-view scan, for potential reduction in imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels, which is assumed unchanged on a second CT image since the DECT scans are performed on the same object. The second CT image from reduced projections is reconstructed by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its filtered image by the similarity matrix, under a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix used for CT reconstruction, we refer to the algorithm as structure preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR) and prior-image-constrained-compressed-sensing (PICCS). SPIR with a second 10-view scan reduces the image noise standard deviation by about one order of magnitude with the same spatial resolution as the full-view FBP image. SPIR substantially improves over TVR on the reconstruction accuracy of a 10-view scan by decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR on 50- and 20-view scans in spatial resolution, with a frequency at the 10% modulation transfer function value that is higher by an average factor of 4. Compared with the 20-view scan PICCS result, the SPIR image has 7 times lower noise standard deviation with similar spatial resolution. The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an average error of less than 1%.

  16. Achieving energy efficiency during collective communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundriyal, Vaibhav; Sosonkina, Masha; Zhang, Zhao

    2012-09-13

    Energy consumption has become a major design constraint in modern computing systems. With the advent of petaflops architectures, power-efficient software stacks have become imperative for scalability. Techniques such as dynamic voltage and frequency scaling (called DVFS) and CPU clock modulation (called throttling) are often used to reduce the power consumption of the compute nodes. To avoid significant performance losses, these techniques should be used judiciously during parallel application execution. For example, its communication phases may be good candidates to apply the DVFS and CPU throttling without incurring a considerable performance loss. They are often considered as indivisible operations although little attention is being devoted to the energy saving potential of their algorithmic steps. In this work, two important collective communication operations, all-to-all and allgather, are investigated as to their augmentation with energy saving strategies on the per-call basis. The experiments prove the viability of such a fine-grain approach. They also validate a theoretical power consumption estimate for multicore nodes proposed here. While keeping the performance loss low, the obtained energy savings were always significantly higher than those achieved when DVFS or throttling were switched on across the entire application run.

  17. Compton scattering collision module for OSIRIS

    NASA Astrophysics Data System (ADS)

    Del Gaudio, Fabrizio; Grismayer, Thomas; Fonseca, Ricardo; Silva, Luís

    2017-10-01

    Compton scattering plays a fundamental role in a variety of different astrophysical environments, such as at the gaps of pulsars and the stagnation surface of black holes. In these scenarios, Compton scattering is coupled with self-consistent mechanisms such as pair cascades. We present the implementation of a novel module, embedded in the self-consistent framework of the PIC code OSIRIS 4.0, capable of simulating Compton scattering from first principles and that is fully integrated with the self-consistent plasma dynamics. The algorithm accounts for the stochastic nature of Compton scattering, reproducing without approximations the exchange of energy between photons and unbound charged species. We present benchmarks of the code against the analytical results of Blumenthal et al. and the numerical solution of the linear Kompaneets equation, and good agreement is found between the simulations and the theoretical models. This work is supported by the European Research Council Grant (ERC-2015-AdG 695088) and the Fundação para a Ciência e a Tecnologia (Bolsa de Investigação PD/BD/114323/2016).

  18. Effects of activity and energy budget balancing algorithm on laboratory performance of a fish bioenergetics model

    USGS Publications Warehouse

    Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.

    2012-01-01

    We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.

  19. Experimental verification of a commercial Monte Carlo-based dose calculation module for high-energy photon beams.

    PubMed

    Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar

    2009-12-21

    The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogenous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near tissue equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (gamma) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of simple test cases. For depth dose curves in asymmetric beams similar gamma results as for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement to measurement values with a dose difference of 0.1% +/- 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV energy 10 x 10 cm(2) field at the first density interface from tissue to lung equivalent material. Small fields (2 x 2 cm(2)) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the heterogeneous phantom. For the clinical test cases, the average dose discrepancy was 0.5% +/- 1.1%. Relative dose investigations of the transverse plane for clinical beam arrangements were performed with a 2D gamma-evaluation procedure. For 3% dose difference and 3 mm DTA criteria, the average value for gamma(>1) was 4.7% +/- 3.7%, the average gamma(1%) value was 1.19 +/- 0.16 and the mean 2D gamma-value was 0.44 +/- 0.07 in the heterogeneous phantom. The iPlan MC algorithm leads to accurate dosimetric results under clinical test conditions.
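
    As a sketch of the evaluation metric used above, the following computes a 1D gamma index with 2% dose difference and 2 mm distance-to-agreement criteria between a measured and a calculated profile. Global normalization to the maximum calculated dose is assumed here; clinical implementations differ in normalization and interpolation, and the profiles are toy curves.

```python
import numpy as np

# Minimal 1D gamma evaluation (2%/2 mm). For each reference point, gamma is the minimum
# over evaluated points of the combined dose-difference / distance-to-agreement metric.

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_crit=0.02, dta_mm=2.0):
    d_norm = dose_crit * d_eval.max()        # global dose-difference criterion
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        dist2 = ((x_eval - xr) / dta_mm) ** 2
        dose2 = ((d_eval - dr) / d_norm) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return np.array(gammas)

x = np.linspace(0, 100, 201)                 # depth in mm
calc = np.exp(-x / 120.0)                    # toy calculated depth-dose curve
meas = np.exp(-x / 118.0) * 1.01             # toy measured depth-dose curve
g = gamma_1d(x, meas, x, calc)
print("pass rate (gamma <= 1):", round(float(np.mean(g <= 1.0)), 3))
```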

  20. Hierarchical image segmentation via recursive superpixel with adaptive regularity

    NASA Astrophysics Data System (ADS)

    Nakamura, Kensuke; Hong, Byung-Woo

    2017-11-01

    A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is dynamically determined from the local residual during the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving accuracy and yielding an appropriate number of segments. The qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform comparative analysis with state-of-the-art algorithms in hierarchical segmentation. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details. Our algorithm outperforms the other algorithms in terms of the balance between accuracy and computational time. Specifically, our method runs 36.48% faster than the region-merging approach, which is the fastest of the compared algorithms, while achieving comparable accuracy.

  1. An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques

    DTIC Science & Technology

    2018-01-09

    ARL-TR-8272, January 2018, US Army Research Laboratory: An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques.

  2. Validating module network learning algorithms using simulated data.

    PubMed

    Michoel, Tom; Maere, Steven; Bonnet, Eric; Joshi, Anagha; Saeys, Yvan; Van den Bulcke, Tim; Van Leemput, Koenraad; van Remortel, Piet; Kuiper, Martin; Marchal, Kathleen; Van de Peer, Yves

    2007-05-03

    In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Despite the demonstrated success of such algorithms in uncovering biologically relevant regulatory relations, further developments in the area are hampered by a lack of tools to compare the performance of alternative module network learning strategies. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of a bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators. We show that data simulators such as SynTReN are very well suited for the purpose of developing, testing and improving module network algorithms. We used SynTReN data to develop and test an alternative module network learning strategy, which is incorporated in the software package LeMoNe, and we provide evidence that this alternative strategy has several advantages with respect to existing methods.

  3. Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks

    PubMed Central

    2014-01-01

    Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226

  4. Conical : An extended module for computing a numerically satisfactory pair of solutions of the differential equation for conical functions

    NASA Astrophysics Data System (ADS)

    Dunster, T. M.; Gil, A.; Segura, J.; Temme, N. M.

    2017-08-01

    Conical functions appear in a large number of applications in physics and engineering. In this paper we describe an extension of our module Conical (Gil et al., 2012) for the computation of conical functions. Specifically, the module now includes a routine for computing the function R^m_{-1/2+iτ}(x), a real-valued numerically satisfactory companion of the function P^m_{-1/2+iτ}(x) for x > 1. In this way, a natural basis for solving Dirichlet problems bounded by conical domains is provided. The module also improves the performance of our previous algorithm for the conical function P^m_{-1/2+iτ}(x), and it now includes the computation of the first order derivative of the function. This is also considered for the function R^m_{-1/2+iτ}(x) in the extended algorithm.

  5. Selective Sensing of Gas Mixture via a Temperature Modulation Approach: New Strategy for Potentiometric Gas Sensor Obtaining Satisfactory Discriminating Features.

    PubMed

    Li, Fu-An; Jin, Han; Wang, Jinxia; Zou, Jie; Jian, Jiawen

    2017-03-12

    A new strategy to discriminate four types of hazardous gases is proposed in this research. By modulating the operating temperature and processing the response signal with a pattern recognition algorithm, a gas sensor consisting of a single sensing electrode, i.e., a ZnO/In₂O₃ composite, is designed to differentiate NO₂, NH₃, C₃H₆ and CO at levels of 50-400 ppm. Results indicate that, with 15 wt.% ZnO added to In₂O₃, the sensor fabricated at 900 °C shows optimal sensing characteristics in detecting all the studied gases. Moreover, with the aid of the principal component analysis (PCA) algorithm, the sensor operating in the temperature modulation mode demonstrates acceptable discrimination features. These satisfactory discrimination features suggest that it is possible to differentiate gas mixtures efficiently by operating a single-electrode sensor in temperature modulation mode.
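
    The pattern-recognition step can be sketched as follows: each measurement is the sensor response sampled at several modulation temperatures, and PCA projects those response vectors onto two components where the gases separate. The response patterns below are synthetic assumptions, not measured data from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# PCA on temperature-modulated response vectors (one vector per measurement).
# The per-gas mean response patterns and noise level are illustrative assumptions.

rng = np.random.default_rng(0)
gases = ["NO2", "NH3", "C3H6", "CO"]
patterns = {"NO2": [0.9, 0.7, 0.4, 0.2, 0.1],   # response at 5 modulation temperatures
            "NH3": [0.2, 0.5, 0.8, 0.6, 0.3],
            "C3H6": [0.1, 0.2, 0.5, 0.8, 0.9],
            "CO":  [0.4, 0.4, 0.4, 0.5, 0.6]}

X, labels = [], []
for gas in gases:
    for _ in range(30):                          # 30 noisy measurements per gas
        X.append(np.array(patterns[gas]) + rng.normal(0, 0.05, 5))
        labels.append(gas)

scores = PCA(n_components=2).fit_transform(np.array(X))
for gas in gases:                                # centroid of each gas cluster in PC space
    mask = [l == gas for l in labels]
    print(gas, scores[mask].mean(axis=0).round(2))
```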

  6. Intelligent deflection routing in buffer-less networks.

    PubMed

    Haeri, Soroush; Trajković, Ljiljana

    2015-02-01

    Deflection routing is employed to ameliorate packet loss caused by contention in buffer-less architectures such as optical burst-switched networks. The main goal of deflection routing is to successfully deflect a packet based only on the limited knowledge that network nodes possess about their environment. In this paper, we present a framework that introduces intelligence to deflection routing (iDef). iDef decouples the design of the signaling infrastructure from the underlying learning algorithm. It consists of a signaling module and a decision-making module. The signaling module implements a feedback management protocol, while the decision-making module implements a reinforcement learning algorithm. We also propose several learning-based deflection routing protocols, implement them in iDef using the ns-3 network simulator, and compare their performance.

  7. Architecture and Implementation of OpenPET Firmware and Embedded Software

    DOE PAGES

    Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; ...

    2016-01-11

    OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the versatility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics: a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method of the platform is adopted for the hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called a Support Board, where one Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from 8 Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules) to be processed mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration.

  8. Spectrum sensing algorithm based on autocorrelation energy in cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Ren, Shengwei; Zhang, Li; Zhang, Shibing

    2016-10-01

    Cognitive radio networks have wide applications in the smart home, personal communications and other wireless communication. Spectrum sensing is the main challenge in cognitive radios. This paper proposes a new spectrum sensing algorithm which is based on the autocorrelation energy of the received signal. By taking the autocorrelation energy of the received signal as the test statistic for spectrum sensing, the effect of channel noise on the detection performance is reduced. Simulation results show that the algorithm is effective and performs well at low signal-to-noise ratio. Compared with the maximum generalized eigenvalue detection (MGED) algorithm, the function of covariance matrix based detection (FMD) algorithm and the autocorrelation-based detection (AD) algorithm, the proposed algorithm has a 2-11 dB advantage.
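
    A minimal sketch of such a detector: the test statistic is the energy of the first few normalized autocorrelation lags of the received samples, which rises when a correlated primary signal is present. The lag count, threshold, and signal model below are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

# Autocorrelation-energy test statistic for spectrum sensing: noise-only samples have
# near-zero lag correlations, while a primary signal with memory raises the lag energy.

def autocorr_energy(x, max_lag=8):
    x = np.asarray(x, float)
    x = x - x.mean()
    r0 = np.dot(x, x)
    lags = [np.dot(x[:-k], x[k:]) / r0 for k in range(1, max_lag + 1)]
    return float(np.sum(np.square(lags)))

rng = np.random.default_rng(0)
n = 2000
noise = rng.normal(0, 1, n)
signal = np.convolve(rng.normal(0, 2, n), np.ones(8) / 8, mode="same")  # correlated primary signal
received = signal + rng.normal(0, 1, n)

threshold = 0.05                     # in practice set from a target false-alarm probability
print("noise only  :", round(autocorr_energy(noise), 4), "->", autocorr_energy(noise) > threshold)
print("signal+noise:", round(autocorr_energy(received), 4), "->", autocorr_energy(received) > threshold)
```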

  9. A low-dispersion, exactly energy-charge-conserving semi-implicit relativistic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Luis, Chacon; Bird, Robert; Stark, David; Yin, Lin; Albright, Brian

    2017-10-01

    Leap-frog based explicit algorithms, either "energy-conserving" or "momentum-conserving", do not conserve energy discretely. Time-centered fully implicit algorithms can conserve discrete energy exactly, but introduce large dispersion errors in the light-wave modes, regardless of timestep size. This can lead to intolerable simulation errors where highly accurate light propagation is needed (e.g. laser-plasma interactions, LPI). In this study, we selectively combine the leap-frog and Crank-Nicolson methods to produce a low-dispersion, exactly energy- and charge-conserving PIC algorithm. Specifically, we employ the leap-frog method for the Maxwell equations, and the Crank-Nicolson method for the particle equations. Such an algorithm admits exact global energy conservation, exact local charge conservation, and preserves the dispersion properties of the leap-frog method for the light wave. The algorithm has been implemented in a code named iVPIC, based on the VPIC code developed at LANL. We will present numerical results that demonstrate the properties of the scheme with sample test problems (e.g. a Weibel instability run for 10^7 timesteps, and LPI applications).

  10. Measuring modules for the research of compensators of reactive power with voltage stabilization in MATLAB

    NASA Astrophysics Data System (ADS)

    Vlasayevsky, Stanislav; Klimash, Stepan; Klimash, Vladimir

    2017-10-01

    A set of mathematical modules was developed for evaluating energy performance in studies of electrical systems and complexes in MATLAB. The SimPowerSystems electrotechnical library of MATLAB contains no measuring modules for the energy coefficients that characterize power quality and the energy efficiency of electrical apparatus. Modules designed to calculate energy coefficients characterizing power quality (current distortion and voltage distortion) and energy efficiency indicators (power factor and efficiency) are presented. The methods and principles of building the modules are described. Detailed schemes of the modules, built from elements of the Simulink library, are presented; these modules are therefore compatible with mathematical models of electrical systems and complexes in MATLAB. The results of testing the developed modules and of verifying them on schemes with analytical expressions for the energy indicators are also presented.
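
    For illustration, the quantities such measuring modules compute can be sketched from sampled waveforms with numpy rather than Simulink blocks: total harmonic distortion (THD) of a current and the power factor of a voltage/current pair over an integer number of cycles. The waveforms and parameters below are synthetic assumptions.

```python
import numpy as np

# THD and power factor from sampled waveforms over an integer number of fundamental cycles.

fs, f0, cycles = 10000, 50, 10                    # sample rate (Hz), fundamental (Hz), window length
t = np.arange(0, cycles / f0, 1 / fs)

voltage = 311 * np.sin(2 * np.pi * f0 * t)
current = 10 * np.sin(2 * np.pi * f0 * t - 0.4) + 2.0 * np.sin(2 * np.pi * 5 * f0 * t)

def thd(x):
    spec = np.abs(np.fft.rfft(x)) / len(x)
    k = cycles                                    # FFT bin of the fundamental
    harmonics = spec[2 * k::k]                    # bins of the 2nd, 3rd, ... harmonics
    return np.sqrt(np.sum(harmonics ** 2)) / spec[k]

def power_factor(v, i):
    p = np.mean(v * i)                            # active power
    s = np.sqrt(np.mean(v ** 2)) * np.sqrt(np.mean(i ** 2))   # apparent power
    return p / s

print("THD of current:", round(thd(current), 3))      # 2/10 = 0.2 for this toy current
print("power factor  :", round(power_factor(voltage, current), 3))
```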

  11. Task Decomposition Module For Telerobot Trajectory Generation

    NASA Astrophysics Data System (ADS)

    Wavering, Albert J.; Lumia, Ron

    1988-10-01

    A major consideration in the design of trajectory generation software for a Flight Telerobotic Servicer (FTS) is that the FTS will be called upon to perform tasks which require a diverse range of manipulator behaviors and capabilities. In a hierarchical control system where tasks are decomposed into simpler and simpler subtasks, the task decomposition module which performs trajectory planning and execution should therefore be able to accommodate a wide range of algorithms. In some cases, it will be desirable to plan a trajectory for an entire motion before manipulator motion commences, as when optimizing over the entire trajectory. Many FTS motions, however, will be highly sensory-interactive, such as moving to attain a desired position relative to a non-stationary object whose position is periodically updated by a vision system. In this case, the time-varying nature of the trajectory may be handled either by frequent replanning using updated sensor information, or by using an algorithm which creates a less specific state-dependent plan that determines the manipulator path as the trajectory is executed (rather than a priori). This paper discusses a number of trajectory generation techniques from these categories and how they may be implemented in a task decomposition module of a hierarchical control system. The structure, function, and interfaces of the proposed trajectory generation module are briefly described, followed by several examples of how different algorithms may be performed by the module. The proposed task decomposition module provides a logical structure for trajectory planning and execution, and supports a large number of published trajectory generation techniques.

  12. Parallel Computational Protein Design.

    PubMed

    Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang

    2017-01-01

    Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees to find the global minimum energy solution (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computation bottleneck of large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedups in large protein design cases with a small memory overhead comparing to the traditional A* search algorithm implementation, while still guaranteeing the optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle the problems in which the conformation space is too large and the global optimal solution cannot be computed previously. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with the state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.

  13. Prediction of distribution coefficient from structure. 1. Estimation method.

    PubMed

    Csizmadia, F; Tsantili-Kakoulidou, A; Panderi, I; Darvas, F

    1997-07-01

    A method has been developed for the estimation of the distribution coefficient (D), which considers the microspecies of a compound. D is calculated from the microscopic dissociation constants (microconstants), the partition coefficients of the microspecies, and the counterion concentration. A general equation for the calculation of D at a given pH is presented. The microconstants are calculated from the structure using Hammett and Taft equations. The partition coefficients of the ionic microspecies are predicted by empirical equations using the dissociation constants and the partition coefficient of the uncharged species, which are estimated from the structure by a Linear Free Energy Relationship method. The algorithm is implemented in a program module called PrologD.
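
    For a monoprotic acid, the pH dependence of the distribution coefficient can be sketched directly from the dissociation constant and the partition coefficients of the neutral and ionized microspecies. Counterion and ion-pairing effects, which the full method includes, are omitted here, and the numbers are illustrative rather than taken from the paper.

```python
import math

# Distribution coefficient of a monoprotic acid HA at a given pH:
# D = f(HA) * P(HA) + f(A-) * P(A-), with the ionized fraction from Henderson-Hasselbalch.

def log_d_acid(ph, pka, logp_neutral, logp_anion):
    frac_ionized = 1.0 / (1.0 + 10 ** (pka - ph))
    p_neutral, p_anion = 10 ** logp_neutral, 10 ** logp_anion
    d = (1.0 - frac_ionized) * p_neutral + frac_ionized * p_anion
    return math.log10(d)

# e.g. an ibuprofen-like acid: pKa ~ 4.4, logP(neutral) ~ 3.5, logP(anion) assumed ~ -0.5
for ph in (2.0, 4.4, 7.4):
    print(ph, round(log_d_acid(ph, 4.4, 3.5, -0.5), 2))
```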

  14. Development of an algorithm for improving quality and information processing capacity of MathSpeak synthetic speech renderings.

    PubMed

    Isaacson, M D; Srinivasan, S; Lloyd, L L

    2010-01-01

    MathSpeak is a set of rules for the non-ambiguous speaking of mathematical expressions. These rules have been incorporated into a computerised module that translates printed mathematics into the non-ambiguous MathSpeak form for synthetic speech rendering. Differences between individual utterances produced with the translator module are difficult to discern because of insufficient pausing between utterances; hence, the purpose of this study was to develop an algorithm for improving the synthetic speech rendering of MathSpeak. To improve synthetic speech renderings, an algorithm for inserting pauses was developed based upon recordings of middle and high school math teachers speaking mathematical expressions. Efficacy testing of this algorithm was conducted with college students without disabilities and high school/college students with visual impairments. Parameters measured included reception accuracy, short-term memory retention, MathSpeak processing capacity and various rankings concerning the quality of synthetic speech renderings. All parameters measured showed statistically significant improvements when the algorithm was used. The algorithm improves the quality and information processing capacity of synthetic speech renderings of MathSpeak. This increases the capacity of individuals with print disabilities to perform mathematical activities and to successfully fulfill science, technology, engineering and mathematics academic and career objectives.
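
    The pause-insertion step can be sketched as splitting a MathSpeak rendering into utterance chunks and inserting pauses whose lengths depend on the chunk boundary type before handing the text to a synthesizer. The chunking rule, pause lengths, and SSML-style break markup below are assumptions for illustration; the study derived its pause model from the teacher recordings.

```python
# Sketch of pause insertion between MathSpeak tokens. Pause lengths and the SSML-style
# <break> markup are illustrative assumptions, not the study's measured values.

PAUSE_MS = {"fraction": 350, "group": 250, "default": 150}

def insert_pauses(mathspeak_tokens):
    out = []
    for i, tok in enumerate(mathspeak_tokens):
        out.append(tok)
        if i == len(mathspeak_tokens) - 1:
            continue
        if tok in ("EndFraction", "Over"):
            ms = PAUSE_MS["fraction"]       # longer pause at fraction boundaries
        elif tok.startswith("End"):
            ms = PAUSE_MS["group"]          # medium pause when a grouping closes
        else:
            ms = PAUSE_MS["default"]
        out.append(f'<break time="{ms}ms"/>')
    return " ".join(out)

tokens = ["StartFraction", "x", "plus", "1", "Over", "x", "minus", "1", "EndFraction"]
print(insert_pauses(tokens))
```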

  15. Unsupervised Learning of Overlapping Image Components Using Divisive Input Modulation

    PubMed Central

    Spratling, M. W.; De Meyer, K.; Kompass, R.

    2009-01-01

    This paper demonstrates that nonnegative matrix factorisation is mathematically related to a class of neural networks that employ negative feedback as a mechanism of competition. This observation inspires a novel learning algorithm which we call Divisive Input Modulation (DIM). The proposed algorithm provides a mathematically simple and computationally efficient method for the unsupervised learning of image components, even in conditions where these elementary features overlap considerably. To test the proposed algorithm, a novel artificial task is introduced which is similar to the frequently-used bars problem but employs squares rather than bars to increase the degree of overlap between components. Using this task, we investigate how the proposed method performs on the parsing of artificial images composed of overlapping features, given the correct representation of the individual components; and secondly, we investigate how well it can learn the elementary components from artificial training images. We compare the performance of the proposed algorithm with its predecessors including variations on these algorithms that have produced state-of-the-art performance on the bars problem. The proposed algorithm is more successful than its predecessors in dealing with overlap and occlusion in the artificial task that has been used to assess performance. PMID:19424442
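
    Since the abstract relates DIM to nonnegative matrix factorisation, the sketch below shows the standard multiplicative-update NMF (Lee and Seung) that such a relation builds on; it is not the DIM update rule itself, and the data matrix is synthetic.

```python
import numpy as np

# Standard multiplicative-update NMF: V (pixels x images) is factorised as W @ H with
# nonnegative basis images W and activations H. Updates preserve nonnegativity.

rng = np.random.default_rng(0)
n_pixels, n_images, n_components = 64, 200, 8
V = rng.random((n_pixels, n_images))              # nonnegative data matrix (synthetic)

W = rng.random((n_pixels, n_components)) + 0.1    # basis images (components)
H = rng.random((n_components, n_images)) + 0.1    # activations
eps = 1e-9

for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print("relative reconstruction error:", round(err, 3))
```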

  16. Comparison of optimization algorithms in intensity-modulated radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Kendrick, Rachel

    Intensity-modulated radiation therapy is used to better conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. Algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.

  17. Performance of the CMS missing transverse momentum reconstruction in pp data at $\sqrt{s}$ = 8 TeV

    DOE PAGES

    Khachatryan, Vardan

    2015-02-12

    The performance of missing transverse energy reconstruction algorithms is presented using √s = 8 TeV proton-proton (pp) data collected with the CMS detector. Events with anomalous missing transverse energy are studied, and the performance of algorithms used to identify and remove these events is presented. The scale and resolution for missing transverse energy, including the effects of multiple pp interactions (pileup), are measured using events with an identified Z boson or isolated photon, and are found to be well described by the simulation. Novel missing transverse energy reconstruction algorithms, developed specifically to mitigate the effects of large numbers of pileup interactions on the missing transverse energy resolution, are presented. These algorithms significantly reduce the dependence of the missing transverse energy resolution on pileup interactions. Furthermore, an algorithm that provides an estimate of the significance of the missing transverse energy is presented, which is used to estimate the compatibility of the reconstructed missing transverse energy with a zero nominal value.

  18. Modeling Atmospheric Aerosols in WRF/Chem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yang; Hu, X.-M.; Howell, G.

    2005-06-01

    In this study, three aerosol modules are tested and compared. The first module is the Modal Aerosol Dynamics Model for Europe (MADE) with the secondary organic aerosol model (SORGAM) (referred to as MADE/SORGAM). The second module is the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC). The third module is the Model of Aerosol Dynamics, Reaction, Ionization and Dissolution (MADRID). The three modules differ in terms of size representation used, chemical species treated, assumptions and numerical algorithms used. Table 1 compares the major processes among the three aerosol modules.

  19. Wireless acoustic modules for real-time data fusion using asynchronous sniper localization algorithms

    NASA Astrophysics Data System (ADS)

    Hengy, S.; De Mezzo, S.; Duffner, P.; Naz, P.

    2012-11-01

    The presence of snipers in modern conflicts leads to high insecurity for soldiers. In order to improve the soldier's protection against this threat, the French-German Research Institute of Saint-Louis (ISL) has been conducting studies in the domain of acoustic localization of shots. Mobile antennas mounted on the soldier's helmet were initially used for real-time detection, classification and localization of sniper shots. This approach showed good performance in land scenarios, but also in urban scenarios if the array was in the shot corridor, meaning that the microphones first detect the direct wave and then the reflections of the Mach and muzzle waves (15% distance estimation error relative to the actual shooter-to-array distance). Fusing data sent by multiple sensor nodes distributed on the field revealed some of the limitations of the technologies implemented in ISL's demonstrators. Among others, the determination of the arrays' orientation was not accurate enough, thereby degrading the performance of data fusion. New solutions have been developed in the past year to obtain better data fusion performance. Asynchronous localization algorithms have been developed and applied in post-processing to data measured in both free-field and urban environments, with acoustic modules on the line of sight of the shooter. These results are presented in the first part of the paper. The impact of GPS position estimation error is also discussed in order to evaluate the possible use of those algorithms for real-time processing with mobile acoustic nodes. Within ISL's transverse project IMOTEP (IMprovement Of optical and acoustical TEchnologies for the Protection), demonstrators are being developed that will allow real-time asynchronous localization of sniper shots. An embedded detection and classification algorithm is implemented on wireless acoustic modules that send the relevant information to a central PC. Data fusion is then performed and the estimated position of the shooter is sent back to the users. A SWIR active imaging system is used for localization refinement. A built-in DSP handles the detection/classification tasks on each acoustic module. A GPS module is used for time-difference-of-arrival measurement and estimation of the module's position. Wireless communication is supported using ZigBee technology. These acoustic modules are described in the article, and first results of real-time asynchronous sniper localization using them are discussed.

  20. NREL module energy rating methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, C.; Newmiller, J.; Kroposki, B.

    1995-11-01

    The goals of this project were to develop a tool for evaluating one module in different climates and for comparing different modules; to provide a Q&D method for estimating periodic energy production; to provide an achievable module rating; to provide an incentive for manufacturers to optimize modules for non-STC conditions; and to establish a consensus-based, NREL-sponsored activity. The approach taken was to simulate module energy production for five reference days representing various weather conditions. A performance model was developed.

  1. Development of Low Cost, High Energy-Per-Unit-Area Solar Cell Modules

    NASA Technical Reports Server (NTRS)

    Jones, G. T.; Chitre, S.

    1977-01-01

    Work on the development of low cost, high energy-per-unit-area solar cell modules was conducted. Hexagonal solar cell and module efficiencies, module packing ratio, and solar cell design calculations were made. The cell grid structure and interconnection pattern were designed, and the module substrates were fabricated for the three modules to be used. It was demonstrated that surface macrostructures significantly improve cell power output and photovoltaic energy conversion efficiency.

  2. Energy-aware scheduling of surveillance in wireless multimedia sensor networks.

    PubMed

    Wang, Xue; Wang, Sheng; Ma, Junjie; Sun, Xinyao

    2010-01-01

    Wireless sensor networks involve a large number of sensor nodes with limited energy supply, which constrains the behavior of their applications. In wireless multimedia sensor networks, sensor nodes are equipped with audio and visual information collection modules, and multimedia content is ubiquitously retrieved in surveillance applications. To address the energy problems during target surveillance with wireless multimedia sensor networks, an energy-aware sensor scheduling method is proposed in this paper. Sensor nodes which acquire acoustic signals are deployed randomly in the sensing field. Target localization is based on the signal energy feature provided by multiple sensor nodes, employing particle swarm optimization (PSO). During the target surveillance procedure, sensor nodes are adaptively grouped in a fully distributed manner. Specifically, the target motion information is extracted by a forecasting algorithm based on the hidden Markov model (HMM). The forecasting results are utilized to awaken sensor nodes in the vicinity of the future target position. Based on two properties, the signal energy feature and the residual energy, each sensor node decides independently whether to participate in target detection using a fuzzy control approach. Meanwhile, the local routing scheme of data transmission towards the observer is discussed. Experimental results demonstrate the efficiency of energy-aware scheduling of surveillance in wireless multimedia sensor networks, where significant energy saving is achieved by the sensor awakening approach and data transmission paths are calculated with low computational complexity.

  3. MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER

    NASA Technical Reports Server (NTRS)

    Barton, R. S.

    1994-01-01

    The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is a routine found in "Numerical Recipes in C: The Art of Scientific Programming," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. 
MEDOF was developed in 1992-1993.

  4. Automatic Fault Recognition of Photovoltaic Modules Based on Statistical Analysis of UAV Thermography

    NASA Astrophysics Data System (ADS)

    Kim, D.; Youn, J.; Kim, C.

    2017-08-01

    As a malfunctioning PV (Photovoltaic) cell has a higher temperature than adjacent normal cells, we can detect it easily with a thermal infrared sensor. However, it would be time-consuming to inspect large-scale PV power plants with a hand-held thermal infrared sensor. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of the thermal intensity (surface temperature) characteristics of each PV module, with the mean intensity and standard deviation of each panel as parameters for fault diagnosis. One of the characteristics of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels is not applicable in the fault detection algorithm. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules from individual arrays automatically. The performance of the proposed algorithm was tested on three sample images; this verified a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection compared to a global detection rule.
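
    A minimal sketch of the local detection idea follows: flag modules whose mean thermal intensity is unusually high relative to the other modules of the same array. The sensitivity factor k is an assumed illustrative value, not a threshold from the paper.

      import numpy as np

      def detect_faulty_modules(module_means, k=2.0):
          """Flag PV modules whose mean thermal intensity is anomalously high
          relative to the other modules in the same array (a local detection rule).
          `module_means` holds per-module mean intensities from one array; `k` is
          an assumed sensitivity threshold."""
          module_means = np.asarray(module_means, dtype=float)
          array_mean = module_means.mean()
          array_std = module_means.std()
          return np.where(module_means > array_mean + k * array_std)[0]

      # Example: module 3 runs hot compared with its neighbours
      print(detect_faulty_modules([31.2, 31.5, 30.9, 36.8, 31.1, 31.4]))  # -> [3]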

  5. Modulation of high frequency noise by engine tones of small boats.

    PubMed

    Pollara, Alexander; Sutin, Alexander; Salloum, Hady

    2017-07-01

    The effect of modulation of high frequency ship noise by propeller rotation frequencies is well known. This modulation is observed with the Detection of Envelope Modulation on Noise (DEMON) algorithm. Analysis of the DEMON spectrum allows the revolutions per minute and number of blades of the propeller to be determined. This work shows that the high frequency noise of a small boat can also be modulated by engine frequencies. Prior studies have not reported high frequency noise amplitude modulated at engine frequencies. This modulation is likely produced by bubbles from the engine exhaust system.
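
    The DEMON processing chain referred to above can be sketched as follows: band-pass the high-frequency noise, take its envelope, and inspect the envelope spectrum, whose lines sit at the propeller or, as reported here, engine modulation frequencies. The band edges and signal parameters below are illustrative assumptions.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def demon_spectrum(x, fs, band=(5e3, 20e3)):
          """Basic DEMON sketch: band-pass the high-frequency noise, envelope-detect,
          and return the spectrum of the envelope, whose peaks reveal the modulation
          frequencies. The band edges are assumed values, not from the paper."""
          b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
          hf = filtfilt(b, a, x)                  # isolate the high-frequency noise
          env = np.abs(hilbert(hf))               # amplitude envelope
          env -= env.mean()                       # remove DC before the FFT
          spec = np.abs(np.fft.rfft(env))
          freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
          return freqs, spec

      # Synthetic example: broadband noise amplitude-modulated at 47 Hz (an "engine line")
      fs = 48_000
      t = np.arange(fs) / fs
      x = (1 + 0.5 * np.sin(2 * np.pi * 47 * t)) * np.random.randn(fs)
      freqs, spec = demon_spectrum(x, fs)
      print(freqs[np.argmax(spec[1:200]) + 1])    # peak near 47 Hz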

  6. Effect of a Noise-Optimized Second-Generation Monoenergetic Algorithm on Image Noise and Conspicuity of Hypervascular Liver Tumors: An In Vitro and In Vivo Study.

    PubMed

    Marin, Daniele; Ramirez-Giraldo, Juan Carlos; Gupta, Sonia; Fu, Wanyi; Stinnett, Sandra S; Mileto, Achille; Bellini, Davide; Patel, Bhavik; Samei, Ehsan; Nelson, Rendon C

    2016-06-01

    The purpose of this study is to investigate whether the reduction in noise using a second-generation monoenergetic algorithm can improve the conspicuity of hypervascular liver tumors on dual-energy CT (DECT) images of the liver. An anthropomorphic liver phantom in three body sizes, with iodine-containing inserts simulating hypervascular lesions, was imaged with DECT and single-energy CT at various tube voltages (80-140 kV). In addition, a retrospective clinical study was performed in 31 patients with 66 hypervascular liver tumors who underwent DECT during the late hepatic arterial phase. Datasets at energy levels ranging from 40 to 80 keV were reconstructed using first- and second-generation monoenergetic algorithms. Noise, tumor-to-liver contrast-to-noise ratio (CNR), and CNR with a noise constraint (CNRNC), set with a maximum noise increase of 50%, were calculated and compared among the different reconstructed datasets. The maximum CNR for the second-generation monoenergetic algorithm, which was attained at 40 keV in both phantom and clinical datasets, was statistically significantly higher than the maximum CNR for the first-generation monoenergetic algorithm (p < 0.001) or single-energy CT acquisitions across a wide range of kilovoltage values. With the second-generation monoenergetic algorithm, the optimal CNRNC occurred at 55 keV, corresponding to lower energy levels compared with the first-generation algorithm (predominantly at 70 keV). Patient body size did not substantially affect the selection of the optimal energy level to attain maximal CNR and CNRNC using the second-generation monoenergetic algorithm. A noise-optimized second-generation monoenergetic algorithm significantly improves the conspicuity of hypervascular liver tumors.
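
    The two figures of merit used in the study can be expressed compactly: CNR is the tumor-to-liver contrast divided by image noise, and the noise-constrained variant (CNRNC) picks the best CNR among energy levels whose noise stays within 50% of a reference. The numbers in the example are illustrative, not measurements from the paper.

      def cnr(tumor_hu, liver_hu, noise_hu):
          """Tumor-to-liver contrast-to-noise ratio."""
          return abs(tumor_hu - liver_hu) / noise_hu

      def best_kev_with_noise_constraint(levels, reference_noise, max_increase=0.5):
          """Pick the energy level with the highest CNR among those whose noise does not
          exceed the reference noise by more than `max_increase` (50%, as in the abstract).
          `levels` maps keV -> (tumor_hu, liver_hu, noise_hu)."""
          admissible = {kev: vals for kev, vals in levels.items()
                        if vals[2] <= reference_noise * (1 + max_increase)}
          return max(admissible, key=lambda kev: cnr(*admissible[kev]))

      # Illustrative numbers only
      levels = {40: (260, 120, 40), 55: (190, 105, 22), 70: (150, 95, 15)}
      print(best_kev_with_noise_constraint(levels, reference_noise=15))   # -> 55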

  7. Page layout analysis and classification for complex scanned documents

    NASA Astrophysics Data System (ADS)

    Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan

    2011-09-01

    A framework for region/zone classification in color and gray-scale scanned documents is proposed in this paper. The algorithm includes modules for extracting text, photo, and strong edge/line regions. First, a text detection module based on wavelet analysis and the Run Length Encoding (RLE) technique is employed. Local and global energy maps in high frequency bands of the wavelet domain are generated and used as initial text maps. Further analysis using RLE yields a final text map. The second module is developed to detect image/photo and pictorial regions in the input document. A block-based classifier using basis vector projections is employed to identify photo candidate regions. Then, a final photo map is obtained by applying a probabilistic model based on Markov random field (MRF) maximum a posteriori (MAP) optimization with iterated conditional modes (ICM). The final module detects lines and strong edges using the Hough transform and edge-linkage analysis, respectively. The text, photo, and strong edge/line maps are combined to generate a page layout classification of the scanned target document. Experimental results and objective evaluation show that the proposed technique performs very effectively on a variety of simple and complex scanned document types obtained from the MediaTeam Oulu document database. The proposed page layout classifier can be used in systems for efficient document storage, content based document retrieval, optical character recognition, mobile phone imagery, and augmented reality.

  8. Energy Conservation Curriculum for Secondary and Post-Secondary Students. Module 1: Awareness of the Energy Dilemma.

    ERIC Educational Resources Information Center

    Navarro Coll., Corsicana, TX.

    This module is the first in a series of eleven modules in an energy conservation curriculum for secondary and postsecondary vocational students. It is designed for use by itself, to be integrated with the other ten modules into a program on energy conservation, or to be integrated into conventional vocational courses as a unit of instruction. The…

  9. Cooperative network clustering and task allocation for heterogeneous small satellite network

    NASA Astrophysics Data System (ADS)

    Qin, Jing

    The research of small satellites has emerged as a hot topic in recent years because of their economical prospects and convenience in launching and design. Due to the size and energy constraints of small satellites, forming a small satellite network (SSN), in which all the satellites cooperate with each other to finish tasks, is an efficient and effective way to utilize them. In this dissertation, I designed and evaluated a weight-based dominating set clustering algorithm, which efficiently organizes the satellites into stable clusters. The traditional clustering algorithms of large monolithic satellite networks, such as formation flying and satellite swarms, are often limited in the automatic formation of clusters. Therefore, a novel Distributed Weight-based Dominating Set (DWDS) clustering algorithm is designed to address the clustering problems in stochastically deployed SSNs. Considering the unique features of small satellites, this algorithm is able to form the clusters efficiently and stably. In this algorithm, satellites are separated into different groups according to their spatial characteristics. A minimum dominating set is chosen as the candidate cluster head set based on the nodes' weights, each of which is a weighted combination of residual energy and connection degree. The cluster heads then admit new neighbors that accept their invitations into the cluster, until the maximum cluster size is reached. Simulation results show that, in an SSN with 200 to 800 nodes, the algorithm is able to efficiently cluster more than 90% of nodes within 3 seconds. The Deadline Based Resource Balancing (DBRB) task allocation algorithm is designed for efficient task allocation in heterogeneous LEO small satellite networks. In the task allocation process, the dispatcher needs to consider the deadlines of the tasks as well as the residual energy of different resources for best energy utilization. We assume the tasks adopt a Map-Reduce framework, in which a task can consist of multiple subtasks. The DBRB algorithm is deployed on the head node of a cluster. It gathers the status from each cluster member and calculates their Node Importance Factors (NIFs) from the carried resources, residual power and compute capacity. The algorithm calculates the number of concurrent subtasks based on the deadlines, and allocates the subtasks to the nodes according to their NIF values. The simulation results show that when cluster members carry multiple resources, resources are more balanced and rare resources serve longer in DBRB than in the Earliest Deadline First algorithm. We also show that the algorithm performs well in service isolation by serving multiple tasks with different deadlines. Moreover, the average task response time with various cluster size settings is well controlled within deadlines. In addition to non-realtime tasks, small satellites may execute realtime tasks as well. Location-dependent tasks, such as image capturing, data transmission and remote sensing, are realtime tasks that are required to be started or finished at specific times. A resource energy balancing algorithm for a mixed realtime and non-realtime workload is developed to efficiently schedule the tasks for best system performance. It calculates the residual energy for each resource type and tries to preserve resources and node availability when distributing tasks. Non-realtime tasks can be preempted by realtime tasks to provide better QoS to realtime tasks. 
I compared the performance of the proposed algorithm with a random-priority scheduling algorithm, with only realtime tasks, only non-realtime tasks, and mixed tasks. The results show that the resource energy reservation algorithm outperforms the latter with both balanced and imbalanced workloads. Although the resource energy balancing task allocation algorithm for mixed workloads provides a preemption mechanism for realtime tasks, realtime tasks can still fail due to resource exhaustion. Because LEO small satellites fly around the Earth on stable orbits, the location-dependent realtime tasks can be considered periodic tasks. Therefore, it is possible to reserve energy for these realtime tasks. The resource energy reservation algorithm preserves energy for the realtime tasks when the execution routine of the periodic realtime tasks is known. In order to reserve energy for tasks starting so early in each period that the node has not yet charged enough energy, an energy wrapping mechanism is also designed to carry over the residual energy from the previous period. The simulation results show that without energy reservation, the realtime task failure rate can reach more than 60% when the workload is highly imbalanced. In contrast, the resource energy reservation algorithm produces zero realtime task failures and leads to equal or better aggregate system throughput than the non-reservation algorithm. The proposed algorithm also preserves more energy because it avoids task preemption. (Abstract shortened by ProQuest.).
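
    The cluster-head selection step of a weight-based dominating-set scheme can be sketched as follows: every node gets a weight combining normalized residual energy and connection degree, and heads are chosen greedily until every node is either a head or a neighbor of one. The actual DWDS algorithm runs in a distributed fashion on the satellites and uses its own weighting; the alpha factor and the toy graph here are assumptions for illustration.

      def select_cluster_heads(neighbors, residual_energy, alpha=0.6):
          """Greedy weight-based dominating-set sketch: pick cluster heads so that
          every node is a head or adjacent to one. `alpha` weights residual energy
          against connection degree and is an assumed value."""
          max_e = max(residual_energy.values()) or 1.0
          max_d = max(len(v) for v in neighbors.values()) or 1

          def weight(n):
              return alpha * residual_energy[n] / max_e + (1 - alpha) * len(neighbors[n]) / max_d

          uncovered = set(neighbors)
          heads = []
          while uncovered:
              # candidates are nodes that would still cover at least one uncovered node
              candidates = uncovered | {n for n in neighbors if set(neighbors[n]) & uncovered}
              best = max(candidates, key=weight)
              heads.append(best)
              uncovered -= {best} | set(neighbors[best])
          return heads

      # Tiny 5-node example (adjacency lists, residual energy in arbitrary units)
      neighbors = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
      energy = {1: 0.9, 2: 0.4, 3: 0.8, 4: 0.7, 5: 0.5}
      print(select_cluster_heads(neighbors, energy))   # -> [3, 4]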

  10. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Xu, Yiming; Liu, Jun

    2017-01-01

    For event dynamic K-coverage algorithms, each management node selects its assistant node by using a greedy algorithm, without considering the residual energy and situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). First, after the network achieves 1-coverage, the nodes that detect the same event compete to become the event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of each of its neighbor nodes being selected for the event it manages, using the distance level, the residual energy level, and the number of events these nodes dynamically cover. Third, each management node establishes an optimization model that takes the expected energy consumption, the residual energy variance of its neighbor nodes, and the detection performance of the events it manages as objectives. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and selects the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm first considers the effect of harsh underwater environments on information collection and transmission. It also considers the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network’s best service quality and lifetime. PMID:28106837

  11. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks.

    PubMed

    Jiang, Peng; Xu, Yiming; Liu, Jun

    2017-01-19

    For event dynamic K-coverage algorithms, each management node selects its assistant node by using a greedy algorithm, without considering the residual energy and situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). First, after the network achieves 1-coverage, the nodes that detect the same event compete to become the event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of each of its neighbor nodes being selected for the event it manages, using the distance level, the residual energy level, and the number of events these nodes dynamically cover. Third, each management node establishes an optimization model that takes the expected energy consumption, the residual energy variance of its neighbor nodes, and the detection performance of the events it manages as objectives. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and selects the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm first considers the effect of harsh underwater environments on information collection and transmission. It also considers the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime.

  12. Magnetic resonance imaging diffusion tensor tractography: evaluation of anatomic accuracy of different fiber tracking software packages.

    PubMed

    Feigl, Guenther C; Hiergeist, Wolfgang; Fellner, Claudia; Schebesch, Karl-Michael M; Doenitz, Christian; Finkenzeller, Thomas; Brawanski, Alexander; Schlaier, Juergen

    2014-01-01

    Diffusion tensor imaging (DTI)-based tractography has become an integral part of preoperative diagnostic imaging in many neurosurgical centers, and other nonsurgical specialties depend increasingly on DTI tractography as a diagnostic tool. The aim of this study was to analyze the anatomic accuracy of visualized white matter fiber pathways using different, readily available DTI tractography software programs. Magnetic resonance imaging scans of the head of 20 healthy volunteers were acquired using a Siemens Symphony TIM 1.5T scanner and a 12-channel head array coil. The standard settings of the scans in this study were 12 diffusion directions and 5-mm slices. The fornices were chosen as an anatomic structure for the comparative fiber tracking. Identical data sets were loaded into nine different fiber tracking packages that used different algorithms. The nine software packages and algorithms used were NeuroQLab (modified tensor deflection [TEND] algorithm), the Sörensen DTI task card (modified streamline tracking technique algorithm), the Siemens DTI module (modified fourth-order Runge-Kutta algorithm), six different configurations from Trackvis (interpolated streamline algorithm, modified FACT algorithm, second-order Runge-Kutta algorithm, Q-ball [FACT algorithm], tensorline algorithm, Q-ball [second-order Runge-Kutta algorithm]), DTI Query (modified streamline tracking technique algorithm), Medinria (modified TEND algorithm), Brainvoyager (modified TEND algorithm), DTI Studio (modified FACT algorithm), and the BrainLab DTI module based on the modified Runge-Kutta algorithm. A neuroradiologist, a magnetic resonance imaging physicist, and a neurosurgeon served as examiners. They were double-blinded with respect to the test subject and the fiber tracking software used in the presented images. Each examiner evaluated 301 images. The examiners were instructed to evaluate screenshots from the different programs based on two main criteria: (i) anatomic accuracy of the course of the displayed fibers and (ii) the number of fibers displayed outside the anatomic boundaries. The mean overall grade for anatomic accuracy was 2.2 (range, 1.1-3.6) with a standard deviation (SD) of 0.9. The mean overall grade for incorrectly displayed fibers was 2.5 (range, 1.6-3.5) with an SD of 0.6. The mean grade of the overall program ranking was 2.3 with an SD of 0.6. The overall mean grade of the program ranked number one (NeuroQLab) was 1.7 (range, 1.5-2.8). The mean overall grade of the program ranked last (BrainLab iPlan Cranial 2.6 DTI Module) was 3.3 (range, 1.7-4). The difference between the mean grades of these two programs was statistically highly significant (P < 0.0001). There was no statistically significant difference between the programs ranked 1-3: NeuroQLab, the Sörensen DTI Task Card, and the Siemens DTI module. The results of this study show that there is a statistically significant difference in the anatomic accuracy of the tested DTI fiber tracking programs. Whereas incorrectly displayed fibers could lead to wrong conclusions in the neurosciences, which rely heavily on this noninvasive imaging technique, in neurosurgery they could lead to surgical decisions potentially harmful for the patient if used without intraoperative cortical stimulation. DTI fiber tracking is a valuable noninvasive preoperative imaging tool, but it requires further validation and standardization of the currently available acquisition and processing techniques. 
Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Crosstalk mitigation using pilot assisted least square algorithm in OFDM-carrying orbital angular momentum multiplexed free-space-optical communication links.

    PubMed

    Sun, Tengfen; Liu, Minwen; Li, Yingchun; Wang, Min

    2017-10-16

    In this paper, we experimentally investigate crosstalk mitigation using the pilot-assisted least squares (LS) algorithm for 16-ary quadrature amplitude modulation orthogonal frequency division multiplexing (16QAM-OFDM) signals in orbital angular momentum (OAM) multiplexed free-space-optical communication (FSO) links. At the demodulating spatial light modulators (SLMs), we load distorted phase holograms that carry the information of atmospheric turbulence obeying the modified Hill spectrum, and we verify experimentally that these holograms introduce crosstalk. The pilot-assisted LS algorithm efficiently improves system performance: the constellation points move closer to the reference points, and an improvement of around two orders of magnitude in bit-error rate (BER) is obtained.
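
    The pilot-assisted least-squares idea is the textbook one: estimate the channel (or crosstalk) response at pilot subcarriers as H = Y/X, interpolate across the remaining subcarriers, and equalize the data symbols with the estimate. The sketch below shows that generic procedure, not the exact processing chain of the experiment.

      import numpy as np

      def ls_channel_estimate(rx_pilots, tx_pilots, pilot_idx, n_subcarriers):
          """Pilot-assisted least-squares channel estimate for OFDM: H = Y/X at the
          pilot subcarriers, linearly interpolated (real and imaginary parts) over
          the remaining subcarriers. A generic sketch of the method."""
          h_pilots = rx_pilots / tx_pilots
          all_idx = np.arange(n_subcarriers)
          h_real = np.interp(all_idx, pilot_idx, h_pilots.real)
          h_imag = np.interp(all_idx, pilot_idx, h_pilots.imag)
          return h_real + 1j * h_imag

      def equalize(rx_symbols, h_est):
          """One-tap zero-forcing equalisation with the estimated response."""
          return rx_symbols / h_est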

  14. Hearing through the noise: Biologically inspired noise reduction

    NASA Astrophysics Data System (ADS)

    Lee, Tyler Paul

    Vocal communication in the natural world demands that a listener perform a remarkably complicated task in real-time. Vocalizations mix with all other sounds in the environment as they travel to the listener, arriving as a jumbled low-dimensional signal. A listener must then use this signal to extract the structure corresponding to individual sound sources. How this computation is implemented in the brain remains poorly understood, yet an accurate description of such mechanisms would impact a variety of medical and technological applications of sound processing. In this thesis, I describe initial work on how neurons in the secondary auditory cortex of the Zebra Finch extract song from naturalistic background noise. I then build on our understanding of the function of these neurons by creating an algorithm that extracts speech from natural background noise using spectrotemporal modulations. The algorithm, implemented as an artificial neural network, can be flexibly applied to any class of signal or noise and performs better than an optimal frequency-based noise reduction algorithm for a variety of background noises and signal-to-noise ratios. One potential drawback to using spectrotemporal modulations for noise reduction, though, is that analyzing the modulations present in an ongoing sound requires a latency set by the slowest temporal modulation computed. The algorithm avoids this problem by reducing noise predictively, taking advantage of the large amount of temporal structure present in natural sounds. This predictive denoising has ties to recent work suggesting that the auditory system uses attention to focus on predicted regions of spectrotemporal space when performing auditory scene analysis.

  15. New Operational Algorithms for Particle Data from Low-Altitude Polar-Orbiting Satellites

    NASA Astrophysics Data System (ADS)

    Machol, J. L.; Green, J. C.; Rodriguez, J. V.; Onsager, T. G.; Denig, W. F.

    2010-12-01

    As part of the algorithm development effort started under the former National Polar-orbiting Operational Environmental Satellite System (NPOESS) program, the NOAA Space Weather Prediction Center (SWPC) is developing operational algorithms for the next generation of low-altitude polar-orbiting weather satellites. This presentation reviews the two new algorithms on which SWPC has focused: Energetic Ions (EI) and Auroral Energy Deposition (AED). Both algorithms take advantage of the improved performance of the Space Environment Monitor - Next (SEM-N) sensors over earlier SEM instruments flown on NOAA Polar Orbiting Environmental Satellites (POES). The EI algorithm iterates a piecewise power law fit in order to derive a differential energy flux spectrum for protons with energies from 10-250 MeV. The algorithm provides the data in physical units (MeV/cm2-s-str-keV) instead of just counts/s as was done in the past, making the data generally more useful and easier to integrate into higher level products. The AED algorithm estimates the energy flux deposited into the atmosphere by precipitating low- and medium-energy charged particles. The AED calculations include particle pitch-angle distributions, information that was not available from POES. This presentation also describes methods that we are evaluating for creating higher level products that would specify the global particle environment based on real time measurements.
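
    A single segment of such a power-law fit reduces to linear regression in log-log space, as sketched below; the operational EI algorithm iterates this piecewise over the 10-250 MeV channels, which is not reproduced here. The channel values are synthetic.

      import numpy as np

      def power_law_fit(energies_mev, fluxes):
          """Fit a single power law j(E) = A * E**(-gamma) by linear regression in
          log-log space (one segment of a piecewise fit)."""
          log_e, log_j = np.log(energies_mev), np.log(fluxes)
          slope, log_a = np.polyfit(log_e, log_j, 1)
          return np.exp(log_a), -slope          # A, spectral index gamma

      # Synthetic channel fluxes following E^-2 (arbitrary units)
      e = np.array([10.0, 30.0, 80.0, 150.0, 250.0])
      j = 1e4 * e ** -2
      print(power_law_fit(e, j))                # ~ (10000.0, 2.0)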

  16. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors

    PubMed Central

    Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun

    2016-01-01

    Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction. PMID:27399722

  17. Systems and methods for compensating for electrical converter nonlinearities

    DOEpatents

    Perisic, Milun; Ransom, Ray M.; Kajouke, Lateef A.

    2013-06-18

    Systems and methods are provided for delivering energy from an input interface to an output interface. An electrical system includes an input interface, an output interface, an energy conversion module coupled between the input interface and the output interface, and a control module. The control module determines a duty cycle control value for operating the energy conversion module to produce a desired voltage at the output interface. The control module determines an input power error at the input interface and adjusts the duty cycle control value in a manner that is influenced by the input power error, resulting in a compensated duty cycle control value. The control module operates switching elements of the energy conversion module to deliver energy to the output interface with a duty cycle that is influenced by the compensated duty cycle control value.
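
    The control idea described above can be caricatured as follows: start from the nominal duty cycle for the desired output voltage and accumulate a correction driven by the measured input power error. The gain and limits below are assumed values for illustration only, not parameters from the patent.

      class DutyCycleCompensator:
          """Illustrative sketch: nudge the nominal duty cycle with a correction
          accumulated from the input power error (integral-style action).
          Gain and clamp limits are assumed values."""
          def __init__(self, gain=0.01, d_min=0.05, d_max=0.95):
              self.gain = gain
              self.d_min, self.d_max = d_min, d_max
              self.correction = 0.0

          def update(self, nominal_duty, input_power_error):
              # accumulate a correction influenced by the input power error
              self.correction += self.gain * input_power_error
              duty = nominal_duty + self.correction
              return min(self.d_max, max(self.d_min, duty))

      comp = DutyCycleCompensator()
      print(comp.update(nominal_duty=0.5, input_power_error=0.2))   # slightly above 0.5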

  18. Aerosol Optical Retrieval and Surface Reflectance from Airborne Remote Sensing Data over Land

    PubMed Central

    Bassani, Cristiana; Cavalli, Rosa Maria; Pignatti, Stefano

    2010-01-01

    Quantitative analysis of atmospheric optical properties and surface reflectance can be performed by applying radiative transfer theory in the Atmosphere-Earth coupled system, for the atmospheric correction of hyperspectral remote sensing data. This paper describes a new physically-based algorithm to retrieve the aerosol optical thickness at 550nm (τ550) and the surface reflectance (ρ) from airborne acquired data in the atmospheric window of the Visible and Near-Infrared (VNIR) range. The algorithm is realized in two modules. Module A retrieves τ550 with a minimization algorithm, then Module B retrieves the surface reflectance ρ for each pixel of the image. The method was tested on five remote sensing images acquired by an airborne sensor under different geometric conditions to evaluate the reliability of the method. The results, τ550 and ρ, retrieved from each image were validated with field data contemporaneously acquired by a sun-sky radiometer and a spectroradiometer, respectively. Good correlation index, r, and low root mean square deviations, RMSD, were obtained for the τ550 retrieved by Module A (r2 = 0.75, RMSD = 0.08) and the ρ retrieved by Module B (r2 ≤ 0.9, RMSD ≤ 0.003). Overall, the results are encouraging, indicating that the method is reliable for optical atmospheric studies and the atmospheric correction of airborne hyperspectral images. The method does not require additional at-ground measurements about at-ground reflectance of the reference pixel and aerosol optical thickness. PMID:22163558

  19. A high throughput architecture for a low complexity soft-output demapping algorithm

    NASA Astrophysics Data System (ADS)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and are therefore part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in double iterative systems, for instance, MIMO and code-aided synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, that delivers a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
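
    One widely used low-complexity demapper of the kind surveyed here is the max-log approximation, in which each bit LLR is the difference between the minimum squared distances to constellation points carrying a 1 and a 0 for that bit. The sketch below shows it for Gray-mapped 16-QAM; it is a generic reference implementation, not necessarily the specific algorithm selected in the paper.

      import numpy as np
      from itertools import product

      # Gray-mapped 16-QAM: bit pair -> amplitude level, unit average symbol energy
      _LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
      CONSTELLATION = {bits: complex(_LEVELS[bits[:2]], _LEVELS[bits[2:]]) / np.sqrt(10)
                       for bits in product((0, 1), repeat=4)}

      def max_log_llrs(y, noise_var):
          """Max-log LLRs for one received 16-QAM symbol y:
          LLR(b_i) = (min_{s: b_i=1} |y-s|^2 - min_{s: b_i=0} |y-s|^2) / noise_var."""
          llrs = []
          for i in range(4):
              d0 = min(abs(y - s) ** 2 for b, s in CONSTELLATION.items() if b[i] == 0)
              d1 = min(abs(y - s) ** 2 for b, s in CONSTELLATION.items() if b[i] == 1)
              llrs.append((d1 - d0) / noise_var)
          return llrs

      print(max_log_llrs(0.9 + 0.3j, noise_var=0.1))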

  20. An integrated model for the assessment of global water resources - Part 2: Anthropogenic activities modules and assessments

    NASA Astrophysics Data System (ADS)

    Hanasaki, N.; Kanae, S.; Oki, T.; Shirakawa, N.

    2007-10-01

    To assess global water resources from the perspective of subannual variation in water resources and water use, an integrated water resources model was developed. In a companion report, we presented the global meteorological forcing input used to drive the model and two natural hydrological cycle modules, namely, the land surface hydrology module and the river routing module. Here, we present the remaining four modules, which represent anthropogenic activities: a crop growth module, a reservoir operation module, an environmental flow requirement module, and an anthropogenic withdrawal module. In addition, we discuss the results of a global water resources assessment using the integrated model. The crop growth module is a relatively simple model based on heat unit theory and potential biomass and harvest index concepts. The performance of the crop growth module was examined extensively because agricultural water comprises approximately 70% of total water withdrawal in the world. The estimated crop calendar showed good agreement with earlier reports for wheat, maize, and rice in major countries of production. The estimated irrigation water withdrawal also showed fair agreement with country statistics, but tended to underestimate countries in the Asian monsoon region. In the reservoir operation module, 452 major reservoirs with more than 1 km³ each of storage capacity store and release water according to their own rules of operation. Operating rules were determined for each reservoir using an algorithm that used currently available global data such as reservoir storage capacity, intended purposes, simulated inflow, and water demand in the lower reaches. The environmental flow requirement module was newly developed based on case studies from around the world. The integrated model closes both energy and water balances on land surfaces. Global water resources were assessed on a subannual basis using a newly devised index that locates water-stressed regions that were undetected in earlier studies. These regions, which are indicated by a gap in the subannual distribution of water resources and water use, include the Sahel, the Asian monsoon region, and southern Africa. The integrated model is applicable to assess various global environmental projections such as climate change.

  1. The OpenGL visualization of the 2D parallel FDTD algorithm

    NASA Astrophysics Data System (ADS)

    Walendziuk, Wojciech

    2005-02-01

    This paper presents a visualization approach for a two-dimensional version of a parallel FDTD algorithm. The visualization module was created on the basis of the OpenGL graphics standard with the use of the GLUT interface. In addition, the work includes efficiency results for the parallel algorithm in the form of speedup charts.

  2. Predictive Scheduling for Electric Vehicles Considering Uncertainty of Load and User Behaviors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bin; Huang, Rui; Wang, Yubo

    2016-05-02

    Un-coordinated Electric Vehicle (EV) charging can create unexpected load in the local distribution grid, which may degrade power quality and system reliability. The uncertainty of EV load, user behaviors and other base load in the distribution grid is one of the challenges that impede optimal control of EV charging. Previous research did not fully solve this problem due to the lack of real-world EV charging data and proper stochastic models to describe these behaviors. In this paper, we propose a new predictive EV scheduling algorithm (PESA) inspired by Model Predictive Control (MPC), which includes a dynamic load estimation module and a predictive optimization module. The user-related EV load and base load are dynamically estimated based on historical data. At each time interval, the predictive optimization program is computed for optimal schedules given the estimated parameters, and only the first element of the algorithm outputs is implemented, following the MPC paradigm. The current-multiplexing function in each Electric Vehicle Supply Equipment (EVSE) is considered, and accordingly a virtual load is modeled to handle the uncertainties of future EV energy demands. The system is validated with real-world EV charging data collected on the UCLA campus, and the experimental results indicate that our proposed model not only reduces load variation by up to 40% but also maintains a high level of robustness. Finally, the IEC 61850 standard is utilized to standardize the data models involved, which supports more reliable and large-scale implementation.
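
    The receding-horizon structure of such a scheduler can be illustrated with a toy problem: at each interval, plan charging over the forecast horizon (here by greedily filling load valleys, a simplified stand-in for the paper's optimization program) and implement only the first interval's decision, as in MPC. All parameters below are assumed.

      import numpy as np

      def plan_charging(base_load_forecast, ev_energy_needed, max_rate):
          """Spread the remaining EV energy over the horizon so the combined load is
          as flat as possible (greedy valley filling). Illustrative stand-in for the
          predictive optimization module."""
          schedule = np.zeros(len(base_load_forecast))
          total = np.array(base_load_forecast, dtype=float)
          remaining = ev_energy_needed
          while remaining > 1e-9:
              open_slots = np.where(schedule < max_rate - 1e-12)[0]
              if len(open_slots) == 0:
                  break
              t = open_slots[np.argmin(total[open_slots])]   # charge in the deepest valley
              step = min(max_rate - schedule[t], remaining, 0.1)
              schedule[t] += step
              total[t] += step
              remaining -= step
          return schedule

      def mpc_step(base_load_forecast, ev_energy_needed, max_rate):
          """Receding-horizon step: plan over the whole horizon but, as in MPC,
          implement only the first interval's charging rate."""
          return plan_charging(base_load_forecast, ev_energy_needed, max_rate)[0]

      print(mpc_step([5.0, 3.0, 4.0, 6.0], ev_energy_needed=2.0, max_rate=1.0))  # 0.0: first slot too loaded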

  3. Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    2000-01-01

    This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than that from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate for the attenuation coefficient function of K2 HPO4 solution.

  4. KIRMES: kernel-based identification of regulatory modules in euchromatic sequences.

    PubMed

    Schultheiss, Sebastian J; Busch, Wolfgang; Lohmann, Jan U; Kohlbacher, Oliver; Rätsch, Gunnar

    2009-08-15

    Understanding transcriptional regulation is one of the main challenges in computational biology. An important problem is the identification of transcription factor (TF) binding sites in promoter regions of potential TF target genes. It is typically approached by position weight matrix-based motif identification algorithms using Gibbs sampling, or heuristics to extend seed oligos. Such algorithms succeed in identifying single, relatively well-conserved binding sites, but tend to fail when it comes to the identification of combinations of several degenerate binding sites, as those often found in cis-regulatory modules. We propose a new algorithm that combines the benefits of existing motif finding with the ones of support vector machines (SVMs) to find degenerate motifs in order to improve the modeling of regulatory modules. In experiments on microarray data from Arabidopsis thaliana, we were able to show that the newly developed strategy significantly improves the recognition of TF targets. The python source code (open source-licensed under GPL), the data for the experiments and a Galaxy-based web service are available at http://www.fml.mpg.de/raetsch/suppl/kirmes/.

  5. Indoor high precision three-dimensional positioning system based on visible light communication using modified genetic algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Guan, Weipeng; Li, Simin; Wu, Yuxiang

    2018-04-01

    To improve the precision of indoor positioning and realize three-dimensional positioning, a reversed indoor positioning system based on visible light communication (VLC) using a genetic algorithm (GA) is proposed. In order to solve the problem of interference between signal sources, CDMA modulation is used: each light-emitting diode (LED) in the system broadcasts a unique identity (ID) code using CDMA modulation. The receiver receives a mixed signal from every LED reference point; by the orthogonality of the spreading codes in CDMA modulation, the ID information and intensity attenuation information from every LED can be obtained. According to the positioning principle of received signal strength (RSS), the coordinates of the receiver can be determined. Due to system noise and imperfections of the devices used in the system, the distances between the receiver and the transmitters deviate from their real values, resulting in positioning error. By introducing error correction factors into the global parallel search of the genetic algorithm, the coordinates of the receiver in three-dimensional space can be determined precisely. Both simulation and experimental results show that, in practical application scenarios, the proposed positioning system can realize high-precision positioning service.
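
    The positioning step reduces to minimizing the mismatch between RSS-derived distances and the geometric distances from a candidate receiver position to the known LED coordinates. The sketch below uses SciPy's differential evolution as a stand-in global optimizer for the modified GA described above; the LED layout and distance estimates are illustrative assumptions.

      import numpy as np
      from scipy.optimize import differential_evolution

      # Assumed LED anchor coordinates (m) and RSS-derived distance estimates (m)
      LEDS = np.array([[0.5, 0.5, 3.0], [0.5, 4.5, 3.0], [4.5, 0.5, 3.0], [4.5, 4.5, 3.0]])
      EST_DIST = np.array([2.9, 3.5, 3.5, 4.1])

      def cost(p):
          """Sum of squared mismatches between RSS-derived and geometric distances
          from candidate receiver position p to each LED."""
          return float(np.sum((np.linalg.norm(LEDS - p, axis=1) - EST_DIST) ** 2))

      # Global search over the room volume (stand-in for the modified GA)
      result = differential_evolution(cost, bounds=[(0, 5), (0, 5), (0, 2)], seed=1)
      print(result.x)   # estimated receiver coordinates, roughly (2, 2, 1) here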

  6. Using experimental data to test an n-body dynamical model coupled with an energy-based clusterization algorithm at low incident energies

    NASA Astrophysics Data System (ADS)

    Kumar, Rohit; Puri, Rajeev K.

    2018-03-01

    Employing the quantum molecular dynamics (QMD) approach for nucleus-nucleus collisions, we test the predictive power of the energy-based clusterization algorithm, i.e., the simulated annealing clusterization algorithm (SACA), to describe the experimental data of charge distribution and various event-by-event correlations among fragments. The calculations are constrained to the Fermi-energy domain and/or mildly excited nuclear matter. Our detailed study spans different system masses and system-mass asymmetries of colliding partners, and it shows the importance of the energy-based clusterization algorithm for understanding multifragmentation. The present calculations are also compared with other available calculations that use one-body models, statistical models, and/or hybrid models.

  7. A data driven approach for condition monitoring of wind turbine blade using vibration signals through best-first tree algorithm and functional trees algorithm: A comparative study.

    PubMed

    Joshuva, A; Sugumaran, V

    2017-03-01

    Wind energy is one of the important renewable energy resources available in nature. It is a major resource for energy production because of its dependability, the maturity of the technology, and its relatively low cost. Wind energy is converted into electrical energy using rotating blades. Due to environmental conditions and the large structure, the blades are subjected to various vibration forces that may cause damage to them. This leads to a loss in energy production and to turbine shutdown. The downtime can be reduced when the blades are diagnosed continuously using structural health condition monitoring. This is considered as a pattern recognition problem which consists of three phases, namely feature extraction, feature selection, and feature classification. In this study, statistical features were extracted from vibration signals, feature selection was carried out using a J48 decision tree algorithm, and feature classification was performed using the best-first tree algorithm and the functional trees algorithm. The better-performing algorithm is suggested for fault diagnosis of wind turbine blades. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Low-cost, digital lock-in module with external reference for coating glass transmission/reflection spectrophotometer

    NASA Astrophysics Data System (ADS)

    Alonso, R.; Villuendas, F.; Borja, J.; Barragán, L. A.; Salinas, I.

    2003-05-01

    A versatile, low-cost, digital signal processor (DSP) based lock-in module with external reference is described. This module is used to implement an industrial spectrophotometer for measuring spectral transmission and reflection of automotive and architectonic coating glasses over the ultraviolet, visible and near-infrared wavelength range. The light beams are modulated with an optical chopper. A digital phase-locked loop (DPLL) is used to lock the lock-in to the chop frequency. The lock-in rejects the ambient radiation and permits the spectrophotometer to work in the presence of ambient light. The algorithm that implements the dual lock-in and the DPLL in the DSP56002 evaluation module from Motorola is described. The use of a DSP allows implementation of the lock-in and DPLL by software, which gives flexibility and programmability to the system. Lock-in module cost, under 300 euro, is an important parameter taking into account that two modules are used in the system. Besides, the algorithms implemented in this DSP can be directly implemented in the latest DSP generations. The DPLL performance and the spectrophotometer are characterized. Capture and lock DPLL ranges have been measured and checked to be greater than the chop frequency drifts. The lock-in measured frequency response shows that the lock-in performs as theoretically predicted.
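
    The core dual-phase lock-in operation is simple to sketch: mix the chopped signal with in-phase and quadrature references at the chop frequency and low-pass (here, simply average), which rejects the ambient (DC and off-frequency) light. In the instrument the reference frequency is tracked by the DPLL; in this illustration it is assumed known, and all signal parameters are made up.

      import numpy as np

      def dual_phase_lockin(signal, fs, f_ref):
          """Dual-phase digital lock-in sketch: mix with in-phase and quadrature
          references at the chop frequency and average (a crude low-pass).
          The reference frequency is assumed known here rather than DPLL-tracked."""
          t = np.arange(len(signal)) / fs
          i = np.mean(signal * np.cos(2 * np.pi * f_ref * t))
          q = np.mean(signal * np.sin(2 * np.pi * f_ref * t))
          return 2 * np.hypot(i, q)             # recovered amplitude of the chopped beam

      # Chopped optical signal (amplitude 0.8) buried in ambient light and noise
      fs, f_chop = 10_000, 400
      t = np.arange(fs) / fs
      sig = 0.8 * np.cos(2 * np.pi * f_chop * t) + 5.0 + 0.3 * np.random.randn(fs)
      print(dual_phase_lockin(sig, fs, f_chop))  # ~0.8, with the ambient offset rejected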

  9. An improved PSO-SVM model for online recognition defects in eddy current testing

    NASA Astrophysics Data System (ADS)

    Liu, Baoling; Hou, Dibo; Huang, Pingjie; Liu, Banteng; Tang, Huayi; Zhang, Wubo; Chen, Peihua; Zhang, Guangxin

    2013-12-01

    Accurate and rapid recognition of defects is essential for structural integrity and health monitoring of in-service devices using eddy current (EC) non-destructive testing. This paper introduces a novel model-free method that includes three main modules: a signal pre-processing module, a classifier module and an optimisation module. In the signal pre-processing module, a two-stage differential structure is proposed to suppress the lift-off fluctuation that can contaminate the EC signal. In the classifier module, a multi-class support vector machine (SVM) based on the one-against-one strategy is utilised for its good accuracy. In the optimisation module, the optimal parameters of the classifier are obtained by an improved particle swarm optimisation (IPSO) algorithm. The proposed IPSO technique improves the convergence performance of the basic PSO through the following strategies: nonlinear processing of the inertia weight, and introduction of a black-hole and simulated-annealing model with extremum disturbance. The good generalisation ability of the IPSO-SVM model has been validated by adding additional specimens to the testing set. Experiments show that the proposed algorithm can achieve higher recognition accuracy and efficiency than other well-known classifiers, and the advantages are more pronounced with smaller training sets, which contributes to online application.
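
    To illustrate one ingredient of the IPSO mentioned above, the sketch below implements a basic PSO with a nonlinearly decreasing inertia weight on a toy objective. The weight schedule, bounds, and objective are assumptions; the black-hole and simulated-annealing extensions and the SVM hyperparameter search are not reproduced.

        # Basic PSO with a nonlinearly decreasing inertia weight (illustrative only).
        import numpy as np

        def objective(x):                       # toy objective (sphere function)
            return np.sum(x**2, axis=1)

        rng = np.random.default_rng(1)
        n_particles, dim, iters = 30, 2, 100
        pos = rng.uniform(-5, 5, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), objective(pos)
        gbest = pbest[np.argmin(pbest_val)].copy()

        w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0
        for k in range(iters):
            w = w_min + (w_max - w_min) * (1 - k / iters) ** 2   # nonlinear inertia decay
            r1, r2 = rng.random((2, n_particles, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            val = objective(pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()

        print("best value found:", pbest_val.min())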

  10. Low-energy electron inelastic mean free paths for liquid water

    NASA Astrophysics Data System (ADS)

    Nguyen-Truong, Hieu T.

    2018-04-01

    We improve the Mermin–Penn algorithm (MPA) for determining the energy loss function (ELF) within the dielectric formalism. The present algorithm is applicable not only to real metals, but also to materials that have an energy gap in the excitation spectrum. Applying the improved MPA to liquid water, we show that the present algorithm is able to address the ELF overestimation at the energy gap, and the calculated results are in good agreement with experimental data.

  11. An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks

    PubMed Central

    Mustapha, Ibrahim; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A.; Sali, Aduwati; Mohamad, Hafizal

    2015-01-01

    It is well known that clustering partitions a network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While energy efficiency has been well investigated in conventional wireless sensor networks, it has not been extensively explored in cognitive radio sensor networks. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs for neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm, and the obtained simulation results show convergence, learning and adaptability of the algorithm to a dynamic environment towards achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach. PMID:26287191
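
    The sketch below is a heavily simplified, single-state Q-learning illustration of a member node choosing among candidate clusters by learned cost; the per-cluster costs, reward shaping, and parameters are invented for illustration and do not reproduce the paper's MDP formulation or its energy model.

        # Toy Q-learning sketch of a member node choosing among candidate clusters.
        import numpy as np

        rng = np.random.default_rng(2)
        n_clusters = 4
        # Hypothetical per-cluster costs: energy cost + cooperative-sensing cost.
        energy_cost = rng.uniform(0.2, 1.0, n_clusters)
        sensing_cost = rng.uniform(0.1, 0.5, n_clusters)

        Q = np.zeros(n_clusters)                 # single-state problem: one Q-value per action
        alpha, epsilon, episodes = 0.1, 0.2, 500

        for _ in range(episodes):
            # epsilon-greedy exploration over candidate clusters
            a = rng.integers(n_clusters) if rng.random() < epsilon else int(np.argmax(Q))
            # reward: negative noisy total cost (lower cost -> higher reward)
            reward = -(energy_cost[a] + sensing_cost[a] + rng.normal(0, 0.05))
            Q[a] += alpha * (reward - Q[a])      # stateless Q-learning update

        print("learned Q-values:", np.round(Q, 3))
        print("selected cluster:", int(np.argmax(Q)))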

  12. An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks.

    PubMed

    Mustapha, Ibrahim; Mohd Ali, Borhanuddin; Rasid, Mohd Fadlee A; Sali, Aduwati; Mohamad, Hafizal

    2015-08-13

    It is well known that clustering partitions a network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While energy efficiency has been well investigated in conventional wireless sensor networks, it has not been extensively explored in cognitive radio sensor networks. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs for neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm, and the obtained simulation results show convergence, learning and adaptability of the algorithm to a dynamic environment towards achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach.

  13. Modulation-format-free and automatic bias control for optical IQ modulators based on dither-correlation detection.

    PubMed

    Li, Xiaolei; Deng, Lei; Chen, Xiaoman; Cheng, Mengfan; Fu, Songnian; Tang, Ming; Liu, Deming

    2017-04-17

    A novel automatic bias control (ABC) method for an optical in-phase and quadrature (IQ) modulator is proposed and experimentally demonstrated. In the proposed method, two different low-frequency sine wave dither signals are generated and added to the I and Q bias signals, respectively. Instead of monitoring the power of the harmonics of the dither signal, dither-correlation detection is proposed and used to adjust the bias voltages of the optical IQ modulator. In this way, not only is frequency spectral analysis unnecessary, but directional bias adjustment can also be realized, reducing the complexity and increasing the convergence rate of the ABC algorithm. The results show that the sensitivity of the proposed ABC method outperforms that of the traditional dither frequency monitoring method. Moreover, the proposed ABC method is shown to be modulation-format-free, and the transmission penalty caused by this method is negligible in our experiment for both 10 Gb/s optical QPSK and 17.9 Gb/s optical 16QAM-OFDM signal transmission.

  14. Modular Aero-Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Parker, Khary I.; Guo, Ten-Huei

    2006-01-01

    The Modular Aero-Propulsion System Simulation (MAPSS) is a graphical simulation environment designed for the development of advanced control algorithms and rapid testing of these algorithms on a generic computational model of a turbofan engine and its control system. MAPSS is a nonlinear, non-real-time simulation comprising a Component Level Model (CLM) module and a Controller-and-Actuator Dynamics (CAD) module. The CLM module simulates the dynamics of engine components at a sampling rate of 2,500 Hz. The controller submodule of the CAD module simulates a digital controller, which has a typical update rate of 50 Hz. The sampling rate for the actuators in the CAD module is the same as that of the CLM. MAPSS provides a graphical user interface that affords easy access to engine-operation, engine-health, and control parameters; is used to enter such input model parameters as power lever angle (PLA), Mach number, and altitude; and can be used to change controller and engine parameters. Output variables are selectable by the user. Output data as well as any changes to constants and other parameters can be saved and reloaded into the GUI later.

  15. Multiobjective Particle Swarm Optimization for the optimal design of photovoltaic grid-connected systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kornelakis, Aris

    2010-12-15

    Particle Swarm Optimization (PSO) is a highly efficient evolutionary optimization algorithm. In this paper a multiobjective optimization algorithm based on PSO applied to the optimal design of photovoltaic grid-connected systems (PVGCSs) is presented. The proposed methodology intends to suggest the optimal number of system devices and the optimal PV module installation details, such that the economic and environmental benefits achieved during the system's operational lifetime period are both maximized. The objective function describing the economic benefit of the proposed optimization process is the lifetime system's total net profit, which is calculated according to the method of the Net Present Value (NPV). The second objective function, which corresponds to the environmental benefit, equals the pollutant gas emissions avoided due to the use of the PVGCS. The optimization's decision variables are the optimal number of PV modules, the PV modules' optimal tilt angle, the optimal placement of the PV modules within the available installation area and the optimal distribution of the PV modules among the DC/AC converters. (author)

  16. Obstacle Detection Algorithms for Aircraft Navigation: Performance Characterization of Obstacle Detection Algorithms for Aircraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Coraor, Lee

    2000-01-01

    The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.

  17. Evolving cell models for systems and synthetic biology.

    PubMed

    Cao, Hongqing; Romero-Campero, Francisco J; Heeb, Stephan; Cámara, Miguel; Krasnogor, Natalio

    2010-03-01

    This paper proposes a new methodology for the automated design of cell models for systems and synthetic biology. Our modelling framework is based on P systems, a discrete, stochastic and modular formal modelling language. The automated design of biological models comprising the optimization of the model structure and its stochastic kinetic constants is performed using an evolutionary algorithm. The evolutionary algorithm evolves model structures by combining different modules taken from a predefined module library and then it fine-tunes the associated stochastic kinetic constants. We investigate four alternative objective functions for the fitness calculation within the evolutionary algorithm: (1) equally weighted sum method, (2) normalization method, (3) randomly weighted sum method, and (4) equally weighted product method. The effectiveness of the methodology is tested on four case studies of increasing complexity including negative and positive autoregulation as well as two gene networks implementing a pulse generator and a bandwidth detector. We provide a systematic analysis of the evolutionary algorithm's results as well as of the resulting evolved cell models.

  18. An algorithm for modularization of MAPK and calcium signaling pathways: comparative analysis among different species.

    PubMed

    Nayak, Losiana; De, Rajat K

    2007-12-01

    Signaling pathways are large, complex biochemical networks. It is difficult to analyze the underlying mechanism of such networks as a whole. In the present article, we have proposed an algorithm for modularization of signal transduction pathways. Rather than studying a signaling pathway as a whole, this enables one to study the individual modules (smaller, less complex units) easily and hence to understand the entire pathway better. A comparative study of modules belonging to different species (for the same signaling pathway) has been made, which gives an overall picture of how the calcium and MAPK signaling pathways have developed across the chosen set of species. The superior performance, in terms of biological significance, of the proposed algorithm over an existing community finding algorithm of Newman [Newman MEJ. Modularity and community structure in networks. Proc Natl Acad Sci USA 2006;103(23):8577-82] has been demonstrated using the aforesaid pathways of H. sapiens.

  19. Assessment of metal ion concentration in water with structured feature selection.

    PubMed

    Naula, Pekka; Airola, Antti; Pihlasalo, Sari; Montoya Perez, Ileana; Salakoski, Tapio; Pahikkala, Tapio

    2017-10-01

    We propose a cost-effective system for the determination of metal ion concentration in water, addressing a central issue in water resources management. The system combines novel luminometric label array technology with a machine learning algorithm that selects a minimal number of array reagents (modulators) and liquid sample dilutions that enable accurate quantification. The algorithm is able to identify the optimal modulators and sample dilutions, leading to cost reductions since less manual labour and fewer resources are needed. Inferring the ion detector involves a unique type of structured feature selection problem, which we formalize in this paper. We propose a novel Cartesian greedy forward feature selection algorithm for solving the problem. The novel algorithm was evaluated in the concentration assessment of five metal ions and the performance was compared to two known feature selection approaches. The results demonstrate that the proposed system can assist in lowering the costs with minimal loss in accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
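
    The following sketch shows a plain greedy forward feature selection loop driven by cross-validated regression score. It only illustrates the greedy forward principle; the paper's method is a structured "Cartesian" variant over modulator/dilution pairs, and the data, model, and scoring here are assumptions.

        # Plain greedy forward feature selection sketch (unstructured, illustrative only).
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        def greedy_forward_selection(X, y, n_select):
            selected, remaining = [], list(range(X.shape[1]))
            for _ in range(n_select):
                scores = []
                for j in remaining:
                    cols = selected + [j]
                    score = cross_val_score(Ridge(), X[:, cols], y, cv=3).mean()
                    scores.append((score, j))
                best_score, best_j = max(scores)   # keep the feature that helps most
                selected.append(best_j)
                remaining.remove(best_j)
            return selected

        rng = np.random.default_rng(3)
        X = rng.normal(size=(60, 10))                       # hypothetical measurements
        y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.1 * rng.normal(size=60)
        print("selected feature indices:", greedy_forward_selection(X, y, 2))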

  20. Meal Detection in Patients With Type 1 Diabetes: A New Module for the Multivariable Adaptive Artificial Pancreas Control System.

    PubMed

    Turksoy, Kamuran; Samadi, Sediqeh; Feng, Jianyuan; Littlejohn, Elizabeth; Quinn, Laurie; Cinar, Ali

    2016-01-01

    A novel meal-detection algorithm is developed based on continuous glucose measurements. Bergman's minimal model is modified and used in an unscented Kalman filter for state estimations. The estimated rate of appearance of glucose is used for meal detection. Data from nine subjects are used to assess the performance of the algorithm. The results indicate that the proposed algorithm works successfully with high accuracy. The average change in glucose levels between the meals and the detection points is 16(±9.42) [mg/dl] for 61 successfully detected meals and snacks. The algorithm is developed as a new module of an integrated multivariable adaptive artificial pancreas control system. Meal detection with the proposed method is used to administer insulin boluses and prevent most of postprandial hyperglycemia without any manual meal announcements. A novel meal bolus calculation method is proposed and tested with the UVA/Padova simulator. The results indicate significant reduction in hyperglycemia.

  1. Scheduling for energy and reliability management on multiprocessor real-time systems

    NASA Astrophysics Data System (ADS)

    Qi, Xuan

    Scheduling algorithms for multiprocessor real-time systems have been studied for years, with many well-recognized algorithms proposed. However, it is still an evolving research area and many problems remain open due to their intrinsic complexities. With the emergence of multicore processors, it is necessary to re-investigate the scheduling problems and design/develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. As commonly deployed energy-saving techniques (e.g., dynamic voltage and frequency scaling (DVFS)) can significantly affect system reliability, we study schedulers that have intelligent mechanisms to recuperate system reliability and satisfy quality assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms on reduction of scheduling overhead, energy saving, and reliability improvement. The simulation results show that the proposed reliability-aware power management schemes could preserve the system reliability while still achieving substantial energy saving.

  2. A novel material detection algorithm based on 2D GMM-based power density function and image detail addition scheme in dual energy X-ray images.

    PubMed

    Pourghassem, Hossein

    2012-01-01

    Material detection is a vital need in dual-energy X-ray luggage inspection systems for security at airports and strategic sites. In this paper, a novel material detection algorithm based on statistical trainable models using the 2-dimensional power density function (PDF) of three material categories in dual-energy X-ray images is proposed. In this algorithm, the PDF of each material category is estimated as a statistical model from the transmission measurement values of the low- and high-energy X-ray images using Gaussian Mixture Models (GMM). The material label of each object pixel is determined based on the probability of its low- and high-energy transmission measurement values under the PDFs of the three material categories (metallic, organic and mixed materials). The performance of the material detection algorithm is improved by a maximum-voting scheme over an image neighborhood as a post-processing stage. As a pre-processing procedure, the high- and low-energy X-ray images are enhanced using background removal and denoising stages. To improve the discrimination capability of the proposed algorithm, the details of the low- and high-energy X-ray images are added to a constructed color image that uses three colors (orange, blue and green) to represent the organic, metallic and mixed materials. The proposed algorithm is evaluated on real images captured from a commercial dual-energy X-ray luggage inspection system. The results show that the proposed algorithm is effective in detecting metallic, organic and mixed materials with acceptable accuracy.
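
    As a minimal sketch of the statistical-model idea, the code below fits one Gaussian mixture per material category on synthetic (low-energy, high-energy) transmission pairs and labels pixels by maximum likelihood. The cluster centers, number of components, and use of scikit-learn are assumptions; the paper's pre- and post-processing stages are omitted.

        # One GMM per material category; pixels labeled by maximum likelihood (synthetic data).
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        # Hypothetical training clusters for organic, metallic and mixed materials.
        train = {
            "organic":  rng.normal([0.7, 0.8], 0.05, (500, 2)),
            "metallic": rng.normal([0.2, 0.4], 0.05, (500, 2)),
            "mixed":    rng.normal([0.45, 0.6], 0.05, (500, 2)),
        }
        models = {name: GaussianMixture(n_components=2, random_state=0).fit(data)
                  for name, data in train.items()}

        def classify(pixels):
            """Assign each (low, high) transmission pair to the most likely material."""
            names = list(models)
            loglik = np.column_stack([models[n].score_samples(pixels) for n in names])
            return [names[i] for i in np.argmax(loglik, axis=1)]

        test = np.array([[0.68, 0.79], [0.21, 0.42], [0.46, 0.61]])
        print(classify(test))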

  3. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    PubMed Central

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
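
    For reference, the centroid (center-of-mass) peak estimator mentioned above reduces to a weighted average of wavelength by reflected power. The sketch below applies it to a synthetic Gaussian FBG peak; the spectrum, noise level, and threshold are assumptions.

        # Centroid peak detection sketch for a synthetic FBG reflection spectrum.
        import numpy as np

        wavelength = np.linspace(1549.0, 1551.0, 401)          # nm
        true_peak = 1550.12
        spectrum = np.exp(-((wavelength - true_peak) / 0.08) ** 2)
        spectrum += 0.01 * np.random.default_rng(5).normal(size=wavelength.size)

        # Only use samples above a threshold to limit the influence of baseline noise.
        mask = spectrum > 0.2 * spectrum.max()
        centroid = np.sum(wavelength[mask] * spectrum[mask]) / np.sum(spectrum[mask])
        print(f"estimated Bragg wavelength: {centroid:.4f} nm (true {true_peak} nm)")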

  4. Energy-driven scheduling algorithm for nanosatellite energy harvesting maximization

    NASA Astrophysics Data System (ADS)

    Slongo, L. K.; Martínez, S. V.; Eiterer, B. V. B.; Pereira, T. G.; Bezerra, E. A.; Paiva, K. V.

    2018-06-01

    The number of tasks that a satellite may execute in orbit is strongly related to the amount of energy its Electrical Power System (EPS) is able to harvest and to store. The manner in which the stored energy is distributed within the satellite also has a great impact on the CubeSat's overall efficiency. Most CubeSat EPS designs do not prioritize energy constraints in their formulation. In contrast, this work proposes an innovative energy-driven scheduling algorithm based on an energy harvesting maximization policy. The energy harvesting circuit is mathematically modeled and the solar panel I-V curves are presented for different temperature and irradiance levels. Considering the models and simulations, the scheduling algorithm is designed to keep the solar panels working close to their maximum power point by triggering tasks appropriately. Task execution affects the battery voltage, which is coupled to the solar panels through a protection circuit. A software-based Perturb and Observe strategy defines the tasks to be triggered. The scheduling algorithm is tested in FloripaSat, which is a 1U CubeSat. A test apparatus is proposed to emulate solar irradiance variation, considering the satellite movement around the Earth. Tests have been conducted to show that the scheduling algorithm improves the CubeSat energy harvesting capability by 4.48% in a three-orbit experiment and up to 8.46% in a single-orbit cycle in comparison with the CubeSat operating without the scheduling algorithm.
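
    The Perturb and Observe idea referenced above can be sketched as a simple hill-climbing loop on a power-voltage curve: keep perturbing in the same direction while power increases, reverse otherwise. The toy curve, step size, and iteration count below are assumptions; in the paper the "perturbation" is realized by triggering tasks that load the bus rather than by directly stepping a voltage.

        # Perturb and Observe sketch on a toy PV power-voltage curve.
        def pv_power(v):
            """Toy P-V curve with a maximum near v = 16 V."""
            return max(0.0, 40.0 - 0.16 * (v - 16.0) ** 2)

        v, step = 12.0, 0.2
        v_prev, p_prev = v, pv_power(v)
        v += step                                    # initial perturbation
        for _ in range(100):
            p = pv_power(v)
            # If power increased in the direction of the perturbation, keep going; else reverse.
            direction = 1.0 if (p - p_prev) * (v - v_prev) > 0 else -1.0
            v_prev, p_prev = v, p
            v += direction * step
        print(f"operating point = {v_prev:.2f} V, power = {p_prev:.2f} W")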

  5. Blind adaptive equalization of polarization-switched QPSK modulation.

    PubMed

    Millar, David S; Savory, Seb J

    2011-04-25

    Coherent detection in combination with digital signal processing has recently enabled significant progress in the capacity of optical communications systems. This improvement has enabled detection of optimum constellations for optical signals in four dimensions. In this paper, we propose and investigate an algorithm for the blind adaptive equalization of one such modulation format: polarization-switched quaternary phase shift keying (PS-QPSK). The proposed algorithm, which includes both blind initialization and adaptation of the equalizer, is found to be insensitive to the input polarization state and demonstrates highly robust convergence in the presence of PDL, DGD and polarization rotation.

  6. Investigation of reliability indicators of information analysis systems based on Markov’s absorbing chain model

    NASA Astrophysics Data System (ADS)

    Gilmanshin, I. R.; Kirpichnikov, A. P.

    2017-09-01

    A study of the functioning algorithm of the early excessive-loss detection module proves that it can be modeled using absorbing Markov chains. Of particular interest is the study of the probabilistic characteristics of this algorithm, in order to identify the relationship between the reliability indicators of individual elements, or the probability of occurrence of certain events, and the likelihood of transmitting reliable information. The relations identified during the analysis make it possible to set threshold reliability characteristics for the system components.
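
    For an absorbing Markov chain, the quantities of interest (expected steps to absorption, absorption probabilities) follow from the fundamental matrix N = (I - Q)^-1, where Q is the transient-to-transient block of the transition matrix. The small chain below is purely illustrative and is not taken from the paper.

        # Absorbing Markov chain sketch: fundamental matrix, expected steps, absorption probabilities.
        import numpy as np

        # Transition matrix ordered as [transient t1, t2, t3 | absorbing a1, a2].
        P = np.array([
            [0.2, 0.5, 0.1, 0.1, 0.1],
            [0.0, 0.3, 0.4, 0.2, 0.1],
            [0.1, 0.1, 0.2, 0.3, 0.3],
            [0.0, 0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 0.0, 1.0],
        ])
        Q, R = P[:3, :3], P[:3, 3:]
        N = np.linalg.inv(np.eye(3) - Q)      # fundamental matrix
        expected_steps = N.sum(axis=1)        # expected number of steps before absorption
        B = N @ R                             # absorption probabilities per absorbing state
        print("expected steps to absorption:", np.round(expected_steps, 3))
        print("absorption probabilities:\n", np.round(B, 3))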

  7. Design and dosimetry of a few leaf electron collimator for energy modulated electron therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Yahya, Khalid; Verhaegen, Frank; Seuntjens, Jan

    2007-12-15

    Despite the capability of energy modulated electron therapy (EMET) to achieve highly conformal dose distributions in superficial targets, it has not been widely implemented due to problems inherent in electron beam radiotherapy, such as planning and dosimetry accuracy and verification, as well as a lack of systems for automated delivery. In previous work we proposed a novel technique to deliver EMET using an automated 'few leaf electron collimator' (FLEC) that consists of four motor-driven leaves that fit in a standard clinical electron beam applicator. Integrated with a Monte Carlo based optimization algorithm that utilizes patient-specific dose kernels, treatment delivery was incorporated within the linear accelerator operation. The FLEC was envisioned to work as an accessory tool added to the clinical accelerator. In this article the design and construction of the FLEC prototype that matches our compact design goals are presented. It is controlled using an in-house developed EMET controller. The structure of the software and the hardware characteristics of the EMET controller are demonstrated. Using a parallel plate ionization chamber, output measurements were obtained to validate the Monte Carlo calculations for a range of fields with different energies and sizes. Further verification was performed by comparing 1-D and 2-D dose distributions using energy-independent radiochromic films. Comparisons between Monte Carlo calculations and measurements of complex intensity map deliveries show an overall agreement to within ±3%. This work confirms the design objectives of the FLEC, which allow for automated delivery of EMET. Furthermore, the Monte Carlo dose calculation engine required for EMET planning was validated. The result supports the potential of the prototype FLEC for the planning and delivery of EMET.

  8. Smart sensing to drive real-time loads scheduling algorithm in a domotic architecture

    NASA Astrophysics Data System (ADS)

    Santamaria, Amilcare Francesco; Raimondo, Pierfrancesco; De Rango, Floriano; Vaccaro, Andrea

    2014-05-01

    Nowadays, power consumption is an important concern, both for the associated costs and for environmental sustainability. Automatic load control based on power consumption and usage cycles is an effective solution for cost containment. The purpose of these systems is to modulate the power request of electricity, avoiding unorganized operation of the loads, using intelligent techniques to manage them based on real-time scheduling algorithms. The goal is to coordinate a set of electrical loads to optimize energy costs and consumption based on the stipulated contract terms. The proposed algorithm uses two main notions: priority-driven loads and smart scheduling loads. The priority-driven loads can be turned off (stand-by) according to a priority policy established by the user if consumption exceeds a defined threshold; in contrast, smart scheduling loads are scheduled so as not to interrupt their life cycle (LC), safeguarding the devices' functions or allowing the user to freely use the devices without the risk of exceeding the power threshold. The algorithm, using these two notions and taking user requirements into account, manages load activation and deactivation, allowing loads to complete their operation cycle without exceeding the consumption threshold, in an off-peak time range according to the electricity fare. This logic is inspired by industrial lean manufacturing, whose focus is to minimize any kind of power waste while optimizing the available resources.

  9. Can SNOMED CT be squeezed without losing its shape?

    PubMed

    López-García, Pablo; Schulz, Stefan

    2016-09-21

    In biomedical applications where the size and complexity of SNOMED CT become problematic, using a smaller subset that can act as a reasonable substitute is usually preferred. In a special class of use cases, such as ontology-based quality assurance or scaling experiments for real-time performance, it is essential that modules show a similar shape to SNOMED CT in terms of concept distribution per sub-hierarchy. Exactly how to extract such balanced modules remains unclear, as most previous work on ontology modularization has focused on other problems. In this study, we investigate to what extent extracting balanced modules that preserve the original shape of SNOMED CT is possible, by presenting and evaluating an iterative algorithm. We used a graph-traversal modularization approach based on an input signature. To conform to our definition of a balanced module, we implemented an iterative algorithm that carefully bootstrapped and dynamically adjusted the signature at each step. We measured the error for each sub-hierarchy and defined convergence as a residual sum of squares <1. Using 2000 concepts as an initial signature, our algorithm converged after seven iterations and extracted a module 4.7% the size of SNOMED CT. Seven sub-hierarchies were either over- or under-represented within a range of 1-8%. Our study shows that balanced modules from large terminologies can be extracted using ontology graph-traversal modularization techniques under certain conditions: that the process is repeated a number of times, the input signature is dynamically adjusted in each iteration, and a moderate under/over-representation of some hierarchies is tolerated. In the case of SNOMED CT, our results conclusively show that it can be squeezed to less than 5% of its size without any sub-hierarchy losing its shape by more than 8%, which is likely sufficient in most use cases.

  10. Maximum likelihood positioning algorithm for high-resolution PET scanners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gross-Weege, Nicolas, E-mail: nicolas.gross-weege@pmi.rwth-aachen.de, E-mail: schulz@pmi.rwth-aachen.de; Schug, David; Hallen, Patrick

    2016-06-15

    Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML algorithm is less prone to missing channel information. A likelihood filter visually improved the image quality, i.e., the peak-to-valley increased up to a factor of 3 for 2-mm-diameter phantom rods by rejecting 87% of the coincidences. A relative improvement of the energy resolution of up to 12.8% was also measured rejecting 91% of the coincidences. Conclusions: The developed ML algorithm increases the sensitivity by correctly handling missing channel information without influencing energy resolution or image quality. Furthermore, the authors showed that energy resolution and image quality can be improved substantially by rejecting events that do not comply well with the single-gamma-interaction model, such as Compton-scattered events.
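
    A bare-bones version of the ML positioning idea is to pick the crystal whose expected light distribution best explains the measured channel values. In the sketch below the per-channel PDFs are modeled as independent Gaussians with invented means and widths; the paper instead builds its PDFs from measured data under a single-gamma-interaction model.

        # Maximum-likelihood crystal identification sketch (illustrative Gaussian PDFs per channel).
        import numpy as np

        rng = np.random.default_rng(6)
        n_crystals, n_channels = 16, 8
        # Hypothetical expected light distribution (mean, sigma) per crystal and channel.
        means = rng.uniform(10, 100, (n_crystals, n_channels))
        sigmas = 0.1 * means

        def ml_position(measured):
            # Gaussian log-likelihood of the measured channel values for each crystal.
            loglik = -0.5 * np.sum(((measured - means) / sigmas) ** 2
                                   + np.log(2 * np.pi * sigmas**2), axis=1)
            return int(np.argmax(loglik))

        true_crystal = 5
        measured = rng.normal(means[true_crystal], sigmas[true_crystal])
        print("identified crystal:", ml_position(measured), "(true:", true_crystal, ")")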

  11. Experimental validation of improved 3D SBP positioning algorithm in PET applications using UW Phase II Board

    NASA Astrophysics Data System (ADS)

    Jorge, L. S.; Bonifacio, D. A. B.; DeWitt, Don; Miyaoka, R. S.

    2016-12-01

    Continuous scintillator-based detectors have been considered as a competitive and cheaper approach than highly pixelated discrete crystal positron emission tomography (PET) detectors, despite the need for algorithms to estimate 3D gamma interaction position. In this work, we report on the implementation of a positioning algorithm to estimate the 3D interaction position in a continuous crystal PET detector using a Field Programmable Gate Array (FPGA). The evaluated method is the Statistics-Based Processing (SBP) technique that requires light response function and event position characterization. An algorithm has been implemented using the Verilog language and evaluated using a data acquisition board that contains an Altera Stratix III FPGA. The 3D SBP algorithm was previously successfully implemented on a Stratix II FPGA using simulated data and a different module design. In this work, improvements were made to the FPGA coding of the 3D positioning algorithm, reducing the total memory usage to around 34%. Further the algorithm was evaluated using experimental data from a continuous miniature crystal element (cMiCE) detector module. Using our new implementation, average FWHM (Full Width at Half Maximum) for the whole block is 1.71±0.01 mm, 1.70±0.01 mm and 1.632±0.005 mm for x, y and z directions, respectively. Using a pipelined architecture, the FPGA is able to process 245,000 events per second for interactions inside of the central area of the detector that represents 64% of the total block area. The weighted average of the event rate by regional area (corner, border and central regions) is about 198,000 events per second. This event rate is greater than the maximum expected coincidence rate for any given detector module in future PET systems using the cMiCE detector design.

  12. District Heating Systems Performance Analyses. Heat Energy Tariff

    NASA Astrophysics Data System (ADS)

    Ziemele, Jelena; Vigants, Girts; Vitolins, Valdis; Blumberga, Dagnija; Veidenbergs, Ivars

    2014-12-01

    The paper addresses an important element of the European energy sector: the evaluation of district heating (DH) system operations from the standpoint of increasing energy efficiency and increasing the use of renewable energy resources. This has been done by developing a new methodology for the evaluation of the heat tariff. The paper presents an algorithm of this methodology, which includes not only a data base and calculation equation systems, but also an integrated multi-criteria analysis module using MADM/MCDM (Multi-Attribute Decision Making / Multi-Criteria Decision Making) based on TOPSIS (Technique for Order Performance by Similarity to Ideal Solution). The results of the multi-criteria analysis are used to set the tariff benchmarks. The evaluation methodology has been tested for Latvian heat tariffs, and the obtained results show that only half of heating companies reach a benchmark value equal to 0.5 for the efficiency closeness to the ideal solution indicator. This means that the proposed evaluation methodology would not only allow companies to determine how they perform with regard to the proposed benchmark, but also to identify their need to restructure so that they may reach the level of a low-carbon business.
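
    As a minimal illustration of the TOPSIS step used for benchmarking above, the sketch below computes the relative closeness to the ideal solution for a few hypothetical companies. The decision matrix, weights, and benefit-type criteria are invented for illustration and do not correspond to the paper's indicator set.

        # Minimal TOPSIS sketch: closeness to the ideal solution.
        import numpy as np

        X = np.array([[0.82, 0.30, 0.55],      # decision matrix: rows = companies,
                      [0.65, 0.45, 0.70],      # columns = criteria (all benefit-type here)
                      [0.90, 0.20, 0.40]])
        w = np.array([0.5, 0.2, 0.3])          # criteria weights

        V = w * X / np.linalg.norm(X, axis=0)  # weighted, vector-normalized matrix
        ideal, anti_ideal = V.max(axis=0), V.min(axis=0)
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti_ideal, axis=1)
        closeness = d_minus / (d_plus + d_minus)
        print("closeness to ideal solution:", np.round(closeness, 3))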

  13. A maximum power point tracking algorithm for buoy-rope-drum wave energy converters

    NASA Astrophysics Data System (ADS)

    Wang, J. Q.; Zhang, X. C.; Zhou, Y.; Cui, Z. C.; Zhu, L. S.

    2016-08-01

    The maximum power point tracking control is the key link to improve the energy conversion efficiency of wave energy converters (WEC). This paper presents a novel variable step size Perturb and Observe maximum power point tracking algorithm with a power classification standard for control of a buoy-rope-drum WEC. The algorithm and simulation model of the buoy-rope-drum WEC are presented in details, as well as simulation experiment results. The results show that the algorithm tracks the maximum power point of the WEC fast and accurately.

  14. Canceling the momentum in a phase-shifting algorithm to eliminate spatially uniform errors.

    PubMed

    Hibino, Kenichi; Kim, Yangjin

    2016-08-10

    In phase-shifting interferometry, phase modulation nonlinearity causes both spatially uniform and nonuniform errors in the measured phase. Conventional linear-detuning error-compensating algorithms only eliminate the spatially variable error component. The uniform error is proportional to the inertial momentum of the data-sampling weight of a phase-shifting algorithm. This paper proposes a design approach to cancel the momentum by using characteristic polynomials in the Z-transform space and shows that an arbitrary M-frame algorithm can be modified to a new (M+2)-frame algorithm that acquires new symmetry to eliminate the uniform error.

  15. ModuleMiner - improved computational detection of cis-regulatory modules: are there different modes of gene regulation in embryonic development and adult tissues?

    PubMed Central

    Van Loo, Peter; Aerts, Stein; Thienpont, Bernard; De Moor, Bart; Moreau, Yves; Marynen, Peter

    2008-01-01

    We present ModuleMiner, a novel algorithm for computationally detecting cis-regulatory modules (CRMs) in a set of co-expressed genes. ModuleMiner outperforms other methods for CRM detection on benchmark data, and successfully detects CRMs in tissue-specific microarray clusters and in embryonic development gene sets. Interestingly, CRM predictions for differentiated tissues exhibit strong enrichment close to the transcription start site, whereas CRM predictions for embryonic development gene sets are depleted in this region. PMID:18394174

  16. A real-time and closed-loop control algorithm for cascaded multilevel inverter based on artificial neural network.

    PubMed

    Wang, Libing; Mao, Chengxiong; Wang, Dan; Lu, Jiming; Zhang, Junfeng; Chen, Xun

    2014-01-01

    In order to control the cascaded H-bridges (CHB) converter with a staircase modulation strategy in a real-time manner, a real-time, closed-loop control algorithm based on an artificial neural network (ANN) for a three-phase CHB converter is proposed in this paper. It requires little computation time and memory, and it has two steps. In the first step, the hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC) algorithm is employed to minimize the total harmonic distortion (THD) and generate the optimal switching angles offline. In the second step, some of the optimal switching angles are used to train an ANN, and the well-designed ANN can generate optimal switching angles in a real-time manner. Compared with previous real-time algorithms, the proposed algorithm is suitable for a wider range of modulation index and results in a smaller THD and a lower calculation time. Furthermore, the well-designed ANN is embedded into a closed-loop control algorithm for the CHB converter with variable direct-current (DC) voltage sources. Simulation results demonstrate that the proposed closed-loop control algorithm is able to quickly stabilize the load voltage and minimize the line current's THD (<5%) when subjected to DC source or load disturbances. In the real design stage, a switching angle pulse generation scheme is proposed and experimental results verify its correctness.

  17. Traffic off-balancing algorithm for energy efficient networks

    NASA Astrophysics Data System (ADS)

    Kim, Junhyuk; Lee, Chankyun; Rhee, June-Koo Kevin

    2011-12-01

    The physical layer of a high-end network system uses multiple interface arrays. From a load-balancing perspective, a light load can be distributed across multiple interfaces; however, this can cause energy inefficiency because many interfaces are poorly utilized. To tackle this energy inefficiency, a traffic off-balancing algorithm for traffic-adaptive interface sleep/wake is investigated. As a reference model, 40G/100G Ethernet is investigated. We report that the suggested algorithm can achieve energy efficiency while satisfying traffic transmission requirements.

  18. SU-F-J-72: A Clinical Usable Integrated Contouring Quality Evaluation Software for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, S; Dolly, S; Cai, B

    Purpose: To introduce the Auto Contour Evaluation (ACE) software, a clinically usable, user-friendly, efficient, all-in-one toolbox for automatically identifying common contouring errors in radiotherapy treatment planning using supervised machine learning techniques. Methods: ACE is developed with C# using the Microsoft .Net framework and Windows Presentation Foundation (WPF) for elegant GUI design and smooth GUI transition animations, through the integration of graphics engines and high dots per inch (DPI) settings on modern high resolution monitors. The industry-standard Model-View-ViewModel (MVVM) software design pattern is chosen as the major architecture of ACE for a neat coding structure, deep modularization, easy maintainability and seamless communication with other clinical software. ACE consists of 1) a patient data importing module integrated with the clinical patient database server, 2) a module for simultaneously displaying 2D DICOM images and RT structures, 3) a 3D RT structure visualization module using the Visualization Toolkit (VTK) library and 4) a contour evaluation module using supervised pattern recognition algorithms to detect contouring errors and display detection results. ACE relies on supervised learning algorithms to handle all image processing and data processing jobs. Implementations of the related algorithms are powered by the Accord.Net scientific computing library for better efficiency and effectiveness. Results: ACE can take a patient's CT images and RT structures from commercial treatment planning software via direct user input or from the patients' database. All functionalities, including 2D and 3D image visualization and RT contour error detection, have been demonstrated with real clinical patient cases. Conclusion: ACE implements supervised learning algorithms and combines image processing and graphical visualization modules for RT contour verification. ACE has great potential for automated radiotherapy contouring quality verification. Structured with the MVVM pattern, it is highly maintainable and extensible, and supports smooth connections with other clinical software tools.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lave, Matthew Samuel; Stein, Joshua S.; Burnham, Laurie

    A 9.6 kW test array of Prism bifacial modules and reference monofacial modules installed in February 2016 at the New Mexico Regional Test Center has produced six months of performance data. The data reveal that the Prism modules are out-performing the monofacial modules, with bifacial gains in energy over the six-month period ranging from 18% to 136%, depending on the orientation and ground albedo. These measured bifacial gains were found to be in good agreement with modeled bifacial gains using equations previously published by Prism. The most dramatic increase in performance was seen among the vertically tilted, west-facing modules, where the bifacial modules produced more than double the energy of monofacial modules and more energy than monofacial modules at any orientation. Because peak energy generation (mid-morning and mid-afternoon) for these bifacial modules may best match load on the electric grid, the west-facing orientation may be more economically desirable than traditional south-facing module orientations (which peak at solar noon).

  20. Selective Sensing of Gas Mixture via a Temperature Modulation Approach: New Strategy for Potentiometric Gas Sensor Obtaining Satisfactory Discriminating Features

    PubMed Central

    Li, Fu-an; Jin, Han; Wang, Jinxia; Zou, Jie; Jian, Jiawen

    2017-01-01

    A new strategy to discriminate four types of hazardous gases is proposed in this research. Through modulating the operating temperature and processing the response signal with a pattern recognition algorithm, a gas sensor consisting of a single sensing electrode, i.e., a ZnO/In2O3 composite, is designed to differentiate NO2, NH3, C3H6 and CO within the level of 50–400 ppm. Results indicate that with the addition of 15 wt.% ZnO to In2O3, the sensor fabricated at 900 °C shows optimal sensing characteristics in detecting all the studied gases. Moreover, with the aid of the principal component analysis (PCA) algorithm, the sensor operating in the temperature modulation mode demonstrates acceptable discrimination features. These satisfactory discrimination features suggest that it is possible to differentiate gas mixtures efficiently by operating a single-electrode sensor in temperature modulation mode. PMID:28287492
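
    The discrimination step can be illustrated with a small PCA projection of temperature-modulated response patterns, as sketched below. The synthetic response prototypes, noise, and use of scikit-learn are assumptions; real data would come from the ZnO/In2O3 electrode at the modulated operating temperatures.

        # PCA sketch: project temperature-modulated response patterns into two components.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(7)
        gases = ["NO2", "NH3", "C3H6", "CO"]
        # Hypothetical 6-point response pattern (one value per operating temperature).
        prototypes = rng.uniform(-40, 40, (4, 6))
        X = np.vstack([proto + rng.normal(0, 2, (20, 6)) for proto in prototypes])
        labels = np.repeat(gases, 20)

        scores = PCA(n_components=2).fit_transform(X)
        for gas in gases:
            centroid = scores[labels == gas].mean(axis=0)
            print(f"{gas}: PC1/PC2 centroid = {np.round(centroid, 2)}")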

  1. Pressure modulation algorithm to separate cerebral hemodynamic signals from extracerebral artifacts

    PubMed Central

    Baker, Wesley B.; Parthasarathy, Ashwin B.; Ko, Tiffany S.; Busch, David R.; Abramson, Kenneth; Tzeng, Shih-Yu; Mesquita, Rickson C.; Durduran, Turgut; Greenberg, Joel H.; Kung, David K.; Yodh, Arjun G.

    2015-01-01

    Abstract. We introduce and validate a pressure measurement paradigm that reduces extracerebral contamination from superficial tissues in optical monitoring of cerebral blood flow with diffuse correlation spectroscopy (DCS). The scheme determines subject-specific contributions of extracerebral and cerebral tissues to the DCS signal by utilizing probe pressure modulation to induce variations in extracerebral blood flow. For analysis, the head is modeled as a two-layer medium and is probed with long and short source-detector separations. Then a combination of pressure modulation and a modified Beer-Lambert law for flow enables experimenters to linearly relate differential DCS signals to cerebral and extracerebral blood flow variation without a priori anatomical information. We demonstrate the algorithm’s ability to isolate cerebral blood flow during a finger-tapping task and during graded scalp ischemia in healthy adults. Finally, we adapt the pressure modulation algorithm to ameliorate extracerebral contamination in monitoring of cerebral blood oxygenation and blood volume by near-infrared spectroscopy. PMID:26301255

  2. Deconvolution of the vestibular evoked myogenic potential.

    PubMed

    Lütkenhöner, Bernd; Basel, Türker

    2012-02-07

    The vestibular evoked myogenic potential (VEMP) and the associated variance modulation can be understood by a convolution model. Two functions of time are incorporated into the model: the motor unit action potential (MUAP) of an average motor unit, and the temporal modulation of the MUAP rate of all contributing motor units, briefly called rate modulation. The latter is the function of interest, whereas the MUAP acts as a filter that distorts the information contained in the measured data. Here, it is shown how to recover the rate modulation by undoing the filtering using a deconvolution approach. The key aspects of our deconvolution algorithm are as follows: (1) the rate modulation is described in terms of just a few parameters; (2) the MUAP is calculated by Wiener deconvolution of the VEMP with the rate modulation; (3) the model parameters are optimized using a figure-of-merit function where the most important term quantifies the difference between measured and model-predicted variance modulation. The effectiveness of the algorithm is demonstrated with simulated data. An analysis of real data confirms the view that there are basically two components, which roughly correspond to the waves p13-n23 and n34-p44 of the VEMP. The rate modulation corresponding to the first, inhibitory component is much stronger than that corresponding to the second, excitatory component. But the latter is more extended so that the two modulations have almost the same equivalent rectangular duration. Copyright © 2011 Elsevier Ltd. All rights reserved.
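
    The Wiener deconvolution step named in the abstract can be sketched in a few lines of numpy: divide the measured spectrum by the kernel spectrum with a regularizing noise term. The MUAP kernel, rate-modulation waveform, and signal-to-noise ratio below are synthetic assumptions; the paper additionally fits a parametric rate model against the variance modulation.

        # Wiener deconvolution sketch: recover a rate-modulation waveform from a measured trace.
        import numpy as np

        fs, n = 5000.0, 2048
        t = np.arange(n) / fs
        # Assumed biphasic MUAP kernel and a two-lobe rate modulation.
        muap = np.exp(-((t - 0.01) / 0.003) ** 2) - 0.8 * np.exp(-((t - 0.018) / 0.004) ** 2)
        rate = np.exp(-((t - 0.10) / 0.01) ** 2) - 0.5 * np.exp(-((t - 0.14) / 0.012) ** 2)

        measured = np.real(np.fft.ifft(np.fft.fft(rate) * np.fft.fft(muap)))
        measured += 0.01 * np.random.default_rng(8).normal(size=n)

        H = np.fft.fft(muap)
        snr = 100.0                                        # assumed signal-to-noise ratio
        wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr) # Wiener inverse filter
        recovered = np.real(np.fft.ifft(np.fft.fft(measured) * wiener))
        print("correlation with true rate modulation:",
              round(float(np.corrcoef(rate, recovered)[0, 1]), 3))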

  3. A Novel Energy-Aware Distributed Clustering Algorithm for Heterogeneous Wireless Sensor Networks in the Mobile Environment

    PubMed Central

    Gao, Ying; Wkram, Chris Hadri; Duan, Jiajie; Chou, Jarong

    2015-01-01

    In order to prolong the network lifetime, energy-efficient protocols adapted to the features of wireless sensor networks should be used. This paper explores in depth the nature of heterogeneous wireless sensor networks, and finally proposes an algorithm to address the problem of effective energy-aware clustering in heterogeneous networks. The proposed algorithm implements cluster head selection according to the degree of energy attenuation during network operation and the degree of candidate nodes' effective coverage of the whole network, so as to obtain even energy consumption over the whole network in situations with a high degree of coverage. Simulation results show that the proposed clustering protocol has better adaptability to heterogeneous environments than existing clustering algorithms in prolonging the network lifetime. PMID:26690440

  4. A Coulomb collision algorithm for weighted particle simulations

    NASA Technical Reports Server (NTRS)

    Miller, Ronald H.; Combi, Michael R.

    1994-01-01

    A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.

  5. Man and Energy, Module C. Fourth Grade. Pilot Form.

    ERIC Educational Resources Information Center

    Pasco County Schools, Dade City, FL.

    This booklet is one of a set of learning modules on energy for use by students and teachers in the fourth grade. This module investigates solar energy, ecology, and fossil fuels. Included are laboratory activities and values exercises. (BT)

  6. Calculation of the Respiratory Modulation of the Photoplethysmogram (DPOP) Incorporating a Correction for Low Perfusion

    PubMed Central

    Addison, Paul S.; Wang, Rui; McGonigle, Scott J.; Bergese, Sergio D.

    2014-01-01

    DPOP quantifies respiratory modulations in the photoplethysmogram. It has been proposed as a noninvasive surrogate for pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. The correlation between DPOP and PPV may degrade due to low perfusion effects. We implemented an automated DPOP algorithm with an optional correction for low perfusion. These two algorithm variants (DPOPa and DPOPb) were tested on data from 20 mechanically ventilated OR patients split into a benign “stable region” subset and a whole record “global set.” Strong correlation was found between DPOP and PPV for both algorithms when applied to the stable data set: R = 0.83/0.85 for DPOPa/DPOPb. However, a marked improvement was found when applying the low perfusion correction to the global data set: R = 0.47/0.73 for DPOPa/DPOPb. Sensitivities, Specificities, and AUCs were 0.86, 0.70, and 0.88 for DPOPa/stable region; 0.89, 0.82, and 0.92 for DPOPb/stable region; 0.81, 0.61, and 0.73 for DPOPa/global region; 0.83, 0.76, and 0.86 for DPOPb/global region. An improvement was found in all results across both data sets when using the DPOPb algorithm. Further, DPOPb showed marked improvements, both in terms of its values, and correlation with PPV, for signals exhibiting low percent modulations. PMID:25177348
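
    For context, DPOP is conventionally computed from the beat-to-beat plethysmographic pulse amplitudes over a respiratory cycle as the difference between the largest and smallest amplitudes divided by their mean. The sketch below applies that conventional definition to invented beat amplitudes; the low-perfusion correction that distinguishes the DPOPb variant in the paper is not reproduced here.

        # Sketch of the conventional DPOP calculation from per-beat pleth amplitudes.
        import numpy as np

        # Hypothetical beat-to-beat pulse amplitudes across one respiratory cycle.
        beat_amplitudes = np.array([1.00, 1.08, 1.15, 1.10, 0.98, 0.90, 0.93])

        pop_max, pop_min = beat_amplitudes.max(), beat_amplitudes.min()
        dpop_percent = 100.0 * (pop_max - pop_min) / ((pop_max + pop_min) / 2.0)
        print(f"DPOP = {dpop_percent:.1f} %")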

  7. Super-resolution processing for multi-functional LPI waveforms

    NASA Astrophysics Data System (ADS)

    Li, Zhengzheng; Zhang, Yan; Wang, Shang; Cai, Jingxiao

    2014-05-01

    Super-resolution (SR) is a radar processing technique closely related to pulse compression (or the correlation receiver). Many super-resolution algorithms have been developed for improved range resolution and reduced sidelobe contamination. Traditionally, the waveforms used for SR have been either phase-coded (such as the LKP3 code or Barker code) or frequency modulated (chirp, or nonlinear frequency modulation). There is, however, an important class of waveforms which are either random in nature (such as random noise waveforms) or randomly modulated for multi-function operation (such as the ADS-B radar signals in [1]). These waveforms have the advantage of low probability of intercept (LPI). If the existing SR techniques can be applied to these waveforms, there will be much more flexibility for using them in actual sensing missions. Also, SR usually has the great advantage that the final output (as an estimate of the ground truth) is largely independent of the waveform. Such benefits are attractive to many important primary radar applications. In this paper a general introduction to SR algorithms is provided first, and some implementation considerations are discussed. The selected algorithms are applied to typical LPI waveforms, and the results are discussed. It is observed that SR algorithms can be reliably used for LPI waveforms; on the other hand, practical considerations should be kept in mind in order to obtain optimal estimation results.

  8. [Design and implementation of real-time continuous glucose monitoring instrument].

    PubMed

    Huang, Yonghong; Liu, Hongying; Tian, Senfu; Jia, Ziru; Wang, Zi; Pi, Xitian

    2017-12-01

    Real-time continuous glucose monitoring can help diabetics to control blood sugar levels within the normal range. However, in practical monitoring, the output of a real-time continuous glucose monitoring system is susceptible to glucose sensor and environmental noise, which influences the measurement accuracy of the system. Aiming at this problem, a dual-calibration algorithm combining a moving-window double-layer filtering algorithm with a real-time self-compensation calibration algorithm is proposed in this paper, which can realize signal drift compensation for the current data. A real-time continuous glucose monitoring instrument based on this study was designed. This instrument consisted of an adjustable excitation voltage module, a current-voltage converter module, a microprocessor and a wireless transceiver module. For portability, the size of the device was only 40 mm × 30 mm × 5 mm and its weight was only 30 g. In addition, a communication command code algorithm was designed to ensure the security and integrity of data transmission in this study. Results of in vitro experiments showed that the current detection of the device worked effectively. A 5-hour monitoring of blood glucose level in vivo showed that the device could continuously monitor blood glucose in real time. The relative error of the monitoring results of the designed device ranged from 2.22% to 7.17% when compared to a portable blood glucose meter.

  9. A Smart Power Electronic Multiconverter for the Residential Sector.

    PubMed

    Guerrero-Martinez, Miguel Angel; Milanes-Montero, Maria Isabel; Barrero-Gonzalez, Fermin; Miñambres-Marcos, Victor Manuel; Romero-Cadaval, Enrique; Gonzalez-Romera, Eva

    2017-05-26

    The future of the grid includes distributed generation and smart grid technologies. Demand Side Management (DSM) systems will also be essential to achieve a high level of reliability and robustness in power systems. To do that, expanding the Advanced Metering Infrastructure (AMI) and Energy Management Systems (EMS) are necessary. The trend direction is towards the creation of energy resource hubs, such as the smart community concept. This paper presents a smart multiconverter system for residential/housing sector with a Hybrid Energy Storage System (HESS) consisting of supercapacitor and battery, and with local photovoltaic (PV) energy source integration. The device works as a distributed energy unit located in each house of the community, receiving active power set-points provided by a smart community EMS. This central EMS is responsible for managing the active energy flows between the electricity grid, renewable energy sources, storage equipment and loads existing in the community. The proposed multiconverter is responsible for complying with the reference active power set-points with proper power quality; guaranteeing that the local PV modules operate with a Maximum Power Point Tracking (MPPT) algorithm; and extending the lifetime of the battery thanks to a cooperative operation of the HESS. A simulation model has been developed in order to show the detailed operation of the system. Finally, a prototype of the multiconverter platform has been implemented and some experimental tests have been carried out to validate it.

  10. A Smart Power Electronic Multiconverter for the Residential Sector

    PubMed Central

    Guerrero-Martinez, Miguel Angel; Milanes-Montero, Maria Isabel; Barrero-Gonzalez, Fermin; Miñambres-Marcos, Victor Manuel; Romero-Cadaval, Enrique; Gonzalez-Romera, Eva

    2017-01-01

    The future of the grid includes distributed generation and smart grid technologies. Demand Side Management (DSM) systems will also be essential to achieve a high level of reliability and robustness in power systems. To do that, expanding the Advanced Metering Infrastructure (AMI) and Energy Management Systems (EMS) are necessary. The trend direction is towards the creation of energy resource hubs, such as the smart community concept. This paper presents a smart multiconverter system for residential/housing sector with a Hybrid Energy Storage System (HESS) consisting of supercapacitor and battery, and with local photovoltaic (PV) energy source integration. The device works as a distributed energy unit located in each house of the community, receiving active power set-points provided by a smart community EMS. This central EMS is responsible for managing the active energy flows between the electricity grid, renewable energy sources, storage equipment and loads existing in the community. The proposed multiconverter is responsible for complying with the reference active power set-points with proper power quality; guaranteeing that the local PV modules operate with a Maximum Power Point Tracking (MPPT) algorithm; and extending the lifetime of the battery thanks to a cooperative operation of the HESS. A simulation model has been developed in order to show the detailed operation of the system. Finally, a prototype of the multiconverter platform has been implemented and some experimental tests have been carried out to validate it. PMID:28587131

  11. Energy-Efficient Routing and Spectrum Assignment Algorithm with Physical-Layer Impairments Constraint in Flexible Optical Networks

    NASA Astrophysics Data System (ADS)

    Zhao, Jijun; Zhang, Nawa; Ren, Danping; Hu, Jinhua

    2017-12-01

    The recently proposed flexible optical network can provide more efficient accommodation of multiple data rates than the current wavelength-routed optical networks. Meanwhile, energy efficiency has also become a hot topic because of the serious energy consumption problem. In this paper, the energy efficiency problem of flexible optical networks with a physical-layer impairments constraint is studied. We propose a combined impairment-aware and energy-efficient routing and spectrum assignment (RSA) algorithm based on link availability, in which the impact of power consumption minimization on signal quality is considered. By applying the proposed algorithm, the connection requests are established on a subset of the network topology, reducing the number of transitions from sleep to active state. The simulation results demonstrate that our proposed algorithm can improve energy efficiency and spectrum resource utilization while keeping the blocking probability and average delay acceptable.
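
    The paper's RSA heuristic is not spelled out in the abstract; the sketch below only illustrates the generic building block of spectrum assignment in flexible optical networks, a first-fit search for contiguous slots that are free on every link of a candidate path (the data structures and names are assumptions, not the authors' algorithm):

        def first_fit_spectrum(path_links, link_slots, demand_slots):
            """Return the first starting index of `demand_slots` contiguous slots
            that are free (True) on every link of the path, or None if blocked."""
            total = len(next(iter(link_slots.values())))
            for start in range(total - demand_slots + 1):
                window = range(start, start + demand_slots)
                if all(all(link_slots[l][s] for s in window) for l in path_links):
                    for l in path_links:                    # reserve the slots
                        for s in window:
                            link_slots[l][s] = False
                    return start
            return None                                     # request is blocked

        slots = {"A-B": [True] * 10, "B-C": [True] * 10}
        print(first_fit_spectrum(["A-B", "B-C"], slots, 3))  # -> 0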

  12. Polarizable Molecular Dynamics in a Polarizable Continuum Solvent

    PubMed Central

    Lipparini, Filippo; Lagardère, Louis; Raynaud, Christophe; Stamm, Benjamin; Cancès, Eric; Mennucci, Benedetta; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip

    2015-01-01

    We present for the first time scalable polarizable molecular dynamics (MD) simulations within a polarizable continuum solvent with molecular-shape cavities and exact solution of the mutual polarization. The key ingredients are a very efficient algorithm for solving the equations associated with the polarizable continuum, in particular the domain decomposition Conductor-like Screening Model (ddCOSMO); a rigorous coupling of the continuum with the polarizable force field achieved through a robust variational formulation; and an effective strategy to solve the coupled equations. The coupling of ddCOSMO with non-variational force fields, including AMOEBA, is also addressed. The MD simulations are feasible, for real-life systems, on standard cluster nodes; a scalable parallel implementation allows for further speed-up in the context of a newly developed module in Tinker, named Tinker-HP. NVE simulations are stable and long-term energy conservation can be achieved. This paper is focused on the methodological developments, on the analysis of the algorithm and on the stability of the simulations; a proof-of-concept application is also presented to attest the possibilities of this newly developed technique. PMID:26516318

  13. Heating Structures Derived from Satellite

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Adler, R.; Haddad, Z.; Hou, A.; Kakar, R.; Krishnamurti, T. N.; Kummerow, C.; Lang, S.; Meneghini, R.; Olson, W.

    2004-01-01

    Rainfall is a key link in the hydrologic cycle and is a primary heat source for the atmosphere. The vertical distribution of latent-heat release, which is accompanied by rainfall, modulates the large-scale circulations of the tropics and in turn can impact midlatitude weather. This latent heat release is a consequence of phase changes between vapor, liquid, and solid water. The Tropical Rainfall Measuring Mission (TRMM), a joint U.S./Japan space project, was launched in November 1997. It provides an accurate measurement of rainfall over the global tropics which can be used to estimate the four-dimensional structure of latent heating over the global tropics. The distributions of rainfall and inferred heating can be used to advance our understanding of the global energy and water cycle. This paper describes several different algorithms for estimating latent heating using TRMM observations. The strengths and weaknesses of each algorithm as well as the heating products are also discussed. The validation of heating products will be exhibited. Finally, the application of this heating information to global circulation and climate models is presented.

  14. An Optimal CDS Construction Algorithm with Activity Scheduling in Ad Hoc Networks

    PubMed Central

    Penumalli, Chakradhar; Palanichamy, Yogesh

    2015-01-01

    A new energy-efficient optimal Connected Dominating Set (CDS) algorithm with activity scheduling for mobile ad hoc networks (MANETs) is proposed. This algorithm achieves energy efficiency by minimizing the Broadcast Storm Problem (BSP) while at the same time considering the nodes' remaining energy. The Connected Dominating Set is widely used as a virtual backbone or spine in mobile ad hoc networks (MANETs) or wireless sensor networks (WSNs). The CDS of a graph representing a network has a significant impact on the efficient design of routing protocols in wireless networks. Here, the CDS is constructed by a distributed algorithm with activity scheduling based on the unit disk graph (UDG) model. The node's mobility and residual energy (RE) are considered as parameters in the construction of a stable, optimal, energy-efficient CDS. The performance is evaluated at various node densities, transmission ranges, and mobility rates. The theoretical analysis and simulation results of this algorithm are also presented and yield better results. PMID:26221627
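
    The abstract does not give pseudocode; a common baseline that such approaches build on is the greedy "gray/black" CDS construction, sketched below with residual energy used only as a tie-breaker. The graph representation, weighting and the assumption of a connected unit disk graph are illustrative, not the authors' exact distributed algorithm:

        def greedy_cds(adj, energy):
            """adj: dict node -> set of neighbours; energy: dict node -> residual energy.
            Returns a connected dominating set built by the classic greedy colouring:
            black = in CDS, gray = dominated, white = not yet dominated.
            Assumes the graph (e.g. a unit disk graph) is connected."""
            white = set(adj)
            black, gray = set(), set()
            # Seed with the node covering the most neighbours (energy breaks ties).
            seed = max(adj, key=lambda n: (len(adj[n]), energy[n]))
            black.add(seed); white.discard(seed)
            gray |= adj[seed] & white; white -= adj[seed]
            while white:
                # Only gray nodes may turn black, which keeps the black set connected.
                cand = max(gray, key=lambda n: (len(adj[n] & white), energy[n]))
                gray.discard(cand); black.add(cand)
                newly = adj[cand] & white
                gray |= newly; white -= newly
            return black

        adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2, 5}, 5: {4}}
        print(greedy_cds(adj, {n: 1.0 for n in adj}))   # e.g. {2, 4}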

  15. An evaluation and implementation of rule-based Home Energy Management System using the Rete algorithm.

    PubMed

    Kawakami, Tomoya; Fujita, Naotaka; Yoshihisa, Tomoki; Tsukamoto, Masahiko

    2014-01-01

    In recent years, sensors have become popular and the Home Energy Management System (HEMS) has taken an important role in saving energy without a decrease in QoL (Quality of Life). Many rule-based HEMSs have been proposed and almost all of them assume "IF-THEN" rules. The Rete algorithm is a typical pattern-matching algorithm for IF-THEN rules. We have proposed a rule-based HEMS using the Rete algorithm. In the proposed system, rules for managing energy are processed by smart taps in the network, and the loads for processing rules and collecting data are distributed to the smart taps. In addition, the number of processes and the amount of collected data are reduced by processing rules based on the Rete algorithm. In this paper, we evaluated the proposed system by simulation. In the simulation environment, rules are processed by the smart tap that relates to the action part of each rule. In addition, we implemented the proposed system as a HEMS using smart taps.
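
    The Rete network itself is considerably more involved; as a toy illustration of the underlying idea (re-evaluating only the IF-THEN rules whose referenced measurements actually changed, rather than re-checking every rule on every update), one might sketch the following, where the rule format, sensor names and thresholds are all assumptions:

        # Each rule: (name, {sensor: predicate, ...}, action)
        rules = [
            ("cut_standby", {"tv_power_w": lambda w: w < 5,
                             "room_occupied": lambda o: not o},
             lambda: print("switch smart tap OFF")),
        ]

        facts = {"tv_power_w": 120.0, "room_occupied": True}

        def update(sensor, value):
            """Update one fact and re-check only the rules that mention that sensor,
            loosely mimicking the Rete idea of avoiding full re-evaluation."""
            facts[sensor] = value
            for name, conds, action in rules:
                if sensor in conds and all(pred(facts[s]) for s, pred in conds.items()):
                    action()

        update("tv_power_w", 2.0)        # no firing: room is still occupied
        update("room_occupied", False)   # fires "cut_standby"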

  16. PPM-based System for Guided Waves Communication Through Corrosion Resistant Multi-wire Cables

    NASA Astrophysics Data System (ADS)

    Trane, G.; Mijarez, R.; Guevara, R.; Pascacio, D.

    Novel wireless communication channels are a necessity in applications surrounded by harsh environments, for instance down-hole oil reservoirs. Traditional radio frequency (RF) communication schemes are not capable of transmitting signals through metal enclosures surrounded by corrosive gases and liquids. As an alternative to RF, a pulse position modulation (PPM) guided-wave communication system has been developed and evaluated using a corrosion-resistant 4H18 multi-wire cable, commonly used to descend electronic gauges in down-hole oil applications, as the communication medium. The system consists of a transmitter and a receiver that utilize a PZT crystal, for electrical/mechanical coupling, attached to each end of the multi-wire cable. The modulator is based on a microcontroller, which transmits 60 kHz guided-wave pulses, and the demodulator is based on a commercial digital signal processor (DSP) module that performs real-time DSP algorithms. Experimental results are presented, which were obtained using a 1 m corrosion-resistant 4H18 multi-wire cable, commonly used with down-hole electronic gauges in the oil sector. Although there was significant dispersion and multiple mode excitation of the transmitted guided-wave energy pulses, the results show that data rates on the order of 500 bits per second are readily achievable employing PPM and simple communication techniques.
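
    As a rough illustration of pulse position modulation itself (not the authors' firmware), a 4-PPM encoder maps each 2-bit symbol to one of four pulse positions inside a fixed frame; the frame parameters and names below are assumptions:

        def ppm_encode(bits, slots_per_frame=4, samples_per_slot=2):
            """4-PPM: each pair of bits selects the slot of the frame that carries
            the pulse (e.g. a 60 kHz tone burst); all other slots stay silent."""
            assert len(bits) % 2 == 0
            frames = []
            for i in range(0, len(bits), 2):
                symbol = (bits[i] << 1) | bits[i + 1]          # value 0..3
                frame = [0] * (slots_per_frame * samples_per_slot)
                start = symbol * samples_per_slot
                frame[start:start + samples_per_slot] = [1] * samples_per_slot
                frames.extend(frame)
            return frames

        wave = ppm_encode([1, 0, 0, 1])
        print(wave)   # pulse in slot 2 of the first frame, slot 1 of the second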

  17. Development of a control system for a river hydrokinetic turbine and its optimization

    NASA Astrophysics Data System (ADS)

    Tetrault, Philippe

    In line with the development of renewable energies, the present study is intended as a theoretical foundation covering the fundamental principles required for the proper operation and implementation of a river hydrokinetic turbine. The problem behind this new type of device is presented first. The electrical machine used in the application, namely the permanent-magnet synchronous machine, is studied: its mechanical and electrical dynamic equations are developed, introducing at the same time the concept of the rotating reference frame. The operation of the inverter used, a two-level full-bridge semiconductor topology, is explained and put into equations in order to understand the available modulation strategies. A brief history of these strategies is given before emphasizing space vector modulation, which is the strategy used for the present application. The different modules are assembled in a Matlab simulation to confirm their proper operation and to compare the simulation results with the theoretical calculations. Different algorithms for tracking and maintaining an optimal operating point are presented. The behavior of the river is studied in order to assess the magnitude of the disturbances that the system will have to handle. Finally, a new approach is presented and compared with a more conservative strategy using another Matlab simulation model.

  18. NASA Tech Briefs, July 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics covered include: Miniature Intelligent Sensor Module; "Smart" Sensor Module; Portable Apparatus for Electrochemical Sensing of Ethylene; Increasing Linear Dynamic Range of a CMOS Image Sensor; Flight Qualified Micro Sun Sensor; Norbornene-Based Polymer Electrolytes for Lithium Cells; Making Single-Source Precursors of Ternary Semiconductors; Water-Free Proton-Conducting Membranes for Fuel Cells; Mo/Ti Diffusion Bonding for Making Thermoelectric Devices; Photodetectors on Coronagraph Mask for Pointing Control; High-Energy-Density, Low-Temperature Li/CFx Primary Cells; G4-FETs as Universal and Programmable Logic Gates; Fabrication of Buried Nanochannels From Nanowire Patterns; Diamond Smoothing Tools; Infrared Imaging System for Studying Brain Function; Rarefying Spectra of Whispering-Gallery-Mode Resonators; Large-Area Permanent-Magnet ECR Plasma Source; Slot-Antenna/Permanent-Magnet Device for Generating Plasma; Fiber-Optic Strain Gauge With High Resolution And Update Rate; Broadband Achromatic Telecentric Lens; Temperature-Corrected Model of Turbulence in Hot Jet Flows; Enhanced Elliptic Grid Generation; Automated Knowledge Discovery From Simulators; Electro-Optical Modulator Bias Control Using Bipolar Pulses; Generative Representations for Automated Design of Robots; Mars-Approach Navigation Using In Situ Orbiters; Efficient Optimization of Low-Thrust Spacecraft Trajectories; Cylindrical Asymmetrical Capacitors for Use in Outer Space; Protecting Against Faults in JPL Spacecraft; Algorithm Optimally Allocates Actuation of a Spacecraft; and Radar Interferometer for Topographic Mapping of Glaciers and Ice Sheets.

  19. Cache and energy efficient algorithms for Nussinov's RNA Folding.

    PubMed

    Zhao, Chunchun; Sahni, Sartaj

    2017-12-06

    An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run-time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses, followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms (Xeon E5, AMD Athlon 64 X2, Intel i7 and PowerPC A2) using two programming languages (C and Java) show that our cache-efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox gives the best run-time and energy performance. The C versions of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical, and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run-time and energy efficiency at the expense of memory, as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
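
    For reference, the classical Nussinov recurrence that all of these variants implement maximizes the number of complementary, non-crossing base pairs. A compact version of the recurrence itself is sketched below; it illustrates the Classical formulation only, not the cache-optimized ByRow/ByBox memory layouts:

        def nussinov(seq, min_loop=1):
            """Classical O(n^3) Nussinov DP: N[i][j] = max number of complementary,
            non-crossing base pairs in seq[i..j]."""
            pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
                     ("G", "U"), ("U", "G")}
            n = len(seq)
            N = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):          # increasing subsequence length
                for i in range(n - span):
                    j = i + span
                    best = max(N[i + 1][j],              # i unpaired
                               N[i][j - 1],              # j unpaired
                               N[i + 1][j - 1] + ((seq[i], seq[j]) in pairs))  # i pairs j
                    for k in range(i + 1, j):            # bifurcation into two substructures
                        best = max(best, N[i][k] + N[k + 1][j])
                    N[i][j] = best
            return N[0][n - 1]

        print(nussinov("GGGAAAUCC"))   # 3 pairs for this toy sequence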

  20. DiME: A Scalable Disease Module Identification Algorithm with Application to Glioma Progression

    PubMed Central

    Liu, Yunpeng; Tennant, Daniel A.; Zhu, Zexuan; Heath, John K.; Yao, Xin; He, Shan

    2014-01-01

    A disease module is a group of molecular components that interact intensively in the disease-specific biological network. Since the connectivity and activity of disease modules may shed light on the molecular mechanisms of pathogenesis and disease progression, their identification has become one of the most important challenges in network medicine, an emerging paradigm to study complex human disease. This paper proposes a novel algorithm, DiME (Disease Module Extraction), to identify putative disease modules from biological networks. We have developed novel heuristics to optimise Community Extraction, a module criterion originally proposed for social network analysis, to extract topological core modules from biological networks as putative disease modules. In addition, we have incorporated a statistical significance measure, B-score, to evaluate the quality of extracted modules. As an application to complex diseases, we have employed DiME to investigate the molecular mechanisms that underpin the progression of glioma, the most common type of brain tumour. We have built low- (grade II) and high- (GBM) grade glioma co-expression networks from three independent datasets and then applied DiME to extract potential disease modules from both networks for comparison. Examination of the interconnectivity of the identified modules has revealed changes in topology and module activity (expression) between low- and high-grade tumours, which are characteristic of the major shifts in the constitution and physiology of tumour cells during glioma progression. Our results suggest that the transcription factors E2F4, AR and ETS1 are potential key regulators in tumour progression. Our DiME compiled software, R/C++ source code, sample data and a tutorial are available at http://www.cs.bham.ac.uk/~szh/DiME. PMID:24523864

  1. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model

    PubMed Central

    Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal

    2016-01-01

    In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomous tracking and chasing a moving target. The first main approach in this novel algorithm is the use of a global matching and local tracking approach. In other words, the algorithm initially finds feature correspondences in a way that an improved binary descriptor is developed for global feature matching and an iterative Lucas–Kanade optical flow algorithm is employed for local feature tracking. The second main module is the use of an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, a hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, it is utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object in which we formulate outlier feature detection as a binary classification problem with the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution and outperforms the most popular state-of-the-art trackers favorably in terms of robustness, efficiency and accuracy. PMID:27589769

  2. Recent Developments and Applications of Radiation/Detection Technology in Tsinghua University

    NASA Astrophysics Data System (ADS)

    Kang, Ke-Jun

    2010-03-01

    Nuclear technology applications have been very important research fields at Tsinghua University (THU) for more than 50 years. This paper describes two major directions and related projects running at THU concerning nuclear technology applications for radiation imaging and for astrophysics. Radiation imaging is a significant application of nuclear technology for all kinds of real-world needs including security inspections, anti-smuggling operations, and medicine. The current improved imaging systems give much higher quality radiation images. THU has produced accelerating tubes for both industrial and medical accelerators with energy levels ranging from 2.5 to 20 MeV. Detectors have been produced for medical and industrial imaging as well as for high energy physics experiments, such as the MRPC with fast time and position resolutions. DR and CT systems for radiation imaging have been continuously improved with new system designs and improved algorithms for image reconstruction and processing. Two important new key initiatives are the dual-energy radiography and dual-energy CT systems. Dual-energy CT imaging improves material discrimination by providing both the electron density and the atomic number distribution of scanned objects. Finally, this paper also introduces recent developments related to the hard X-ray modulation telescope (HXMT) provided by THU.

  3. Trimming algorithm of frequency modulation for CIAE-230 MeV proton superconducting synchrocyclotron model cavity

    NASA Astrophysics Data System (ADS)

    Li, Pengzhan; Zhang, Tianjue; Ji, Bin; Hou, Shigang; Guo, Juanjuan; Yin, Meng; Xing, Jiansheng; Lv, Yinlong; Guan, Fengping; Lin, Jun

    2017-01-01

    A new project, the 230 MeV proton superconducting synchrocyclotron for cancer therapy, was proposed at CIAE in 2013. A model cavity was designed to verify the frequency modulation trimming algorithm, featuring a half-wave structure and eight sets of rotating blades for 1 kHz frequency modulation. Based on the electromagnetic (EM) field distribution analysis of the model cavity, the variable capacitor is expressed as a function of time and the frequency can be written as a Maclaurin series. Curve fitting is applied to the theoretical frequency and the original simulated frequency. The second-order fit gives the best approximation, having the minimum variance. Constant equivalent inductance is considered as an important condition in the calculation. The equivalent parameters of the theoretical frequency can be obtained through this conversion. The trimming formula for the rotor blade outer radius is then found by discretization in the time domain. Simulation verification has been performed and the results show that the calculated radius minus 0.012 m yields an acceptable result. The trimming amendment in the time range of 0.328-0.4 ms helps to reduce the frequency error to 0.69% in Simulation C with an increment of 0.075 mm/0.001 ms, which is half of the error in Simulation A (constant radius in 0.328-0.4 ms). The verification confirms the feasibility of the trimming algorithm for synchrocyclotron frequency modulation.

  4. Computer Generated Holography with Intensity-Graded Patterns

    PubMed Central

    Conti, Rossella; Assayag, Osnath; de Sars, Vincent; Guillon, Marc; Emiliani, Valentina

    2016-01-01

    Computer Generated Holography achieves patterned illumination at the sample plane through phase modulation of the laser beam at the objective back aperture. This is obtained by using liquid crystal-based spatial light modulators (LC-SLMs), which modulate the spatial phase of the incident laser beam. A variety of algorithms is employed to calculate the phase modulation masks addressed to the LC-SLM. These algorithms range from simple gratings-and-lenses to generate multiple diffraction-limited spots, to iterative Fourier-transform algorithms capable of generating arbitrary illumination shapes perfectly tailored on the basis of the target contour. Applications for holographic light patterning include multi-trap optical tweezers, patterned voltage imaging and optical control of neuronal excitation using uncaging or optogenetics. These past implementations of computer generated holography used binary input profiles to generate binary light distributions at the sample plane. Here we demonstrate that using graded input sources enables the generation of intensity-graded light patterns and extends the range of application of holographic light illumination. First, we use intensity-graded holograms to compensate for the LC-SLM position-dependent diffraction efficiency or for sample fluorescence inhomogeneity. Finally, we show that intensity-graded holography can be used to equalize photo-evoked currents from cells expressing different levels of channelrhodopsin-2 (ChR2), one of the most commonly used optogenetic light-gated channels, taking into account the non-linear dependence of channel opening on incident light. PMID:27799896
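
    The iterative Fourier-transform approach mentioned above is typically a Gerchberg-Saxton-style loop. A bare-bones numpy sketch is given below, assuming a simple far-field (Fourier) relation between the SLM and sample planes; it is a generic illustration, not the exact algorithm used in the paper, and intensity grading enters simply through the graded target amplitude:

        import numpy as np

        def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
            """Compute a phase-only SLM mask whose far-field intensity approximates
            |target_amplitude|^2.  Intensity grading is carried by the target itself."""
            rng = np.random.default_rng(seed)
            phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
            for _ in range(iterations):
                # SLM plane: unit amplitude (phase-only modulator), keep the phase.
                slm_field = np.exp(1j * phase)
                # Propagate to the sample (Fourier) plane.
                sample = np.fft.fft2(slm_field)
                # Impose the desired (possibly graded) amplitude, keep the phase.
                sample = target_amplitude * np.exp(1j * np.angle(sample))
                # Back-propagate and read off the new SLM phase.
                phase = np.angle(np.fft.ifft2(sample))
            return phase

        target = np.zeros((64, 64)); target[20:30, 20:30] = 1.0   # a simple target spot
        mask = gerchberg_saxton(target)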

  5. Identification of functional modules using network topology and high-throughput data.

    PubMed

    Ulitsky, Igor; Shamir, Ron

    2007-01-26

    With the advent of systems biology, biological knowledge is often represented today by networks. These include regulatory and metabolic networks, protein-protein interaction networks, and many others. At the same time, high-throughput genomics and proteomics techniques generate very large data sets, which require sophisticated computational analysis. Usually, separate and different analysis methodologies are applied to each of the two data types. An integrated investigation of network and high-throughput information together can improve the quality of the analysis by accounting simultaneously for topological network properties alongside intrinsic features of the high-throughput data. We describe a novel algorithmic framework for this challenge. We first transform the high-throughput data into similarity values (e.g., by computing pairwise similarity of gene expression patterns from microarray data). Then, given a network of genes or proteins and similarity values between some of them, we seek connected sub-networks (or modules) that manifest high similarity. We develop algorithms for this problem and evaluate their performance on the osmotic shock response network in S. cerevisiae and on the human cell cycle network. We demonstrate that focused, biologically meaningful and relevant functional modules are obtained. In comparison with extant algorithms, our approach has higher sensitivity and higher specificity. We have demonstrated that our method can accurately identify functional modules. Hence, it carries the promise to be highly useful in the analysis of high-throughput data.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.; Yu, G.; Wang, K.

    The physical designs of new-concept reactors, which have complex structures, various materials and neutronic energy spectra, have greatly increased the requirements on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computations in reactor physics. Because of their natural parallel characteristics, CPU-FPGA architectures are often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in the numerical calculation of reactor physics through practical examples. After the designed neutron diffusion module based on the CPU-FPGA architecture achieves an 11.2× speed-up factor, it is shown to be feasible to apply this kind of heterogeneous platform to reactor physics. (authors)

  7. Numerical models analysis of energy conversion process in air-breathing laser propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Yanji; Song Junling; Cui Cunyan

    The energy source was considered as a key element in this paper to describe the energy conversion process in air-breathing laser propulsion. Some secondary factors were ignored when three independent modules (a ray transmission module, an energy source term module and a fluid dynamics module) were established by coupling the laser radiation transport equation with the fluid mechanics equations. The incident laser beam was simulated based on a ray-tracing method. The calculated results were in good agreement with those of theoretical analysis and experiments.

  8. Optimal field-splitting algorithm in intensity-modulated radiotherapy: Evaluations using head-and-neck and female pelvic IMRT cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, Xin; Kim, Yusung, E-mail: yusung-kim@uiowa.edu; Bayouth, John E.

    2013-04-01

    To develop an optimal field-splitting algorithm of minimal complexity and verify the algorithm using head-and-neck (H and N) and female pelvic intensity-modulated radiotherapy (IMRT) cases. An optimal field-splitting algorithm was developed in which a large intensity map (IM) was split into multiple sub-IMs (≥2). The algorithm reduced the total complexity by minimizing the monitor units (MU) delivered and the segment number of each sub-IM. The algorithm was verified through comparison studies with the algorithm used in a commercial treatment planning system. Seven IMRT, H and N, and female pelvic cancer cases (54 IMs) were analyzed by MU, segment numbers, and dose distributions. The optimal field-splitting algorithm was found to reduce both the total MU and the total number of segments. We found on average a 7.9 ± 11.8% and 9.6 ± 18.2% reduction in MU and segment numbers for the H and N IMRT cases, with an 11.9 ± 17.4% and 11.1 ± 13.7% reduction for the female pelvic cases. The overall percent (absolute) reductions in the numbers of MU and segments were found to be on average −9.7 ± 14.6% (−15 ± 25 MU) and −10.3 ± 16.3% (−3 ± 5), respectively. In addition, all dose distributions from the optimal field-splitting method showed improved dose distributions. The optimal field-splitting algorithm shows considerable improvements in both total MU and total segment number. The algorithm is expected to be beneficial for the radiotherapy treatment of large-field IMRT.

  9. Longitudinal density modulation and energy conversion in intense beams.

    PubMed

    Harris, J R; Neumann, J G; Tian, K; O'Shea, P G

    2007-08-01

    Density modulation of charged particle beams may occur as a consequence of deliberate action, or may occur inadvertently because of imperfections in the particle source or acceleration method. In the case of intense beams, where space charge and external focusing govern the beam dynamics, density modulation may, under some circumstances, be converted to velocity modulation, with a corresponding conversion of potential energy to kinetic energy. Whether this will occur depends on the properties of the beam and the initial modulation. This paper describes the evolution of discrete and continuous density modulations on intense beams and discusses three recent experiments related to the dynamics of density-modulated electron beams.

  10. Some practical universal noiseless coding techniques, part 3, module PSI14,K+

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.

    1991-01-01

    The algorithmic definitions, performance characterizations, and application notes for a high-performance adaptive noiseless coding module are provided. Subsets of these algorithms are currently under development in custom very large scale integration (VLSI) at three NASA centers. The generality of coding algorithms recently reported is extended. The module incorporates a powerful adaptive noiseless coder for Standard Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where smaller integers are more likely than larger ones). Coders can be specified to provide performance close to the data entropy over any desired dynamic range (of entropy) above 0.75 bit/sample. This is accomplished by adaptively choosing the best of many efficient variable-length coding options to use on each short block of data (e.g., 16 samples). All code options used for entropies above 1.5 bits/sample are 'Huffman equivalent', but they require no table lookups to implement. The coding can be performed directly on data that have been preprocessed to exhibit the characteristics of a standard source. Alternatively, a built-in predictive preprocessor can be used where applicable. This built-in preprocessor includes the familiar 1-D predictor followed by a function that maps the prediction error sequences into the desired standard form. Additionally, an external prediction can be substituted if desired. A broad range of issues dealing with the interface between the coding module and the data systems it might serve is further addressed. These issues include: multidimensional prediction, archival access, sensor noise, rate control, code rate improvements outside the module, and the optimality of certain internal code options.
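
    A simplified sketch of the kind of variable-length option family underlying such a coder is Golomb-Rice coding of non-negative integers with a per-block choice of the parameter k; this shows the general adaptive technique, not the exact PSI14,K+ option set:

        def rice_encode(value, k):
            """Golomb-Rice code: unary quotient, a '0' terminator, then k remainder bits."""
            q, r = value >> k, value & ((1 << k) - 1)
            remainder = format(r, "b").zfill(k) if k else ""
            return "1" * q + "0" + remainder

        def encode_block(samples, k_options=(0, 1, 2, 3, 4)):
            """Pick, per short block, the k that gives the fewest coded bits (adaptivity)."""
            best_k = min(k_options,
                         key=lambda k: sum(len(rice_encode(s, k)) for s in samples))
            return best_k, "".join(rice_encode(s, best_k) for s in samples)

        print(encode_block([0, 1, 3, 2, 0, 1, 4, 2]))   # a small k wins for low-entropy data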

  11. 10 CFR 431.226 - Energy conservation standards and their effective dates.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Energy conservation standards and their effective dates. 431.226 Section 431.226 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Traffic Signal Modules and Pedestrian Modules Energy...

  12. 10 CFR 431.226 - Energy conservation standards and their effective dates.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Energy conservation standards and their effective dates. 431.226 Section 431.226 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Traffic Signal Modules and Pedestrian Modules Energy...

  13. 10 CFR 431.226 - Energy conservation standards and their effective dates.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Energy conservation standards and their effective dates. 431.226 Section 431.226 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Traffic Signal Modules and Pedestrian Modules Energy...

  14. 10 CFR 431.226 - Energy conservation standards and their effective dates.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Energy conservation standards and their effective dates. 431.226 Section 431.226 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Traffic Signal Modules and Pedestrian Modules Energy...

  15. 10 CFR 431.226 - Energy conservation standards and their effective dates.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Energy conservation standards and their effective dates. 431.226 Section 431.226 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Traffic Signal Modules and Pedestrian Modules Energy...

  16. Photovoltaic pumping system - Comparative study analysis between direct and indirect coupling mode

    NASA Astrophysics Data System (ADS)

    Harrag, Abdelghani; Titraoui, Abdessalem; Bahri, Hamza; Messalti, Sabir

    2017-02-01

    In this paper, the P&O algorithm is used in order to improve the performance of a photovoltaic water pumping system in both its dynamic and static response. The efficiency of the proposed algorithm has been studied using a DC motor-pump powered by thirty-six PV modules through a DC-DC boost converter driven by a P&O MPPT controller. Comparative results between the direct and indirect coupling modes confirm that the proposed algorithm can simultaneously improve accuracy, rapidity, ripple and overshoot.
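
    A minimal sketch of one perturb-and-observe step is given below; the sign conventions, step size and names are generic assumptions rather than the exact controller of the paper:

        def perturb_and_observe(v, p, v_prev, p_prev, v_ref, step=0.5):
            """One P&O iteration: keep moving the operating voltage in the direction
            that increased power on the previous perturbation, reverse it otherwise."""
            dP, dV = p - p_prev, v - v_prev
            if dP != 0:
                if (dP > 0) == (dV > 0):
                    v_ref += step        # keep perturbing in the same direction
                else:
                    v_ref -= step        # power dropped: reverse the perturbation
            return v_ref                 # new voltage reference for the boost converter

        # Example: power rose after the last increase of V, so V_ref keeps increasing.
        print(perturb_and_observe(v=17.5, p=120.0, v_prev=17.0, p_prev=118.0, v_ref=17.5))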

  17. Discharging a DC bus capacitor of an electrical converter system

    DOEpatents

    Kajouke, Lateef A; Perisic, Milun; Ransom, Ray M

    2014-10-14

    A system and method of discharging a bus capacitor of a bidirectional matrix converter of a vehicle are presented here. The method begins by electrically shorting the AC interface of the converter after an AC energy source is disconnected from the AC interface. The method continues by arranging a plurality of switching elements of a second energy conversion module into a discharge configuration to establish an electrical current path from a first terminal of an isolation module, through an inductive element, and to a second terminal of the isolation module. The method also modulates a plurality of switching elements of a first energy conversion module, while maintaining the discharge configuration of the second energy conversion module, to at least partially discharge a DC bus capacitor.

  18. An Effective Hybrid Routing Algorithm in WSN: Ant Colony Optimization in combination with Hop Count Minimization.

    PubMed

    Jiang, Ailian; Zheng, Lihong

    2018-03-29

    Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as energy constraint of sensor nodes, network load balancing and dynamic network topology. Then we propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm has unique superiority in terms of searching for the optimal path, balancing the network load and the network topology maintenance. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation experimental results have shown that our algorithm outperforms several other WSN routing algorithms on such aspects that include the rate of convergence, the success rate in searching for global optimal solution, and the network lifetime.
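
    A sketch of the kind of probabilistic next-hop rule such hybrid ACO schemes use is shown below, with the heuristic desirability combining residual energy and hop count toward the sink; the exponents and exact weighting are assumptions, not the paper's formula:

        import random

        def choose_next_hop(current, neighbors, pheromone, energy, hops_to_sink,
                            alpha=1.0, beta=2.0):
            """Pick the next hop with probability proportional to tau^alpha * eta^beta,
            where eta rewards high residual energy and few hops to the sink."""
            weights = []
            for n in neighbors[current]:
                eta = energy[n] / (1 + hops_to_sink[n])       # heuristic desirability
                weights.append(pheromone[(current, n)] ** alpha * eta ** beta)
            return random.choices(neighbors[current], weights=weights)[0]

        def evaporate_and_deposit(pheromone, path, rho=0.1, deposit=1.0):
            """Standard ACO update: evaporate everywhere, reinforce the edges of the used path."""
            for edge in pheromone:
                pheromone[edge] *= (1 - rho)
            for edge in zip(path, path[1:]):
                pheromone[edge] = pheromone.get(edge, 0.0) + deposit

        nbrs = {"s": ["a", "b"]}
        tau = {("s", "a"): 1.0, ("s", "b"): 1.0}
        print(choose_next_hop("s", nbrs, tau, {"a": 0.9, "b": 0.4}, {"a": 2, "b": 2}))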

  19. An Effective Hybrid Routing Algorithm in WSN: Ant Colony Optimization in combination with Hop Count Minimization

    PubMed Central

    2018-01-01

    Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as energy constraint of sensor nodes, network load balancing and dynamic network topology. Then we propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm has unique superiority in terms of searching for the optimal path, balancing the network load and the network topology maintenance. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation experimental results have shown that our algorithm outperforms several other WSN routing algorithms on such aspects that include the rate of convergence, the success rate in searching for global optimal solution, and the network lifetime. PMID:29596336

  20. Design and simulation of control algorithms for stored energy and plasma current in non-inductive scenarios on NSTX-U

    NASA Astrophysics Data System (ADS)

    Boyer, Mark; Andre, Robert; Gates, David; Gerhardt, Stefan; Menard, Jonathan; Poli, Francesca

    2015-11-01

    One of the major goals of NSTX-U is to demonstrate non-inductive operation. To facilitate this and other program goals, the center stack has been upgraded and a second neutral beam line has been added with three sources aimed more tangentially to provide higher current drive efficiency and the ability to shape the current drive profile. While non-inductive start-up and ramp-up scenarios are being developed, initial non-inductive studies will likely rely on clamping the Ohmic coil current after the plasma current has been established inductively. In this work the ability to maintain control of stored energy and plasma current once the Ohmic coil has been clamped is explored. The six neutral beam sources and the mid-plane outer gap of the plasma are considered as actuators. System identification is done using TRANSP simulations in which the actuators are modulated around a reference shot. The resulting reduced model is used to design an optimal control law with anti-windup and a recently developed framework for closed loop simulations in TRANSP is used to test the control. Limitations due to actuator saturation are assessed and robustness to beam modulation, changes in the plasma density and confinement, and changes in density and temperature profile shapes are studied. Supported by US DOE contract DE-AC02-09CH11466.

  1. Nature's Energy, Module B. Fourth Grade. Pilot Form.

    ERIC Educational Resources Information Center

    Pasco County Schools, Dade City, FL.

    This booklet is one of a set of learning modules on energy for use by students and teachers in the fourth grade. This module examines man's use of fossil fuels, electricity production, and other energy sources. Included are laboratory activities and values exercises. (BT)

  2. Active optimal control strategies for increasing the efficiency of photovoltaic cells

    NASA Astrophysics Data System (ADS)

    Aljoaba, Sharif Zidan Ahmad

    Energy consumption has increased drastically during the last century. Currently, the worldwide energy consumption is about 17.4 TW and is predicted to reach 25 TW by 2035. Solar energy has emerged as one of the potential renewable energy sources. Since its first physical recognition in 1887 by Adams and Day, research in solar energy has been developing continuously. This has led to many achievements and milestones that established it as one of the most reliable and sustainable energy sources. Recently, the International Energy Agency declared that solar energy is predicted to be one of the major electricity production energy sources by 2035. Enhancing the efficiency and lifecycle of photovoltaic (PV) modules leads to significant cost reduction. Reducing the temperature of the PV module improves its efficiency and enhances its lifecycle. To better understand PV module performance, it is important to study the interaction between the output power and the temperature. A model that is capable of predicting the PV module temperature and its effects on the output power, considering the individual contribution of the solar spectrum wavelengths, significantly advances PV module designs toward higher efficiency. In this work, a thermoelectrical model is developed to predict the effects of the solar spectrum wavelengths on the PV module performance. The model is characterized and validated under real meteorological conditions, where experimental measurements of the PV module temperature and output power are shown to agree with the predicted results. The model is used to validate the concept of active optical filtering. Since this model is wavelength-based, it is used to design an active optical filter for PV applications. Applying this filter to the PV module is expected to increase the output power of the module by filtering the spectrum wavelengths. The active filter performance is optimized, where different cutoff wavelengths are used to maximize the module output power. It is predicted that if the optimized active optical filter is applied to the PV module, the module efficiency will increase by about 1%. Different technologies are considered for the physical implementation of the active optical filter.

  3. Energy-efficient algorithm for classification of states of wireless sensor network using machine learning methods

    NASA Astrophysics Data System (ADS)

    Yuldashev, M. N.; Vlasov, A. I.; Novikov, A. N.

    2018-05-01

    This paper focuses on the development of an energy-efficient algorithm for classification of states of a wireless sensor network using machine learning methods. The proposed algorithm reduces energy consumption by: 1) elimination of monitoring of parameters that do not affect the state of the sensor network, 2) reduction of communication sessions over the network (the data are transmitted only if their values can affect the state of the sensor network). The studies of the proposed algorithm have shown that at classification accuracy close to 100%, the number of communication sessions can be reduced by 80%.

  4. Scaling vectors of attoJoule per bit modulators

    NASA Astrophysics Data System (ADS)

    Sorger, Volker J.; Amin, Rubab; Khurgin, Jacob B.; Ma, Zhizhen; Dalir, Hamed; Khan, Sikandar

    2018-01-01

    Electro-optic modulation performs the conversion between the electrical and optical domain with applications in data communication for optical interconnects, but also for novel optical computing algorithms such as providing nonlinearity at the output stage of optical perceptrons in neuromorphic analog optical computing. While resembling an optical transistor, the weak light-matter interaction makes modulators 10^5 times larger than their electronic counterparts. Since the clock frequency for photonics on-chip has a power-overhead sweet spot around tens of GHz, ultrafast modulation may only be required in long-distance communication, not for short on-chip links. Hence, the search is open for power-efficient on-chip modulators beyond the solutions offered by foundries to date. Here, we show scaling vectors towards atto-Joule per bit efficient modulators on-chip as well as some experimental demonstrations of novel plasmonic modulators with sub-fJ/bit efficiencies. Our parametric study of placing different actively modulated materials into plasmonic versus photonic optical modes shows that 2D materials overcompensate their minuscule modal overlap by their unity-high index change. Furthermore, we reveal that the metal used in plasmonic-based modulators not only serves as an electrical contact, but also enables low electrical series resistances leading to near-ideal capacitors. We then discuss the first experimental demonstration of a photon-plasmon-hybrid graphene-based electro-absorption modulator on silicon. The device shows sub-1 V steep switching enabled by near-ideal electrostatics, delivering a high 0.05 dB V^-1 μm^-1 performance and requiring only 110 aJ/bit. Improving on this demonstration, we discuss a plasmonic slot-based graphene modulator design, where the polarization of the plasmonic mode aligns with graphene's in-plane dimension, and where a push-pull dual-gating scheme enables 2 dB V^-1 μm^-1 efficient modulation, allowing the device to be just 770 nm short for 3 dB small-signal modulation. Lastly, comparing the switching energy of transistors to modulators shows that modulators based on emerging materials and plasmonic-silicon hybrid integration perform on par relative to their electronic counterparts. This in turn allows for a device-enabled two orders-of-magnitude improvement of electrical-optical co-integrated networks-on-chip over electronic-only architectures. The latter opens technological opportunities in cognitive computing, dynamic data-driven applications systems, and optical analog computing engines including neuromorphic photonic computing.

  5. World Energy Projection System Plus (WEPS+): Global Activity Module

    EIA Publications

    2016-01-01

    The World Energy Projection System Plus (WEPS+) is a comprehensive, mid-term energy forecasting and policy analysis tool used by EIA. WEPS+ projects energy supply, demand, and prices by country or region, given assumptions about the state of various economies, international energy markets, and energy policies. The Global Activity Module (GLAM) provides projections of economic driver variables for use by the supply, demand, and conversion modules of WEPS+. GLAM's baseline economic projection contains the economic assumptions used in WEPS+ to help determine energy demand and supply. GLAM can also provide WEPS+ with alternative economic assumptions representing a range of uncertainty about economic growth. The resulting economic impacts of such assumptions are inputs to the remaining supply and demand modules of WEPS+.

  6. A near-infrared fluorescence-based surgical navigation system imaging software for sentinel lymph node detection

    NASA Astrophysics Data System (ADS)

    Ye, Jinzuo; Chi, Chongwei; Zhang, Shuang; Ma, Xibo; Tian, Jie

    2014-02-01

    Sentinel lymph node (SLN) in vivo detection is vital in breast cancer surgery. New near-infrared fluorescence-based surgical navigation system (SNS) imaging software, which has been developed by our research group, is presented for SLN detection surgery in this paper. The software is based on the fluorescence-based surgical navigation hardware system (SNHS) which has been developed in our lab, and is designed specifically for intraoperative imaging and postoperative data analysis. The surgical navigation imaging software consists of the following main modules: the control module, the image grabbing module, the real-time display module, the data saving module and the image processing module. Several algorithms have been designed to achieve the required performance of the software, for example an image registration algorithm based on correlation matching. Some of the key features of the software include: setting the control parameters of the SNS; acquiring, displaying and storing the intraoperative imaging data automatically in real time; and analyzing and processing the saved image data. The developed software has been used to successfully detect the SLNs in 21 cases of breast cancer patients. In the near future, we plan to improve the software performance and it will be extensively used for clinical purposes.
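
    Correlation-based matching can be illustrated by a basic normalized cross-correlation template search; the pure-numpy sketch below is a generic example of the technique, not the system's actual registration module:

        import numpy as np

        def ncc_match(image, template):
            """Slide `template` over `image` and return the (row, col) of the best
            normalized cross-correlation score, i.e. the estimated registration offset."""
            th, tw = template.shape
            t = template - template.mean()
            best, best_pos = -np.inf, (0, 0)
            for r in range(image.shape[0] - th + 1):
                for c in range(image.shape[1] - tw + 1):
                    w = image[r:r + th, c:c + tw]
                    w = w - w.mean()
                    denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
                    score = (w * t).sum() / denom if denom > 0 else 0.0
                    if score > best:
                        best, best_pos = score, (r, c)
            return best_pos

        img = np.zeros((32, 32))
        img[10:14, 20:24] = np.arange(16, dtype=float).reshape(4, 4)   # a textured patch
        print(ncc_match(img, img[10:14, 20:24]))   # -> (10, 20)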

  7. A charge- and energy-conserving implicit, electrostatic particle-in-cell algorithm on mapped computational meshes

    NASA Astrophysics Data System (ADS)

    Chacón, L.; Chen, G.; Barnes, D. C.

    2013-01-01

    We describe the extension of the recent charge- and energy-conserving one-dimensional electrostatic particle-in-cell algorithm in Ref. [G. Chen, L. Chacón, D.C. Barnes, An energy- and charge-conserving, implicit electrostatic particle-in-cell algorithm, Journal of Computational Physics 230 (2011) 7018-7036] to mapped (body-fitted) computational meshes. The approach maintains exact charge and energy conservation properties. Key to the algorithm is a hybrid push, where particle positions are updated in logical space, while velocities are updated in physical space. The effectiveness of the approach is demonstrated with a challenging numerical test case, the ion acoustic shock wave. The generalization of the approach to multiple dimensions is outlined.

  8. Modulation Recognition Algorithms for Intentional Modulation on Pulse (IMOP) Applications

    DTIC Science & Technology

    2001-12-01

    and a radar signal. In order to obtain information on communication signals, we studied the Technical and Administrative Frequency List (TAFL), a...

  9. Single Axis Flywheel IPACS @1300W, 0.8 N-m

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph; Kenny, Barbara; Kascak, Peter; Dever, Tim; Santiago, Walter

    2005-01-01

    NASA Glenn Research Center is developing flywheels for space systems. A single-axis laboratory version of an integrated power and attitude control system (IPACS) has been experimentally demonstrated. This is a significant step on the road to a flight-qualified three-axis IPACS. The presentation outlines the flywheel development process at NASA GRC, the experimental hardware and approach, the IPACS control algorithm that was formulated, and the results of the test program, and then proposes a direction for future work. GRC has made progress on flywheel module design in terms of specific energy density and capability through a design and test program resulting in three flywheel module designs. Two of the flywheels are used in the 1D-IPACS experiment with loads and power sources to simulate a satellite power system. The system response is measured in three power modes (charge, discharge, and charge reduction) while simultaneously producing a net output torque which could be used for attitude control. Finally, recommendations are made for the steps that should be taken to evolve from this laboratory demonstration to a flight-like system.

  10. Wind noise in hearing aids: I. Effect of wide dynamic range compression and modulation-based noise reduction.

    PubMed

    Chung, King

    2012-01-01

    The objectives of this study were: (1) to examine the effect of wide dynamic range compression (WDRC) and modulation-based noise reduction (NR) algorithms on wind noise levels at the hearing aid output; and (2) to derive effective strategies for clinicians and engineers to reduce wind noise in hearing aids. Three digital hearing aids were fitted to KEMAR. The noise output was recorded at flow velocities of 0, 4.5, 9.0, and 13.5 m/s in a wind tunnel as the KEMAR head was turned from 0° to 360°. Flow noise levels were compared between the 1:1 linear and 3:1 WDRC conditions, and between NR-activated and NR-deactivated conditions when the hearing aid was programmed to the directional and omnidirectional modes. The results showed that: (1) WDRC increased low-level noise and reduced high-level noise; and (2) different noise reduction algorithms provided different amounts of wind noise reduction in different microphone modes, frequency regions, flow velocities, and head angles. Wind noise can be reduced by decreasing the gain for low-level inputs, increasing the compression ratio for high-level inputs, and activating modulation-based noise reduction algorithms.

  11. A digital clock recovery algorithm based on chromatic dispersion and polarization mode dispersion feedback dual phase detection for coherent optical transmission systems

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Xin, Xiangjun; Zhang, Lijia; Wang, Fu; Zhang, Qi

    2018-02-01

    A new feedback symbol timing recovery technique using timing estimation joint equalization is proposed for digital receivers with a sampling rate of two samples/symbol or higher. Different from traditional methods, the clock recovery algorithm in this paper adopts another algorithm that distinguishes the phases of adjacent symbols, so as to accurately estimate the timing offset based on adjacent signals with the same phase. The addition of a module for eliminating phase modulation interference before timing estimation further reduces the variance, thus resulting in a smoothed timing estimate. The Mean Square Error (MSE) and Bit Error Rate (BER) of the resulting timing estimate are simulated, showing satisfactory estimation performance. The obtained clock tone performance is satisfactory for MQAM modulation formats and roll-off factors (ROF) close to 0. In the back-to-back system, when ROF = 0, the maximum MSE obtained with the proposed approach reaches 0.0125. After 100 km fiber transmission, the BER decreases to 10^-3 with ROF = 0 and OSNR = 11 dB. As the ROF increases, the MSE and BER performance improves.

  12. Power system security enhancement through direct non-disruptive load control

    NASA Astrophysics Data System (ADS)

    Ramanathan, Badri Narayanan

    The transition to a competitive market structure raises significant concerns regarding reliability of the power grid. A need to build tools for security assessment that produce operating limit boundaries for both static and dynamic contingencies is recognized. Besides, an increase in overall uncertainty in operating conditions makes corrective actions at times ineffective leaving the system vulnerable to instability. The tools that are in place for stability enhancement are mostly corrective and suffer from lack of robustness to operating condition changes. They often pose serious coordination challenges. With deregulation, there have also been ownership and responsibility issues associated with stability controls. However, the changing utility business model and the developments in enabling technologies such as two-way communication, metering, and control open up several new possibilities for power system security enhancement. This research proposes preventive modulation of selected loads through direct control for power system security enhancement. Two main contributions of this research are the following: development of an analysis framework and two conceptually different analysis approaches for load modulation to enhance oscillatory stability, and the development and study of algorithms for real-time modulation of thermostatic loads. The underlying analysis framework is based on the Structured Singular Value (SSV or mu) theory. Based on the above framework, two fundamentally different approaches towards analysis of the amount of load modulation for desired stability performance have been developed. Both the approaches have been tested on two different test systems: CIGRE Nordic test system and an equivalent of the Western Electric Coordinating Council test system. This research also develops algorithms for real-time modulation of thermostatic loads that use the results of the analysis. In line with some recent load management programs executed by utilities, two different algorithms based on dynamic programming are proposed for air-conditioner loads, while a decision-tree based algorithm is proposed for water-heater loads. An optimization framework has been developed employing the above algorithms. Monte Carlo simulations have been performed using this framework with the objective of studying the impact of different parameters and constraints on the effectiveness as well as the effect of control. The conclusions drawn from this research strongly advocate direct load control for stability enhancement from the perspectives of robustness and coordination, as well as economic viability and the developments towards availability of the institutional framework for load participation in providing system reliability services.

  13. Preliminary analyses of space radiation protection for lunar base surface systems

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Wilson, John W.; Townsend, Lawrence W.

    1989-01-01

    Radiation shielding analyses are performed for candidate lunar base habitation modules. The study primarily addresses potential hazards due to contributions from galactic cosmic rays. The NASA Langley Research Center's high-energy nucleon and heavy-ion transport codes are used to compute the propagation of radiation through conventional and regolith shield materials. Computed values of linear energy transfer are converted to biological dose equivalent using quality factors established by the International Commission on Radiological Protection. Spectral fluxes of heavy charged particles and corresponding dosimetric quantities are computed for a series of thicknesses in various shield media and are used as an input database for algorithms pertaining to specific shielded geometries. Dosimetric results are presented as isodose contour maps of shielded configuration interiors. The dose predictions indicate that shielding requirements are substantial, and an abbreviated uncertainty analysis shows that better definition of the space radiation environment as well as improvement in nuclear interaction cross-section data can greatly increase the accuracy of shield requirement predictions.

  14. Semi-Active Control of Precast RC Columns under Seismic Action

    NASA Astrophysics Data System (ADS)

    Caterino, Nicola; Spizzuoco, Mariacristina

    2017-10-01

    This work is inspired by the idea of dissipating seismic energy at the base of prefabricated RC columns via semi-active (SA) variable dampers exploiting base rocking. A wide numerical campaign was performed to investigate the seismic behaviour of a precast RC column with a variable base restraint. The latter is based on the combined use of a hinge, elastic springs, and magnetorheological (MR) dampers remotely controlled according to the instantaneous response of the structural component. The MR devices are driven by a SA control algorithm purposely written to modulate the dissipative capability so as to reduce the base bending moment without causing excessive displacement at the top. The proposed strategy proves to be very promising, since the relaxation of the base restraint, which reduces the base moment demand, is accompanied by a large increase in the energy dissipated through rocking, which can even reduce the top displacement with respect to the “fixed base rotation” condition.

  15. Rapid code acquisition algorithms employing PN matched filters

    NASA Technical Reports Server (NTRS)

    Su, Yu T.

    1988-01-01

    The performance of four algorithms using pseudonoise matched filters (PNMFs), for direct-sequence spread-spectrum systems, is analyzed. They are: parallel search with fixed-dwell detector (PL-FDD), parallel search with sequential detector (PL-SD), parallel-serial search with fixed-dwell detector (PS-FDD), and parallel-serial search with sequential detector (PS-SD). The operating characteristic of each detector and the mean acquisition time of each algorithm are derived. All the algorithms are studied in conjunction with the noncoherent integration technique, which enables the system to operate in the presence of data modulation. Several previous proposals using PNMFs are seen as special cases of the present algorithms.

  16. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visual light communications

    NASA Astrophysics Data System (ADS)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and heavily degrade communication performance. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm outperforms the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.

  17. Investigation of energy management strategies for photovoltaic systems - A predictive control algorithm

    NASA Technical Reports Server (NTRS)

    Cull, R. C.; Eltimsahy, A. H.

    1983-01-01

    The present investigation is concerned with the formulation of energy management strategies for stand-alone photovoltaic (PV) systems, taking into account a basic control algorithm for a possible predictive (and adaptive) controller. The control system governs the flow of energy in the system according to the amount of energy available, and predicts the appropriate control set-points based on the available energy (insolation) by using an appropriate system model. Aspects of adaptation to the conditions of the system are also considered. Attention is given to a statistical analysis technique, the analysis inputs, the analysis procedure, and details regarding the basic control algorithm.

  18. Parameter identification for nonlinear aerodynamic systems

    NASA Technical Reports Server (NTRS)

    Pearson, Allan E.

    1990-01-01

    Parameter identification for nonlinear aerodynamic systems is examined. It is presumed that the underlying model can be arranged into an input/output (I/O) differential operator equation of a generic form. The estimation algorithm is especially efficient since the equation error can be integrated exactly, given any I/O pair, to obtain an algebraic function of the parameters. The algorithm for parameter identification was extended to the order determination problem for linear differential systems. The degeneracy in a least squares estimate caused by feedback was addressed. A method of frequency analysis for determining the transfer function G(j omega) from transient I/O data was formulated using complex-valued Fourier-based modulating functions, in contrast with the trigonometric modulating functions used for the parameter estimation problem. A simulation result of applying the algorithm is given under noise-free conditions for a system with a low-pass transfer function.

  19. Multitarget detection algorithm for automotive FMCW radar

    NASA Astrophysics Data System (ADS)

    Hyun, Eugin; Oh, Woo-Jin; Lee, Jong-Hun

    2012-06-01

    Today, 77 GHz FMCW (Frequency Modulated Continuous Wave) radar has strong advantages for range and velocity detection in automotive applications. However, FMCW radar produces ghost targets and missed targets in multi-target situations. In this paper, in order to resolve these limitations, we propose an effective pairing algorithm, which consists of two steps. In the proposed method, a waveform with different slopes in two periods is used. In the first pairing step, all combinations of range and velocity are obtained in each of the two wave periods. In the second pairing step, using the results of the first step, fine range and velocity are detected. In that step, we propose a range-velocity windowing technique to compensate for the non-ideal beat-frequency characteristic that arises from the non-linearity of the RF module. Experimental results show that the performance of the proposed algorithm is improved compared with that of the typical method.
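
    The record does not quote the pairing equations. For context, the sketch below shows how a single target's range and velocity follow from the up-chirp and down-chirp beat frequencies of a triangular FMCW waveform, under one common sign convention; all numbers are illustrative assumptions, not values from the paper.

```python
C = 3.0e8  # speed of light, m/s

def range_velocity_from_beats(f_beat_up, f_beat_down, slope_hz_per_s, f_carrier):
    """Solve range and radial velocity from triangular-FMCW beat frequencies.

    Convention used here (one of several in the literature):
        f_beat_up   = 2*slope*R/C - f_doppler
        f_beat_down = 2*slope*R/C + f_doppler
    so the sum isolates range and the difference isolates Doppler.
    """
    r = C * (f_beat_up + f_beat_down) / (4.0 * slope_hz_per_s)
    f_doppler = (f_beat_down - f_beat_up) / 2.0
    v = f_doppler * C / (2.0 * f_carrier)   # positive toward the radar
    return r, v

# Illustrative 77 GHz example: 150 MHz swept in 1 ms -> slope = 1.5e11 Hz/s
r, v = range_velocity_from_beats(f_beat_up=48.7e3, f_beat_down=51.3e3,
                                 slope_hz_per_s=1.5e11, f_carrier=77e9)
print(f"range ~ {r:.1f} m, velocity ~ {v:.1f} m/s")
```

    With several targets, every up-chirp beat can pair with every down-chirp beat, which is exactly the ghost-target ambiguity the record's two-period, windowed pairing algorithm is designed to resolve.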

  20. Vision-based surface defect inspection for thick steel plates

    NASA Astrophysics Data System (ADS)

    Yun, Jong Pil; Kim, Dongseob; Kim, KyuHwan; Lee, Sang Jun; Park, Chang Hyun; Kim, Sang Woo

    2017-05-01

    There are several types of steel products, such as wire rods, cold-rolled coils, hot-rolled coils, thick plates, and electrical sheets. Surface stains on cold-rolled coils are considered defects; however, surface stains on thick plates are not. A conventional optical structure is composed of a camera and a lighting module. A defect inspection system that uses a dual lighting structure to distinguish uneven defects from color changes caused by surface noise is proposed. In addition, an image processing algorithm that can be used to detect defects is presented in this paper. The algorithm consists of a Gabor filter that detects the switching pattern and a binarization method that extracts the shape of the defect. The optics module and detection algorithm, optimized using a simulator, were installed at a real plant, and experimental results on thick steel plate images obtained from the steel production line show the effectiveness of the proposed method.

  1. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  2. Pennsylvania's Energy Curriculum for the Secondary Grades: Informational Module.

    ERIC Educational Resources Information Center

    Pennsylvania State Dept. of Education, Harrisburg.

    Pennsylvania's Department of Education provides eight energy education modules that cover different secondary school disciplines. This introductory publication is designed to accompany each of the eight subject-area modules. It contains background information for teachers on topics ranging from energy's definition and past uses to nuclear waste…

  3. Breadth-First Search-Based Single-Phase Algorithms for Bridge Detection in Wireless Sensor Networks

    PubMed Central

    Akram, Vahid Khalilpour; Dagdeviren, Orhan

    2013-01-01

    Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take precautions. Since sensor nodes are battery-powered, services running on nodes should consume little energy. In this paper, we propose energy-efficient and distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and are integrated with the Breadth-First Search (BFS) algorithm, which is a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, designed to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms, prove their correctness, and analyze their message, time, space and computational complexities. To evaluate practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms provide lower resource consumption, with energy savings of up to 5.5 times. PMID:23845930
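
    The record's algorithms are distributed, single-phase, and BFS-integrated; as a point of reference only, the sketch below implements the classic centralized DFS-based bridge-finding algorithm (Tarjan's low-link method), which identifies the same critical edges on a small example graph.

```python
def find_bridges(n, edges):
    """Classic centralized bridge detection via DFS low-link values.

    An edge (u, v) is a bridge if no back edge from v's subtree reaches u or
    any ancestor of u, i.e. low[v] > disc[u]. This is the textbook algorithm,
    not the distributed BFS-based algorithms proposed in the paper.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    disc = [-1] * n      # discovery times
    low = [0] * n        # lowest discovery time reachable from the subtree
    bridges, timer = [], [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if disc[v] == -1:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridges.append((u, v))
            else:
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return bridges

# Two triangles joined by a single link: that link is the only bridge.
print(find_bridges(6, [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]))
# -> [(2, 3)]
```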

  4. Hybrid three-dimensional and support vector machine approach for automatic vehicle tracking and classification using a single camera

    NASA Astrophysics Data System (ADS)

    Kachach, Redouane; Cañas, José María

    2016-05-01

    Using video in traffic monitoring is one of the most active research domains in the computer vision community. TrafficMonitor, a system that employs a hybrid approach for automatic vehicle tracking and classification on highways using a simple stationary calibrated camera, is presented. The proposed system consists of three modules: vehicle detection, vehicle tracking, and vehicle classification. Moving vehicles are detected by an enhanced Gaussian mixture model background estimation algorithm. The design includes a technique to resolve the occlusion problem by combining a two-dimensional proximity tracking algorithm with the Kanade-Lucas-Tomasi feature tracking algorithm. The last module classifies the identified shapes into five vehicle categories: motorcycle, car, van, bus, and truck, by using three-dimensional templates and an algorithm based on histogram of oriented gradients and a support vector machine classifier. Several experiments have been performed using both real and simulated traffic in order to validate the system. The experiments were conducted on the GRAM-RTM dataset and on a real video dataset that is made publicly available as part of this work.

  5. Periodic modulation-based stochastic resonance algorithm applied to quantitative analysis for weak liquid chromatography-mass spectrometry signal of granisetron in plasma

    NASA Astrophysics Data System (ADS)

    Xiang, Suyun; Wang, Wei; Xiang, Bingren; Deng, Haishan; Xie, Shaofei

    2007-05-01

    The periodic modulation-based stochastic resonance algorithm (PSRA) was used to amplify and detect the weak liquid chromatography-mass spectrometry (LC-MS) signal of granisetron in plasma. In the algorithm, stochastic resonance (SR) is achieved by introducing an external periodic force into the nonlinear system. The optimization of parameters was carried out in two steps, giving attention to both the signal-to-noise ratio (S/N) and the peak shape of the output signal. By applying PSRA with the optimized parameters, the signal-to-noise ratio of the LC-MS peak was enhanced significantly, and the distorted peak shape that often appears in the traditional stochastic resonance algorithm was corrected by the added periodic force. Using the signals enhanced by PSRA, the method lowered the limit of detection (LOD) and limit of quantification (LOQ) of granisetron in plasma from 0.05 and 0.2 ng/mL, respectively, to 0.01 and 0.02 ng/mL, and exhibited good linearity, accuracy and precision, ensuring accurate determination of the target analyte.
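
    The record states that SR is produced by adding an external periodic force to a nonlinear system. The sketch below integrates the standard overdamped bistable (double-well) model commonly used to illustrate this idea; all parameter values are illustrative assumptions, not the optimized values from the paper.

```python
import numpy as np

def bistable_sr(signal, dt=1e-3, a=1.0, b=1.0, amp=0.3, freq=5.0):
    """Overdamped bistable stochastic-resonance system with a periodic force.

    Integrates dx/dt = a*x - b*x**3 + s(t) + amp*sin(2*pi*freq*t) with a
    simple Euler scheme. The added sinusoid plays the role of the external
    periodic modulation described in the record; a, b, amp and freq are
    illustrative parameters that would normally be tuned in two steps.
    """
    x = np.zeros(len(signal))
    t = np.arange(len(signal)) * dt
    for i in range(1, len(signal)):
        drift = a * x[i - 1] - b * x[i - 1] ** 3
        force = signal[i - 1] + amp * np.sin(2 * np.pi * freq * t[i - 1])
        x[i] = x[i - 1] + dt * (drift + force)
    return x

# Weak periodic signal buried in noise, passed through the bistable system
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1e-3)
weak = 0.05 * np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.standard_normal(len(t))
out = bistable_sr(weak)
print(out[:5])
```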

  6. Investigation on Multiple Algorithms for Multi-Objective Optimization of Gear Box

    NASA Astrophysics Data System (ADS)

    Ananthapadmanabhan, R.; Babu, S. Arun; Hareendranath, KR; Krishnamohan, C.; Krishnapillai, S.; A, Krishnan

    2016-09-01

    The field of gear design is an extremely important area in engineering. In this work, a spur gear reduction unit is considered. A review of relevant literature in the area of gear design indicates that compact gearbox design involves a complicated engineering analysis. This work deals with the simultaneous optimization of the power and dimensions of a gearbox, which are of conflicting nature. The focus is on developing a design space based on module, pinion teeth and face-width by using MATLAB. The feasible points are obtained through different multi-objective algorithms using various constraints obtained from the literature. Attention has been devoted to constraints such as critical scoring criterion number, flash temperature, minimum film thickness, involute interference and contact ratio. The outputs from algorithms such as the genetic algorithm, fmincon (constrained nonlinear minimization) and NSGA-II are compared to generate the best result. Hence, this is a much more precise approach for obtaining practical values of the module, pinion teeth and face-width for a minimum centre distance and a maximum power transmission for any given material.

  7. Modulation aware cluster size optimisation in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Sriram Naik, M.; Kumar, Vinay

    2017-07-01

    Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge in WSNs is energy efficiency. In this paper, we have focused on energy minimisation with the help of cluster size optimisation, taking the modulation effect into account when the nodes are not able to communicate using a baseband communication technique. Cluster size optimisation is an important technique to improve the performance of WSNs. It provides improvements in energy efficiency, network scalability, network lifetime and latency. We have proposed an analytical expression for cluster size optimisation using the traditional sensing model of nodes for a square sensing field, with consideration of modulation effects. Energy minimisation can be achieved by changing the modulation scheme, such as BPSK, QPSK, 16-QAM or 64-QAM, so we consider the effect of different modulation techniques on cluster formation. The nodes in the sensing field are randomly and uniformly deployed. It is also observed that placing the base station at the centre of the field allows only a small number of modulation schemes to work in an energy-efficient manner, whereas placing the base station at the corner of the sensing field allows a large number of modulation schemes to work in an energy-efficient manner.

  8. Energy spectrum control for modulated proton beams.

    PubMed

    Hsi, Wen C; Moyers, Michael F; Nichiporov, Dmitri; Anferov, Vladimir; Wolanski, Mark; Allgower, Chris E; Farr, Jonathan B; Mascia, Anthony E; Schreuder, Andries N

    2009-06-01

    In proton therapy delivered with range-modulated beams, the energy spectrum of protons entering the delivery nozzle can affect the dose uniformity within the target region and the dose gradient around its periphery. For a cyclotron with a fixed extraction energy, a range shifter is used to change the energy, but this produces increasing energy spreads for decreasing energies. This study investigated the magnitude of the effects of different energy spreads on dose uniformity and distal edge dose gradient and determined the limits for controlling the incident spectrum. A multilayer Faraday cup (MLFC) was calibrated against depth dose curves measured in water for nonmodulated beams with various incident spectra. Depth dose curves were measured in a water phantom and in a multilayer ionization chamber detector for modulated beams using different incident energy spreads. Some nozzle entrance energy spectra can produce unacceptable dose nonuniformities of up to +/-21% over the modulated region. For modulated beams and small beam ranges, the width of the distal penumbra can vary by a factor of 2.5. When the energy spread was controlled within the defined limits, the dose nonuniformity was less than +/-3%. To facilitate understanding of the results, the data were compared to measured and Monte Carlo calculated data from a variable extraction energy synchrotron, which has a narrow spectrum at all energies. Dose uniformity is only maintained within prescription limits when the energy spread is controlled. At low energies, a large spread can be beneficial for extending the energy range over which a single range modulator device can be used. An MLFC can be used as part of a feedback loop to provide specified energy spreads for different energies.

  9. The Power of Flexibility: Autonomous Agents That Conserve Energy in Commercial Buildings

    NASA Astrophysics Data System (ADS)

    Kwak, Jun-young

    Agent-based systems for energy conservation are now a growing area of research in multiagent systems, with applications ranging from energy management and control on the smart grid, to energy conservation in residential buildings, to energy generation and dynamic negotiations in distributed rural communities. Contributing to this area, my thesis presents new agent-based models and algorithms aiming to conserve energy in commercial buildings. More specifically, my thesis provides three sets of algorithmic contributions. First, I provide online predictive scheduling algorithms to handle massive numbers of meeting/event scheduling requests considering flexibility, a novel concept for capturing generic user constraints while optimizing the desired objective. Second, I present a novel BM-MDP (Bounded-parameter Multi-objective Markov Decision Problem) model and robust algorithms for multi-objective optimization under uncertainty at both planning and execution time. The BM-MDP model and its robust algorithms are useful in (re)scheduling events to achieve energy efficiency in the presence of uncertainty over users' preferences. Third, when multiple users contribute to energy savings, the fair division of credit for such savings, to incentivize users for their energy-saving activities, arises as an important question. I appeal to cooperative game theory, and specifically to the concept of the Shapley value, for this fair division. Unfortunately, scaling up the Shapley value computation is a major hindrance in practice. Therefore, I present novel approximation algorithms to efficiently compute the Shapley value based on sampling and partitions and to speed up the characteristic function computation. These new models have not only advanced the state of the art in multiagent algorithms, but have been successfully integrated within agents dedicated to energy efficiency: SAVES, TESLA and THINC. SAVES focuses on the day-to-day energy consumption of individuals and groups in commercial buildings by reactively suggesting energy-conserving alternatives. TESLA takes a long-range planning perspective and optimizes the overall energy consumption of a large number of group events or meetings together. THINC provides an end-to-end integration within a single agent of energy-efficient scheduling, rescheduling and credit allocation. While SAVES, TESLA and THINC differ in their scope and applicability, they demonstrate the utility of agent-based systems in actually reducing energy consumption in commercial buildings. I evaluate my algorithms and agents using extensive analysis on data from over 110,000 real meetings/events at multiple educational buildings, including the main libraries at the University of Southern California. I also provide results from simulations and real-world experiments, clearly demonstrating the power of agent technology to assist human users in saving energy in commercial buildings.
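
    The thesis summary mentions approximating the Shapley value by sampling. A minimal permutation-sampling sketch is given below with a toy characteristic function, since the actual energy-savings characteristic function used in THINC is not described in the record; the function name and values are illustrative assumptions.

```python
import random

def shapley_by_sampling(players, char_fn, num_samples=2000, seed=0):
    """Monte Carlo estimate of Shapley values by sampling permutations.

    For each random ordering of players, each player's marginal contribution
    to the coalition of players preceding it is recorded; the Shapley value
    is the average marginal contribution over all sampled orderings.
    """
    rng = random.Random(seed)
    totals = {p: 0.0 for p in players}
    for _ in range(num_samples):
        order = players[:]
        rng.shuffle(order)
        coalition = set()
        prev_value = char_fn(coalition)
        for p in order:
            coalition.add(p)
            value = char_fn(coalition)
            totals[p] += value - prev_value
            prev_value = value
    return {p: totals[p] / num_samples for p in players}

# Toy characteristic function: savings grow superlinearly when users cooperate
def toy_savings(coalition):
    return len(coalition) ** 1.5

print(shapley_by_sampling(["alice", "bob", "carol"], toy_savings))
```

    With a symmetric characteristic function like this toy one, all three estimates converge to the same value (about 1.73), which is a convenient sanity check on the sampler.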

  10. Algorithm Development for the Multi-Fluid Plasma Model

    DTIC Science & Technology

    2011-05-30

    [13] L. Chacón, D.C. Barnes, D.A. Knoll, and G.H. Miley. An implicit energy-conservative 2D Fokker-Planck algorithm. Journal of Computational Physics, 157(2):618–653, 2000. [14] L. Chacón, D.C. Barnes, D.A. Knoll, and G.H. Miley. An implicit energy-conservative 2D Fokker-Planck algorithm - II.

  11. An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks

    PubMed Central

    Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed

    2016-01-01

    Body Area Networks (BANs) consist of various sensors which gather patients' vital signs and deliver them to doctors. One of the most significant challenges faced is the design of an energy-efficient next hop selection algorithm that satisfies Quality of Service (QoS) requirements for different healthcare applications. In this paper, a novel efficient next hop selection algorithm for multi-hop BANs is proposed. This algorithm jointly uses the minimum hop count and a link cost function in each node to choose the best next hop node. The link cost function includes the residual energy, free buffer size, and link reliability of the neighboring nodes, and is used to balance the energy consumption and to satisfy QoS requirements in terms of end-to-end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvements in terms of energy consumption, number of packets forwarded, end-to-end delay and packet delivery ratio compared to the existing routing protocol. PMID:26771586
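
    The record describes a next-hop rule combining hop count with a link cost built from residual energy, free buffer size and link reliability, but does not give the cost expression. The sketch below uses an assumed weighted form purely for illustration; the weights, field names and values are hypothetical.

```python
def select_next_hop(neighbors, w_energy=0.4, w_buffer=0.3, w_rel=0.3):
    """Pick the next hop among neighbors that lie on shortest paths to the sink.

    Each neighbor is a dict with 'hops' (hop count to sink), 'energy'
    (residual energy, 0..1), 'buffer' (free buffer fraction, 0..1) and
    'reliability' (link delivery probability, 0..1). The weighted cost below
    is an illustrative assumption, not the paper's exact formula: higher
    energy, more free buffer and better reliability all lower the cost.
    """
    min_hops = min(n["hops"] for n in neighbors)
    candidates = [n for n in neighbors if n["hops"] == min_hops]

    def cost(n):
        return (w_energy / max(n["energy"], 1e-6)
                + w_buffer / max(n["buffer"], 1e-6)
                + w_rel / max(n["reliability"], 1e-6))

    return min(candidates, key=cost)

neighbors = [
    {"id": "A", "hops": 2, "energy": 0.9, "buffer": 0.6, "reliability": 0.95},
    {"id": "B", "hops": 2, "energy": 0.4, "buffer": 0.9, "reliability": 0.80},
    {"id": "C", "hops": 3, "energy": 1.0, "buffer": 1.0, "reliability": 0.99},
]
print(select_next_hop(neighbors)["id"])   # "A": fewest hops, lowest overall cost
```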

  12. An analytical particle mover for the charge- and energy-conserving, nonlinearly implicit, electrostatic particle-in-cell algorithm

    NASA Astrophysics Data System (ADS)

    Chen, G.; Chacón, L.

    2013-08-01

    We propose a 1D analytical particle mover for the recent charge- and energy-conserving electrostatic particle-in-cell (PIC) algorithm in Ref. [G. Chen, L. Chacón, D.C. Barnes, An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm, Journal of Computational Physics 230 (2011) 7018-7036]. The approach computes particle orbits exactly for a given piece-wise linear electric field. The resulting PIC algorithm maintains the exact charge and energy conservation properties of the original algorithm, but with improved performance (both in efficiency and robustness against the number of particles and timestep). We demonstrate the advantageous properties of the scheme with a challenging multiscale numerical test case, the ion acoustic wave. Using the analytical mover as a reference, we demonstrate that the choice of error estimator in the Crank-Nicolson mover has significant impact on the overall performance of the implicit PIC algorithm. The generalization of the approach to the multi-dimensional case is outlined, based on a novel and simple charge conserving interpolation scheme.

  13. An Automated Energy Detection Algorithm Based on Consecutive Mean Excision

    DTIC Science & Technology

    2018-01-01

    Subject terms: RF spectrum, detection threshold algorithm, consecutive mean excision, rank order filter, statistical ... Contents excerpts: Median; Rank Order Filter (ROF); Crest Factor (CF); Statistical Summary; Algorithm; Conclusion; References. Related reference: energy detection algorithm based on morphological filter processing with a semi-disk structure. Adelphi (MD): Army Research Laboratory (US); 2018 Jan.

  14. High throughput light absorber discovery, Part 1: An algorithm for automated tauc analysis

    DOE PAGES

    Suram, Santosh K.; Newhouse, Paul F.; Gregoire, John M.

    2016-09-23

    High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high throughput pipeline, automated data processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. Here, the applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by the automated algorithm for 60 optical spectra.
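
    The record describes automated Tauc analysis without implementation detail. The sketch below shows the basic manual procedure for a direct-allowed transition, fitting the linear region of (alpha*hv)^2 versus photon energy and extrapolating to the energy axis; the synthetic spectrum and fit window are assumptions, and the automatic window selection is precisely the step the record's algorithm automates.

```python
import numpy as np

def tauc_direct_gap(energy_ev, alpha, fit_window):
    """Estimate a direct-allowed band gap from a Tauc plot.

    Forms (alpha * h*nu)**2 against photon energy, fits a line over the
    user-chosen `fit_window` (E_low, E_high) covering the absorption edge,
    and returns the x-intercept as the band gap estimate.
    """
    tauc = (alpha * energy_ev) ** 2
    lo, hi = fit_window
    mask = (energy_ev >= lo) & (energy_ev <= hi)
    slope, intercept = np.polyfit(energy_ev[mask], tauc[mask], 1)
    return -intercept / slope

# Synthetic spectrum with a 2.1 eV direct gap: alpha ~ sqrt(E - Eg) above the gap
e = np.linspace(1.5, 3.0, 300)
alpha = np.where(e > 2.1, np.sqrt(np.clip(e - 2.1, 0, None)) / e, 0.0) + 1e-4
print(round(tauc_direct_gap(e, alpha, fit_window=(2.2, 2.6)), 2))   # ~2.1
```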

  15. A dynamic data source selection system for smartwatch platform.

    PubMed

    Nemati, Ebrahim; Sideris, Konstantinos; Kalantarian, Haik; Sarrafzadeh, Majid

    2016-08-01

    A novel data source selection algorithm is proposed for ambulatory activity tracking of elderly people. The algorithm introduces the concept of dynamic switching between the data collection modules (a smartwatch and a smartphone) to improve accuracy and battery life using contextual information. We show that by making offloading decisions as a function of activity, the proposed algorithm extends battery life by 7 hours and improves accuracy by 5% compared to the baseline of previous work.

  16. Sensitivity Simulation of Compressed Sensing Based Electronic Warfare Receiver Using Orthogonal Matching Pursuit Algorithm

    DTIC Science & Technology

    2016-02-01

    The orthogonal matching pursuit (OMP) algorithm is used to process CS data. The insufficient sparsity of the signal adversely affects the signal detection probability for ... with equal probability. The scheme was proposed [2] for image processing using a single pixel camera, where the field of view was masked by a grid ... modulation.
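
    Since only fragments of this record survive, a compact reference implementation of orthogonal matching pursuit, the algorithm the record names for processing compressed sensing data, is sketched below; the matrix dimensions and sparsity level are illustrative assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x.

    Greedily picks the column most correlated with the residual, then
    re-solves a least-squares problem restricted to the selected support.
    """
    support = []
    residual = y.copy()
    x_hat = np.zeros(A.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256)) / np.sqrt(64)   # compressive sensing matrix
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [1.0, -0.7, 0.4]          # 3-sparse signal
x_rec = omp(A, A @ x_true, k=3)
print(np.allclose(x_rec, x_true, atol=1e-6))        # expected True (noise-free case)
```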

  17. Solving Assembly Sequence Planning using Angle Modulated Simulated Kalman Filter

    NASA Astrophysics Data System (ADS)

    Mustapa, Ainizar; Yusof, Zulkifli Md.; Adam, Asrul; Muhammad, Badaruddin; Ibrahim, Zuwairie

    2018-03-01

    This paper presents an implementation of the Simulated Kalman Filter (SKF) algorithm for optimizing an Assembly Sequence Planning (ASP) problem. The SKF search strategy consists of three simple steps: predict, measure, and estimate. The main objective of ASP is to determine the sequence of component installation so as to shorten assembly time or save assembly costs. Initially, a permutation sequence is generated to represent each agent. Each agent is then subjected to a precedence matrix constraint to produce a feasible assembly sequence. Next, the Angle Modulated SKF (AMSKF) is proposed for solving the ASP problem. The main idea of the angle modulated approach to solving a combinatorial optimization problem is to use a function, g(x), to create a continuous signal. The performance of the proposed AMSKF is compared against previous work that solved ASP using BGSA, BPSO, and MSPSO. Using a case study of ASP, the results show that AMSKF outperformed all the other algorithms in obtaining the best solution.
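
    The record explains that the angle-modulated variant maps a small set of continuous parameters to a discrete solution through a generating function g(x). The sketch below uses the four-parameter trigonometric generating function commonly used by angle-modulated binary optimizers (an assumption, since the paper's exact g(x) is not quoted) and decodes a bit string by thresholding; an ASP solver would add a further step mapping bits to a feasible assembly sequence.

```python
import math

def angle_modulated_bits(params, n_bits):
    """Decode a bit string from four continuous parameters (a, b, c, d).

    g(x) = sin(2*pi*(x - a)*b*cos(2*pi*(x - a)*c)) + d is sampled at
    x = 0, 1, ..., n_bits-1 and thresholded at zero. An optimizer such as
    SKF then searches the 4-dimensional (a, b, c, d) space instead of the
    n_bits-dimensional binary space.
    """
    a, b, c, d = params
    bits = []
    for x in range(n_bits):
        g = math.sin(2 * math.pi * (x - a) * b * math.cos(2 * math.pi * (x - a) * c)) + d
        bits.append(1 if g > 0 else 0)
    return bits

# Example: one candidate solution decoded into a 16-bit string
print(angle_modulated_bits((0.1, 0.25, 0.05, 0.0), 16))
```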

  18. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of the conventional digital camera, and propose a method of realizing high dynamic range imaging (HDRI) from a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed new method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, and it enables the camera pixels always to have reasonable exposure intensity by DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement the HDRI on different objects.

  19. A new modulated Hebbian learning rule--biologically plausible method for local computation of a principal subspace.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2003-08-01

    This paper presents one possible implementation of a transformation that performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the one analyzed. The idea implemented in this paper is a generalization of the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally--a feature which is usually considered desirable from the biological point of view. Compared to some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network to part of the retinal circuit is also presented.
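
    The record builds on Hebb-type rules and introduces a Modulated Hebb-Oja variant. As background only, the sketch below implements plain Oja's rule, the classical stabilized Hebbian rule for extracting the first principal component; it is not the MH or MHO rule itself, and the learning rate and data are illustrative.

```python
import numpy as np

def oja_first_pc(data, lr=0.01, epochs=50, seed=0):
    """Estimate the first principal component with Oja's learning rule.

    For each sample x: y = w . x, then w <- w + lr * y * (x - y * w).
    The subtractive term keeps |w| bounded, unlike a pure Hebbian update.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(data.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in data:
            y = w @ x
            w += lr * y * (x - y * w)
    return w / np.linalg.norm(w)

# Correlated 2-D data whose leading principal direction is roughly (1, 1)/sqrt(2)
rng = np.random.default_rng(1)
z = rng.standard_normal(500)
data = np.column_stack([z + 0.1 * rng.standard_normal(500),
                        z + 0.1 * rng.standard_normal(500)])
print(oja_first_pc(data))   # close to [0.707, 0.707] up to sign
```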

  20. Thin-film-based CdTe photovoltaic module characterization: measurements and energy prediction improvement.

    PubMed

    Lay-Ekuakille, A; Arnesano, A; Vergallo, P

    2013-01-01

    Photovoltaic characterization is a topic of major interest in the field of renewable energy. Monocrystalline and polycrystalline modules are the most widely used and hence the most widely characterized, since many laboratories have data on them. Conversely, cadmium telluride (CdTe) thin-film modules are, in some circumstances, difficult to use for energy prediction. This work covers outdoor testing of photovoltaic modules, in particular CdTe ones. The scope is to obtain temperature coefficients that best predict the energy production. A First Solar (K-275) module was used for the purposes of this research. Outdoor characterizations were performed at the Department of Innovation Engineering, University of Salento, Lecce, Italy; the location of Lecce represents a typical site in southern Italy. The module was exposed outdoors and tested under clear sky as well as cloudy sky conditions. During testing, the global inclined irradiance varied between 0 and 1500 W/m². About 37,000 I-V characteristics were acquired, allowing temperature coefficients to be derived as a function of irradiance and ambient temperature. The module was characterized by measuring the full temperature-irradiance matrix over the range from 50 to 1300 W/m² and from -1 to 40 °C, from October 2011 to February 2012. Afterwards, the module energy output under real conditions was calculated with the "matrix method" of SUPSI-ISAAC, and the results were compared with the five months of energy output data for the same module measured with the outdoor energy yield facility in Lecce.

  1. Thin-film-based CdTe photovoltaic module characterization: Measurements and energy prediction improvement

    NASA Astrophysics Data System (ADS)

    Lay-Ekuakille, A.; Arnesano, A.; Vergallo, P.

    2013-01-01

    Photovoltaic characterization is a topic of major interest in the field of renewable energy. Monocrystalline and polycrystalline modules are the most widely used and hence the most widely characterized, since many laboratories have data on them. Conversely, cadmium telluride (CdTe) thin-film modules are, in some circumstances, difficult to use for energy prediction. This work covers outdoor testing of photovoltaic modules, in particular CdTe ones. The scope is to obtain temperature coefficients that best predict the energy production. A First Solar (K-275) module was used for the purposes of this research. Outdoor characterizations were performed at the Department of Innovation Engineering, University of Salento, Lecce, Italy; the location of Lecce represents a typical site in southern Italy. The module was exposed outdoors and tested under clear sky as well as cloudy sky conditions. During testing, the global inclined irradiance varied between 0 and 1500 W/m². About 37 000 I-V characteristics were acquired, allowing temperature coefficients to be derived as a function of irradiance and ambient temperature. The module was characterized by measuring the full temperature-irradiance matrix over the range from 50 to 1300 W/m² and from -1 to 40 °C, from October 2011 to February 2012. Afterwards, the module energy output under real conditions was calculated with the "matrix method" of SUPSI-ISAAC, and the results were compared with the five months of energy output data for the same module measured with the outdoor energy yield facility in Lecce.

  2. Energy Conservation Curriculum for Secondary and Post-Secondary Students. Module 6: Hot Water Heating Conservation Opportunities.

    ERIC Educational Resources Information Center

    Navarro Coll., Corsicana, TX.

    This module is the sixth in a series of eleven modules in an energy conservation curriculum for secondary and postsecondary vocational students. It is designed for use by itself or as part of a sequence of four modules on understanding utilities (see also modules 3, 5, and 7). The objective of this module is to train students in the recognition,…

  3. Validation Methodology to Allow Simulated Peak Reduction and Energy Performance Analysis of Residential Building Envelope with Phase Change Materials: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCM) represent a potential technology to reduce peak loads and HVAC energy consumption in residential buildings. This paper summarizes NREL efforts to obtain accurate energy simulations when PCMs are modeled in residential buildings: the overall methodology to verify and validate Conduction Finite Difference (CondFD) and PCM algorithms in EnergyPlus is presented in this study. It also shows preliminary results of three residential building enclosure technologies containing PCM: PCM-enhanced insulation, PCM impregnated drywall and thin PCM layers. The results are compared based on predicted peak reduction and energy savings using two algorithms in EnergyPlus: the PCM and Conduction Finite Difference (CondFD) algorithms.

  4. Calibrated Noise Measurements with Induced Receiver Gain Fluctuations

    NASA Technical Reports Server (NTRS)

    Racette, Paul; Walker, David; Gu, Dazhen; Rajola, Marco; Spevacek, Ashly

    2011-01-01

    The lack of well-developed techniques for modeling changing statistical moments in our observations has stymied the application of stochastic process theory in science and engineering. These limitations were encountered when modeling the performance of radiometer calibration architectures and algorithms in the presence of nonstationary receiver fluctuations. Analyses of measured signals have traditionally been limited to a single measurement series, whereas in a radiometer that samples a set of noise references, the data collection can be treated as an ensemble set of measurements of the receiver state. Noise Assisted Data Analysis (NADA) is a growing field of study with significant potential for aiding the understanding and modeling of nonstationary processes. Typically, NADA entails adding noise to a signal to produce an ensemble set on which statistical analysis is performed. Alternatively, as in radiometric measurements, mixing a signal with calibrated noise provides, through the calibration process, the means to detect deviations from the stationary assumption and thereby a measurement tool to characterize the signal's nonstationary properties. Data sets comprised of calibrated noise measurements have been limited to those collected with naturally occurring fluctuations in the radiometer receiver. To examine the application of NADA using calibrated noise, a Receiver Gain Modulation Circuit (RGMC) was designed and built to modulate the gain of a radiometer receiver using an external signal. In 2010, an RGMC was installed and operated at the National Institute of Standards and Technology (NIST) using their Noise Figure Radiometer (NFRad) and national standard noise references. The data collected are the first known set of calibrated noise measurements from a receiver with an externally modulated gain. As an initial step, sinusoidal and step-function signals were used to modulate the receiver gain, to evaluate the circuit characteristics and to study the performance of a variety of calibration algorithms. The receiver noise temperature and time-bandwidth product of the NFRad are calculated from the data. Statistical analysis using temporal-dependent calibration algorithms reveals that the naturally occurring fluctuations in the receiver are stationary over long intervals (hundreds of seconds); however, the receiver exhibits local nonstationarity over the interval during which one set of reference measurements is collected. A variety of calibration algorithms have been applied to the data to assess their performance with the gain fluctuation signals. This presentation will describe the RGMC, the experiment design, and a comparative analysis of calibration algorithms.

  5. Validation of energy-weighted algorithm for radiation portal monitor using plastic scintillator.

    PubMed

    Lee, Hyun Cheol; Shin, Wook-Geun; Park, Hyo Jun; Yoo, Do Hyun; Choi, Chang-Il; Park, Chang-Su; Kim, Hong-Suk; Min, Chul Hee

    2016-01-01

    To prevent the illicit trafficking of radionuclides, radiation portal monitor (RPM) systems employing plastic scintillators have been used at ports and airports. However, their poor energy resolution makes the discrimination of radioactive materials inaccurate. In this study, an energy-weighted algorithm was validated for identifying (133)Ba, (22)Na, (137)Cs, and (60)Co using a plastic scintillator. The Compton edges of the energy spectra were converted to peaks based on the algorithm. The peaks show a maximum error of 6% relative to the theoretical Compton edge.

  6. Results of using the NSTX-U Plasma Control System for scenario development

    NASA Astrophysics Data System (ADS)

    Boyer, M. D.; Battaglia, D. J.; Gates, D. A.; Gerhardt, S.; Menard, J.; Mueller, D.; Myers, C. E.; Ferron, J.; Sabbagh, S.; NSTX-U Team

    2016-10-01

    To best use the new capabilities of NSTX-U (e.g., higher toroidal field and additional, more distributed heating and current drive sources) and to achieve the operational goals of the program, major upgrades to the Plasma Control System have been made. These include improvements to vertical control, real-time equilibrium reconstruction, and plasma boundary shape control and the addition of flexible algorithms for beam modulation and gas injection to control the upgraded actuators in real-time, enabling their use in algorithms for stored energy and profile control. Control system commissioning activities have so far focused on vertical position and shape control. The upgraded controllers have been used to explore the vertical stability limits in inner wall limited and diverted discharges, and control of X-point and strike point locations has been demonstrated and is routinely used. A method for controlling the mid-plane inner gap, a challenge for STs, has also been added to improve reproducible control of diverted discharges. A supervisory shutdown handling algorithm has also been commissioned to ramp the plasma down and safely turn off actuators after an event such as loss of vertical control. Use of the upgrades has contributed to achieving 1MA, 0.65T scenarios with greater than 1s pulse length. Work supported by U.S. D.O.E. Contract No. DE-AC02-09CH11466.

  7. Improved argument-FFT frequency offset estimation for QPSK coherent optical Systems

    NASA Astrophysics Data System (ADS)

    Han, Jilong; Li, Wei; Yuan, Zhilin; Li, Haitao; Huang, Liyan; Hu, Qianggao

    2016-02-01

    A frequency offset estimation (FOE) algorithm based on the fast Fourier transform (FFT) of the signal's argument is investigated, which does not require removing the modulated data phase. In this paper, we analyze the flaw of the argument-FFT algorithm and propose a combined FOE algorithm, in which the absolute value of the frequency offset (FO) is accurately calculated by the argument-FFT algorithm with a relatively large number of samples, and the sign of the FO is determined by an FFT-based interpolation discrete Fourier transform (DFT) algorithm with a relatively small number of samples. Compared with previous algorithms based on the argument-FFT, the proposed one has low complexity and can still work effectively with a relatively small number of samples.

  8. Implementation and performance of shutterless uncooled micro-bolometer cameras

    NASA Astrophysics Data System (ADS)

    Das, J.; de Gaspari, D.; Cornet, P.; Deroo, P.; Vermeiren, J.; Merken, P.

    2015-06-01

    A shutterless algorithm is implemented in the Xenics LWIR thermal cameras and modules. Based on a calibration set and a global temperature coefficient, the optimal non-uniformity correction is calculated on board the camera. The limited resources in the camera require a compact algorithm, hence the efficiency of the coding is important. The performance of the shutterless algorithm is studied by comparing the residual non-uniformity (RNU) and signal-to-noise ratio (SNR) between the shutterless and shuttered correction algorithms. From this comparison we conclude that the shutterless correction is only slightly less performant than the standard shuttered algorithm, making it very interesting for thermal infrared applications where small weight and size, and continuous operation, are important.

  9. A Modularity-Based Method Reveals Mixed Modules from Chemical-Gene Heterogeneous Network

    PubMed Central

    Song, Jianglong; Tang, Shihuan; Liu, Xi; Gao, Yibo; Yang, Hongjun; Lu, Peng

    2015-01-01

    For a multicomponent therapy, a molecular network is essential to uncover its specific mode of action from a holistic perspective. The molecular system of a Traditional Chinese Medicine (TCM) formula can be represented by a 2-class heterogeneous network (2-HN), which typically includes chemical similarities, chemical-target interactions and gene interactions. An important premise of uncovering the molecular mechanism is to identify mixed modules from the complex chemical-gene heterogeneous network of a TCM formula. We thus proposed a novel method (MixMod) based on mixed modularity to detect accurate mixed modules from 2-HNs. First, we compared MixMod with the Clauset-Newman-Moore algorithm (CNM), the Markov Cluster algorithm (MCL), Infomap and Louvain on benchmark 2-HNs with known module structure. Results showed that MixMod was superior to the other methods when 2-HNs had promiscuous module structure. These methods were then tested on a real drug-target network, in which 88 disease clusters were regarded as real modules. MixMod could identify the most accurate mixed modules from the drug-target 2-HN (normalized mutual information 0.62 and classification accuracy 0.4524). Finally, MixMod was applied to the 2-HN of Buchang naoxintong capsule (BNC) and detected 49 mixed modules. Using enrichment analysis, we investigated five mixed modules that contained the primary constituents of BNC intestinal absorption liquid. In fact, the findings of in vitro experiments using BNC intestinal absorption liquid were found to accord closely with the preceding analysis. Therefore, MixMod is an effective method to detect accurate mixed modules from chemical-gene heterogeneous networks and further uncover the molecular mechanism of multicomponent therapies, especially TCM formulae. PMID:25927435
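
    The record evaluates module detection by a mixed-modularity criterion. As background, the sketch below computes the standard Newman-Girvan modularity of a hard partition of a single-class undirected network, which is the quantity that mixed modularity generalizes to 2-class heterogeneous networks; the example graph is illustrative.

```python
import numpy as np

def modularity(adj, communities):
    """Newman-Girvan modularity Q of a hard partition of an undirected graph.

    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j),
    i.e. observed minus expected within-community edge weight.
    """
    adj = np.asarray(adj, dtype=float)
    k = adj.sum(axis=1)                 # degrees
    two_m = adj.sum()                   # equals 2m for an undirected adjacency matrix
    q = 0.0
    for i in range(len(adj)):
        for j in range(len(adj)):
            if communities[i] == communities[j]:
                q += adj[i, j] - k[i] * k[j] / two_m
    return q / two_m

# Two triangles joined by one edge, labelled as two communities
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
print(round(modularity(A, [0, 0, 0, 1, 1, 1]), 3))   # ~0.357
```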

  10. Towards a Comprehensive Computational Simulation System for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Shih, Ming-Hsin

    1994-01-01

    The objective of this work is to develop algorithms associated with a comprehensive computational simulation system for turbomachinery flow fields. This development is accomplished in a modular fashion. The modules include grid generation, visualization, network, simulation, toolbox, and flow modules. An interactive grid generation module is customized to facilitate the grid generation process associated with complicated turbomachinery configurations. With its user-friendly graphical user interface, the user may interactively manipulate the default settings to obtain a quality grid within a fraction of the time usually required for building a grid about the same geometry with a general-purpose grid generation code. Non-Uniform Rational B-Spline formulations are utilized in the algorithm to maintain geometry fidelity while redistributing grid points on the solid surfaces. The Bezier curve formulation is used to allow interactive construction of inner boundaries and interactive point distribution. Cascade surfaces are transformed from three-dimensional surfaces of revolution into two-dimensional parametric planes for easy manipulation. Such a transformation allows these manipulated planar grids to be mapped back to surfaces of revolution by any generatrix definition. A sophisticated visualization module is developed to allow visualization of both the grid and the flow solution, steady or unsteady. A network module is built to allow data transfer in a heterogeneous environment. A flow module is integrated into this system, using an existing turbomachinery flow code. A simulation module is developed to combine the network, flow, and visualization modules to achieve near real-time flow simulation about turbomachinery geometries. A toolbox module is developed to support the overall task. A batch version of the grid generation module is developed to allow portability and has been extended to allow dynamic grid generation for pitch-changing turbomachinery configurations. Various applications with different characteristics are presented to demonstrate the success of this system.

  11. [Research on automatic external defibrillator based on DSP].

    PubMed

    Jing, Jun; Ding, Jingyan; Zhang, Wei; Hong, Wenxue

    2012-10-01

    Electrical defibrillation is the most effective way to treat ventricular tachycardia (VT) and ventricular fibrillation (VF). An automatic external defibrillator based on a DSP is introduced in this paper. The whole design consists of the signal collection module, the microprocessor control module, the display module, the defibrillation module, and the automatic recognition algorithm for VF and non-VF rhythms. This automatic external defibrillator achieves real-time ECG signal acquisition, synchronous ECG waveform display, data transfer to a USB disk, and automatic defibrillation when a shockable rhythm appears.

  12. Optical interconnect for large-scale systems

    NASA Astrophysics Data System (ADS)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  13. Engineering evaluation of a sodium hydroxide thermal energy storage module

    NASA Technical Reports Server (NTRS)

    Perdue, D. G.; Gordon, L. H.

    1980-01-01

    An engineering evaluation of thermal energy storage prototypes was performed in order to assess the development status of latent heat storage media. The testing and evaluation of a prototype sodium hydroxide module is described. This module stored off-peak electrical energy as heat for later use in meeting domestic hot water needs.

  14. Evaluation of a new neutron energy spectrum unfolding code based on an Adaptive Neuro-Fuzzy Inference System (ANFIS).

    PubMed

    Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman

    2018-01-17

    The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system. The ANFIS algorithm uses the advantages of both fuzzy inference systems and artificial neural networks to improve the effectiveness of algorithms in various applications such as modeling, control and classification. The neutron pulse height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from simulations performed with the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly (the value in each bin was generated randomly, and each generated energy spectrum was then normalized). The randomly generated neutron energy spectra were used as the output data of the developed ANFIS computational code in the training step. To calculate the neutron energy spectrum using conventional methods, an inverse problem with an approximately singular response matrix (with a determinant close to zero) must be solved. Solving the inverse problem with conventional methods unfolds the neutron energy spectrum with low accuracy. Applying iterative algorithms to such a problem, or using intelligent algorithms (which avoid solving the inverse problem directly), is therefore usually preferred for unfolding the energy spectrum; avoiding the inverse problem is the main reason for developing intelligent algorithms like ANFIS for unfolding neutron energy spectra. In the present study, the unfolded neutron energy spectra of 252Cf and 241Am-9Be neutron sources obtained using the developed computational code were found to be in excellent agreement with the reference data. The unfolded energy spectra obtained using ANFIS were also more accurate than the results reported from calculations performed using artificial neural networks in previously published papers.
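
    The record motivates ANFIS by the ill-conditioning of the unfolding inverse problem. The sketch below illustrates that point on a toy response matrix, comparing a naive inversion with a Tikhonov-regularized solution; it is an illustration of the conventional approach the record contrasts with, not the ANFIS method, and the matrix, spectrum and regularization strength are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "response matrix": broad, overlapping detector responses make it nearly
# singular, which is the situation the record describes.
n = 40
centers = np.linspace(0, 1, n)
R = np.exp(-((centers[:, None] - centers[None, :]) ** 2) / (2 * 0.15 ** 2))

true_spectrum = (np.exp(-((centers - 0.3) ** 2) / 0.01)
                 + 0.5 * np.exp(-((centers - 0.7) ** 2) / 0.02))
measured = R @ true_spectrum + 1e-3 * rng.standard_normal(n)   # slightly noisy data

naive = np.linalg.solve(R, measured)                 # noise is hugely amplified
lam = 1e-3                                           # Tikhonov regularization strength
tikhonov = np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ measured)

print("condition number:", np.linalg.cond(R))
print("naive error:     ", np.linalg.norm(naive - true_spectrum))
print("regularized error:", np.linalg.norm(tikhonov - true_spectrum))
```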

  15. Multi-agent coordination algorithms for control of distributed energy resources in smart grids

    NASA Astrophysics Data System (ADS)

    Cortes, Andres

    Sustainable energy is a top priority for researchers these days, since electricity and transportation are pillars of modern society. Integration of clean energy technologies such as wind, solar, and plug-in electric vehicles (PEVs) is a major engineering challenge in operation and management of power systems. This is due to the uncertain nature of renewable energy technologies and the large amount of extra load that PEVs would add to the power grid. Given the networked structure of a power system, multi-agent control and optimization strategies are natural approaches to address the various problems of interest for the safe and reliable operation of the power grid. The distributed computation in multi-agent algorithms addresses three problems at the same time: i) it allows for the handling of problems with millions of variables that a single processor cannot compute, ii) it affords electricity customers a degree of independence and privacy by not requiring any usage information, and iii) it is robust to localized failures in the communication network, being able to solve problems by simply neglecting the failing section of the system. We propose various algorithms to coordinate storage, generation, and demand resources in a power grid using multi-agent computation and decentralized decision making. First, we introduce a hierarchical vehicle-one-grid (V1G) algorithm for coordination of PEVs under usage constraints, where energy only flows from the grid into the batteries of PEVs. We then present a hierarchical vehicle-to-grid (V2G) algorithm for PEV coordination that takes into consideration line capacity constraints in the distribution grid, and where energy flows both ways, from the grid into the batteries, and from the batteries to the grid. Next, we develop a greedy-like hierarchical algorithm for management of demand response events with on/off loads. Finally, we introduce distributed algorithms for the optimal control of distributed energy resources, i.e., generation and storage in a microgrid. The algorithms we present are provably correct and tested in simulation. Each algorithm is assumed to work on a particular network topology, and simulation studies are carried out in order to demonstrate their convergence properties to a desired solution.
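
    A generic building block behind such multi-agent schemes is local consensus averaging, sketched below. The line topology, step size and initial setpoints are assumptions for illustration; this is not the dissertation's V1G/V2G or demand-response algorithm.

      import numpy as np

      # Minimal consensus-averaging sketch: each agent repeatedly moves toward the
      # states of its neighbors using only local communication -- a generic
      # multi-agent coordination primitive, not the paper's specific algorithms.
      neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # assumed line topology
      x = np.array([10.0, 4.0, 6.0, 0.0])                  # e.g. local power setpoints (kW)
      step = 0.3                                           # < 1/(max node degree) for stability

      for _ in range(200):
          x_new = x.copy()
          for i, nbrs in neighbors.items():
              x_new[i] += step * sum(x[j] - x[i] for j in nbrs)
          x = x_new

      print(x)   # all agents converge to the network-wide average (5.0)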

  16. Extracting atmospheric turbulence and aerosol characteristics from passive imagery

    NASA Astrophysics Data System (ADS)

    Reinhardt, Colin N.; Wayne, D.; McBryde, K.; Cauble, G.

    2013-09-01

    Obtaining accurate, precise and timely information about the local atmospheric turbulence and extinction conditions and aerosol/particulate content remains a difficult problem with incomplete solutions. It has important applications in areas such as optical and IR free-space communications, imaging systems performance, and the propagation of directed energy. The capability to utilize passive imaging data to extract parameters characterizing atmospheric turbulence and aerosol/particulate conditions would represent a valuable addition to the current piecemeal toolset for atmospheric sensing. Our research investigates an application of fundamental results from optical turbulence theory and aerosol extinction theory combined with recent advances in image-quality-metrics (IQM) and image-quality-assessment (IQA) methods. We have developed an algorithm which extracts important parameters used for characterizing atmospheric turbulence and extinction along the propagation channel, such as the refractive-index structure parameter C2n, the Fried atmospheric coherence width r0, and the atmospheric extinction coefficient βext, from passive image data. We will analyze the algorithm performance using simulations based on modeling with turbulence modulation transfer functions. An experimental field campaign was organized and data were collected from passive imaging through turbulence of Siemens star resolution targets over several short littoral paths in Point Loma, San Diego, under conditions of various turbulence intensities. We present initial results of the algorithm's effectiveness using this field data and compare against measurements taken concurrently with other standard atmospheric characterization equipment. We also discuss some of the challenges encountered with the algorithm, tasks currently in progress, and approaches planned for improving the performance in the near future.
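
    For reference, the sketch below shows the standard relations connecting the extracted quantities: the Fried coherence width r0 computed from C2n for a plane wave over a constant-Cn2 horizontal path, and the path transmission obtained from βext via Beer-Lambert attenuation. The wavelength, path length and parameter values are assumed for illustration.

      import numpy as np

      # Standard relations between the extracted parameters (illustrative values).
      wavelength = 1.55e-6          # m (assumed)
      path_length = 2_000.0         # m (assumed short littoral path)
      cn2 = 1e-14                   # m^(-2/3), typical near-ground turbulence
      beta_ext = 0.2e-3             # 1/m, assumed aerosol extinction coefficient

      k = 2 * np.pi / wavelength
      r0 = (0.423 * k**2 * cn2 * path_length) ** (-3.0 / 5.0)   # Fried parameter, plane wave
      transmission = np.exp(-beta_ext * path_length)            # Beer-Lambert attenuation

      print(f"r0 = {r0 * 100:.1f} cm, path transmission = {transmission:.2f}")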

  17. Algorithm for Determination of Orion Ascent Abort Mode Achievability

    NASA Technical Reports Server (NTRS)

    Tedesco, Mark B.

    2011-01-01

    For human spaceflight missions, a launch vehicle failure poses the challenge of returning the crew safely to earth through environments that are often much more stressful than the nominal mission. Manned spaceflight vehicles require continuous abort capability throughout the ascent trajectory to protect the crew in the event of a failure of the launch vehicle. To provide continuous abort coverage during the ascent trajectory, different types of Orion abort modes have been developed. If a launch vehicle failure occurs, the crew must be able to quickly and accurately determine the appropriate abort mode to execute. Early in the ascent, while the Launch Abort System (LAS) is attached, abort mode selection is trivial, and any failures will result in a LAS abort. For failures after LAS jettison, the Service Module (SM) effectors are employed to perform abort maneuvers. Several different SM abort mode options are available depending on the current vehicle location and energy state. During this region of flight the selection of the abort mode that maximizes the survivability of the crew becomes non-trivial. To provide the most accurate and timely information to the crew and the onboard abort decision logic, on-board algorithms have been developed to propagate the abort trajectories based on the current launch vehicle performance and to predict the current abort capability of the Orion vehicle. This paper will provide an overview of the algorithm architecture for determining abort achievability as well as the scalar integration scheme that makes the onboard computation possible. Extension of the algorithm to assessing abort coverage impacts from Orion design modifications and launch vehicle trajectory modifications is also presented.
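
    The sketch below illustrates the kind of fixed-step propagation such an onboard algorithm relies on, using a fourth-order Runge-Kutta integrator on a point-mass state with constant gravity and a crude drag term. All constants and the flat-Earth model are assumptions; this is not the Orion abort algorithm itself.

      import numpy as np

      # Illustrative fixed-step RK4 propagator for a point-mass trajectory.
      g = 9.81          # m/s^2
      beta = 2.5e-4     # 1/m, assumed lumped drag parameter

      def deriv(state):
          x, y, vx, vy = state
          v = np.hypot(vx, vy)
          return np.array([vx, vy, -beta * v * vx, -g - beta * v * vy])

      def rk4_step(state, dt):
          k1 = deriv(state)
          k2 = deriv(state + 0.5 * dt * k1)
          k3 = deriv(state + 0.5 * dt * k2)
          k4 = deriv(state + dt * k3)
          return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      state = np.array([0.0, 50_000.0, 1_500.0, 800.0])   # x, y, vx, vy (hypothetical abort state)
      dt = 0.5
      while state[1] > 0:                                  # propagate until the altitude reaches zero
          state = rk4_step(state, dt)
      print(f"predicted downrange distance: {state[0] / 1000:.1f} km")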

  18. Learning to Control Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Subramanian, Devika

    2004-01-01

    Advanced life support systems have many interacting processes and limited resources. Controlling and optimizing advanced life support systems presents unique challenges. In particular, advanced life support systems are nonlinear coupled dynamical systems and it is difficult for humans to take all interactions into account to design an effective control strategy. In this project, we developed several reinforcement learning controllers that actively explore the space of possible control strategies, guided by rewards from a user-specified long-term objective function. We evaluated these controllers using a discrete event simulation of an advanced life support system. This simulation, called BioSim, designed by NASA scientists David Kortenkamp and Scott Bell, has multiple interacting life support modules including crew, food production, air revitalization, water recovery, solid waste incineration and power. They are implemented in a consumer/producer relationship in which certain modules produce resources that are consumed by other modules. Stores hold resources between modules. Control of this simulation is via adjusting flows of resources between modules and into/out of stores. We developed adaptive algorithms that control the flow of resources in BioSim. Our learning algorithms discovered several ingenious strategies for maximizing mission length by controlling the air and water recycling systems as well as crop planting schedules. By exploiting non-linearities in the overall system dynamics, the learned controllers easily outperformed controllers written by human experts. In sum, we accomplished three goals. We (1) developed foundations for learning models of coupled dynamical systems by active exploration of the state space, (2) developed and tested algorithms that learn to efficiently control air and water recycling processes as well as crop scheduling in BioSim, and (3) developed an understanding of the role of machine learning in designing control systems for advanced life support.
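
    The sketch below illustrates the core reinforcement learning update (tabular Q-learning) on a toy store-level control problem; BioSim itself and the controllers developed in the project are not reproduced here.

      import numpy as np

      # Minimal tabular Q-learning sketch for a toy resource-flow problem: the agent
      # picks a flow adjustment (action) for a discretized store level (state) and is
      # rewarded for keeping the store near a target level.
      n_states, n_actions = 11, 3          # store level 0..10; actions: -1, 0, +1 flow
      Q = np.zeros((n_states, n_actions))
      alpha, gamma, eps = 0.1, 0.95, 0.1
      rng = np.random.default_rng(1)

      def step(s, a):
          s_next = int(np.clip(s + (a - 1), 0, n_states - 1))
          reward = -abs(s_next - 5)        # keep the store near level 5
          return s_next, reward

      s = 0
      for _ in range(20_000):
          a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
          s_next, r = step(s, a)
          Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
          s = s_next

      print(np.argmax(Q, axis=1))          # greedy action per state steers the store toward level 5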

  19. Effect of Fourier transform on the streaming in quantum lattice gas algorithms

    NASA Astrophysics Data System (ADS)

    Oganesov, Armen; Vahala, George; Vahala, Linda; Soe, Min

    2018-04-01

    All our previous quantum lattice gas algorithms for nonlinear physics have approximated the kinetic energy operator by streaming sequences to neighboring lattice sites. Here, the kinetic energy can be treated to all orders by Fourier transforming the kinetic energy operator with interlaced Dirac-based unitary collision operators. Benchmarking against exact solutions for the 1D nonlinear Schrodinger equation shows an extended range of parameters (soliton speeds and amplitudes) over the Dirac-based near-lattice-site streaming quantum algorithm.
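
    For comparison, the conventional split-step Fourier treatment of the 1D nonlinear Schrodinger equation is sketched below: the kinetic operator is applied exactly in Fourier space, the same idea the paper folds into its quantum lattice gas algorithm. The grid, time step and sech soliton initial condition are assumptions.

      import numpy as np

      # Split-step Fourier solver for  i psi_t = -1/2 psi_xx - |psi|^2 psi.
      # The kinetic (streaming) operator is applied exactly in Fourier space.
      n, L = 512, 40.0
      x = np.linspace(-L / 2, L / 2, n, endpoint=False)
      k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
      dt, n_steps = 1e-3, 5000

      psi = (1.0 / np.cosh(x)).astype(complex)     # fundamental bright soliton

      for _ in range(n_steps):
          psi *= np.exp(0.5j * dt * np.abs(psi) ** 2)                        # half nonlinear step
          psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))   # exact kinetic step
          psi *= np.exp(0.5j * dt * np.abs(psi) ** 2)                        # half nonlinear step

      print(np.max(np.abs(psi)))   # stays close to 1.0: the soliton profile is preserved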

  20. VLSI design of an RSA encryption/decryption chip using systolic array based architecture

    NASA Astrophysics Data System (ADS)

    Sun, Chi-Chia; Lin, Bor-Shing; Jan, Gene Eu; Lin, Jheng-Yi

    2016-09-01

    This article presents the VLSI design of a configurable RSA public key cryptosystem supporting 512-bit, 1024-bit and 2048-bit operands, based on the Montgomery algorithm, that achieves clock cycle counts comparable to current relevant works but with a smaller die size. We use the binary method for modular exponentiation and adopt the Montgomery algorithm for modular multiplication to simplify computational complexity, which, together with the systolic array concept for the circuit design, effectively lowers the die size. The main architecture of the chip consists of four functional blocks, namely the input/output modules, registers module, arithmetic module and control module. We applied the systolic array concept to design the RSA encryption/decryption chip using the VHDL hardware description language and verified it using the TSMC/CIC 0.35 μm 1P4M technology. The die area of the 2048-bit RSA chip without the DFT is 3.9 × 3.9 mm² (4.58 × 4.58 mm² with DFT). Its average baud rate can reach 10.84 kbps under a 100 MHz clock.
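
    A software sketch of the two arithmetic ideas the chip combines, the binary (square-and-multiply) exponentiation method and Montgomery multiplication, is given below; Python integers stand in for the 512- to 2048-bit hardware operands, and the systolic-array mapping is not modeled.

      # Binary modular exponentiation built on Montgomery multiplication.
      def mont_mul(a, b, n, r_bits, n_prime):
          """Montgomery product a*b*R^-1 mod n, with R = 2**r_bits and n odd."""
          r_mask = (1 << r_bits) - 1
          t = a * b
          m = (t * n_prime) & r_mask          # m = t * (-n^-1) mod R
          u = (t + m * n) >> r_bits           # exactly divisible by R
          return u - n if u >= n else u

      def mod_exp(base, exp, n):
          """Left-to-right square-and-multiply using Montgomery multiplication."""
          r_bits = n.bit_length()
          r = 1 << r_bits
          n_prime = (-pow(n, -1, r)) % r      # needs Python 3.8+ for modular inverse via pow
          base_m = (base * r) % n             # operand in Montgomery form
          acc = r % n                         # Montgomery form of 1
          for bit in bin(exp)[2:]:
              acc = mont_mul(acc, acc, n, r_bits, n_prime)           # square
              if bit == "1":
                  acc = mont_mul(acc, base_m, n, r_bits, n_prime)    # multiply
          return mont_mul(acc, 1, n, r_bits, n_prime)                # leave Montgomery form

      # Tiny toy check (real RSA moduli are 512-2048 bits):
      assert mod_exp(7, 65537, 3233) == pow(7, 65537, 3233)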

  1. Manycast routing, modulation level and spectrum assignment over elastic optical networks

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Zhao, Yang; Chen, Xue; Wang, Lei; Zhang, Min; Zhang, Jie; Ji, Yuefeng; Wang, Huitao; Wang, Taili

    2017-07-01

    Manycast is a point-to-multipoint transmission framework that requires a subset of destination nodes to be successfully reached. It is particularly applicable for dealing with large amounts of data simultaneously in bandwidth-hungry, dynamic and cloud-based applications. As traffic in these applications increases rapidly, elastic optical networks (EONs) may be relied on to achieve high-throughput manycast. With their finer spectrum granularity, EONs allow flexible access to the network spectrum and can efficiently provide demands with exactly the spectrum resources they require. In this paper, we focus on the manycast routing, modulation level and spectrum assignment (MA-RMLSA) problem in EONs. Both EON planning with static manycast traffic and EON provisioning with dynamic manycast traffic are investigated. An integer linear programming (ILP) model is formulated to solve the MA-RMLSA problem in the static manycast scenario. A corresponding heuristic, the manycast routing, modulation level and spectrum assignment genetic algorithm (MA-RMLSA-GA), is then proposed for both the static and dynamic manycast scenarios. The MA-RMLSA-GA jointly optimizes destination node selection, routing light-tree constitution, modulation level allocation and spectrum resource assignment to achieve an effective improvement in network performance. Simulation results reveal that the MA-RMLSA strategies offered by MA-RMLSA-GA differ only slightly from the optimal solutions provided by the ILP model in the static scenario. Moreover, the results demonstrate that MA-RMLSA-GA realizes a highly efficient MA-RMLSA strategy with the lowest blocking probability in the dynamic scenario compared with benchmark algorithms.

  2. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (< 0.2 g/cm³). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: Error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. Algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking, this method can be developed into a clinical tool that may be able to help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.

  3. Calculations of Electron Inelastic Mean Free Paths. XI. Data for Liquid Water for Energies from 50 eV to 30 keV

    PubMed Central

    Shinotsuka, H.; Da, B.; Tanuma, S.; Yoshikawa, H.; Powell, C. J.; Penn, D. R.

    2017-01-01

    We calculated electron inelastic mean free paths (IMFPs) for liquid water from its optical energy-loss function (ELF) for electron energies from 50 eV to 30 keV. These calculations were made with the relativistic full Penn algorithm (FPA) that has been used for previous IMFP and electron stopping-power calculations for many elemental solids. We also calculated IMFPs of water with three additional algorithms: the relativistic single-pole approximation (SPA), the relativistic simplified SPA, and the relativistic extended Mermin method. These calculations were made using the same optical ELF in order to assess any differences of the IMFPs arising from choice of the algorithm. We found good agreement among the IMFPs from the four algorithms for energies over 300 eV. For energies less than 100 eV, however, large differences became apparent. IMFPs from the relativistic TPP-2M equation for predicting IMFPs were in good agreement with IMFPs from the four algorithms for energies between 300 eV and 30 keV but there was poorer agreement for lower energies. We calculated values of the static structure factor as a function of momentum transfer from the FPA. The resulting values were in good agreement with results from first-principles calculations and with inelastic X-ray scattering spectroscopy experiments. We made comparisons of our IMFPs with earlier calculations from authors who had used different algorithms and different ELF data sets. IMFP differences could then be analyzed in terms of the algorithms and the data sets. Finally, we compared our IMFPs with measurements of IMFPs and of a related quantity, the effective attenuation length (EAL). There were large variations in the measured IMFPs and EALs (as well as their dependence on electron energy). Further measurements are therefore required to establish consistent data sets and for more detailed comparisons with calculated IMFPs. PMID:28751796

  4. Calculations of Electron Inelastic Mean Free Paths. XI. Data for Liquid Water for Energies from 50 eV to 30 keV.

    PubMed

    Shinotsuka, H; Da, B; Tanuma, S; Yoshikawa, H; Powell, C J; Penn, D R

    2017-04-01

    We calculated electron inelastic mean free paths (IMFPs) for liquid water from its optical energy-loss function (ELF) for electron energies from 50 eV to 30 keV. These calculations were made with the relativistic full Penn algorithm (FPA) that has been used for previous IMFP and electron stopping-power calculations for many elemental solids. We also calculated IMFPs of water with three additional algorithms: the relativistic single-pole approximation (SPA), the relativistic simplified SPA, and the relativistic extended Mermin method. These calculations were made using the same optical ELF in order to assess any differences of the IMFPs arising from choice of the algorithm. We found good agreement among the IMFPs from the four algorithms for energies over 300 eV. For energies less than 100 eV, however, large differences became apparent. IMFPs from the relativistic TPP-2M equation for predicting IMFPs were in good agreement with IMFPs from the four algorithms for energies between 300 eV and 30 keV but there was poorer agreement for lower energies. We calculated values of the static structure factor as a function of momentum transfer from the FPA. The resulting values were in good agreement with results from first-principles calculations and with inelastic X-ray scattering spectroscopy experiments. We made comparisons of our IMFPs with earlier calculations from authors who had used different algorithms and different ELF data sets. IMFP differences could then be analyzed in terms of the algorithms and the data sets. Finally, we compared our IMFPs with measurements of IMFPs and of a related quantity, the effective attenuation length (EAL). There were large variations in the measured IMFPs and EALs (as well as their dependence on electron energy). Further measurements are therefore required to establish consistent data sets and for more detailed comparisons with calculated IMFPs.

  5. The study on the control strategy of micro grid considering the economy of energy storage operation

    NASA Astrophysics Data System (ADS)

    Ma, Zhiwei; Liu, Yiqun; Wang, Xin; Li, Bei; Zeng, Ming

    2017-08-01

    To optimize the operation of a microgrid, guarantee the balance of electricity supply and demand, and promote the utilization of renewable energy, the control strategy of the microgrid energy storage system is studied. Firstly, a mixed integer linear programming model is established based on receding horizon control. Secondly, a modified cuckoo search algorithm is proposed to solve the model. Finally, a case study is carried out to examine the signal characteristics of the microgrid and batteries under the optimal control strategy, and the convergence of the modified cuckoo search algorithm is compared with other algorithms to verify the validity of the proposed model and method. The results show that different microgrid operating targets can affect the control strategy of the energy storage system, which in turn affects the signal characteristics of the microgrid. Meanwhile, the convergence speed, computing time and economy of the modified cuckoo search algorithm are improved compared with the traditional cuckoo search algorithm and the differential evolution algorithm.

  6. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems

    PubMed Central

    Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
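
    A generic particle swarm optimization sketch is given below, applied to a made-up two-variable sizing cost (PV array area and battery capacity); the actual economic model, reliability constraints and load-shifting logic of the paper are not reproduced.

      import numpy as np

      # Generic PSO on a placeholder sizing cost: capital cost plus a penalty for unmet load.
      def cost(x):
          pv_area, battery_kwh = x
          capital = 150 * pv_area + 200 * battery_kwh
          unmet_penalty = 20_000 * max(0.0, 1.0 - 0.02 * pv_area - 0.01 * battery_kwh)
          return capital + unmet_penalty

      rng = np.random.default_rng(42)
      n_particles, n_iter, dim = 30, 200, 2
      lo, hi = np.array([0.0, 0.0]), np.array([100.0, 200.0])

      x = rng.uniform(lo, hi, size=(n_particles, dim))
      v = np.zeros_like(x)
      pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
      gbest = pbest[np.argmin(pbest_val)]

      w, c1, c2 = 0.7, 1.5, 1.5
      for _ in range(n_iter):
          r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = np.clip(x + v, lo, hi)
          vals = np.array([cost(p) for p in x])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = x[improved], vals[improved]
          gbest = pbest[np.argmin(pbest_val)]

      print(gbest, cost(gbest))   # best sizing found for the toy cost model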

  7. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.

    PubMed

    Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers.

  8. A pilot study of a heuristic algorithm for novel template identification from VA electronic medical record text.

    PubMed

    Redd, Andrew M; Gundlapalli, Adi V; Divita, Guy; Carter, Marjorie E; Tran, Le-Thuy; Samore, Matthew H

    2017-07-01

    Templates in text notes pose challenges for automated information extraction algorithms. We propose a method that identifies novel templates in plain text medical notes. The identification can then be used to either include or exclude templates when processing notes for information extraction. The two-module method is based on the framework of information foraging and addresses the hypothesis that documents containing templates and the templates within those documents can be identified by common features. The first module takes documents from the corpus and groups those with common templates. This is accomplished through a binned word count hierarchical clustering algorithm. The second module extracts the templates. It uses the groupings and performs a longest common subsequence (LCS) algorithm to obtain the constituent parts of the templates. The method was developed and tested on a random document corpus of 750 notes derived from a large database of US Department of Veterans Affairs (VA) electronic medical notes. The grouping module, using hierarchical clustering, identified 23 groups with 3 documents or more, consisting of 120 documents from the 750 documents in our test corpus. Of these, 18 groups had at least one common template that was present in all documents in the group for a positive predictive value of 78%. The LCS extraction module performed with 100% positive predictive value, 94% sensitivity, and 83% negative predictive value. The human review determined that in 4 groups the template covered the entire document, with the remaining 14 groups containing a common section template. Among documents with templates, the number of templates per document ranged from 1 to 14. The mean and median number of templates per group was 5.9 and 5, respectively. The grouping method was successful in finding like documents containing templates. Of the groups of documents containing templates, the LCS module was successful in deciphering text belonging to the template and text that was extraneous. Major obstacles to improved performance included documents composed of multiple templates, templates that included other templates embedded within them, and variants of templates. We demonstrate proof of concept of the grouping and extraction method of identifying templates in electronic medical records in this pilot study and propose methods to improve performance and scaling up. Published by Elsevier Inc.
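
    The extraction step rests on the classic dynamic-programming longest common subsequence; a token-level sketch is given below, with a toy pair of note fragments. How the paper tokenizes documents and merges the LCS across a whole group is not specified here.

      # Longest common subsequence over token lists; the shared tokens form a candidate template.
      def lcs(a, b):
          m, n = len(a), len(b)
          dp = [[0] * (n + 1) for _ in range(m + 1)]
          for i in range(m):
              for j in range(n):
                  dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
          # backtrack to recover the common token sequence
          out, i, j = [], m, n
          while i and j:
              if a[i - 1] == b[j - 1]:
                  out.append(a[i - 1]); i -= 1; j -= 1
              elif dp[i - 1][j] >= dp[i][j - 1]:
                  i -= 1
              else:
                  j -= 1
          return list(reversed(out))

      doc1 = "chief complaint : back pain onset : last week severity : 7/10".split()
      doc2 = "chief complaint : headache pain onset : yesterday severity : 3/10".split()
      print(lcs(doc1, doc2))   # shared boilerplate tokens form the template skeleton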

  9. Processing LiDAR Data to Predict Natural Hazards

    NASA Technical Reports Server (NTRS)

    Fairweather, Ian; Crabtree, Robert; Hager, Stacey

    2008-01-01

    ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.

  10. Detection of protein complex from protein-protein interaction network using Markov clustering

    NASA Astrophysics Data System (ADS)

    Ochieng, P. J.; Kusuma, W. A.; Haryanto, T.

    2017-05-01

    Detection of complexes, or groups of functionally related proteins, is an important challenge when analysing biological networks. However, existing algorithms to identify protein complexes are insufficient when applied to dense networks of experimentally derived interaction data. Therefore, we introduce a graph clustering method based on the Markov clustering algorithm to identify protein complexes within highly interconnected protein-protein interaction networks. A protein-protein interaction network was first constructed to develop a geometrical network, which was then partitioned using Markov clustering to detect protein complexes. The interest of the proposed method is illustrated by its application to human proteins associated with type II diabetes mellitus. Flow simulation of the MCL algorithm was initially performed and topological properties of the resultant network were analysed for detection of protein complexes. The results indicated that the proposed method successfully detected a total of 34 complexes, with 11 complexes consisting of overlapping modules and 20 non-overlapping modules. The major complex consisted of 102 proteins and 521 interactions, with cluster modularity and density of 0.745 and 0.101, respectively. The comparison analysis revealed that MCL outperformed the AP, MCODE and SCPS algorithms, with a high clustering coefficient (0.751), network density and modularity index (0.630). This demonstrates that MCL was the most reliable and efficient graph clustering algorithm for detection of protein complexes from PPI networks.
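
    The core Markov clustering iteration, alternating expansion (matrix power) with inflation (element-wise power plus column normalization), is sketched below on a toy two-triangle graph; the diabetes PPI network itself is not reproduced.

      import numpy as np

      # Minimal MCL sketch on a toy adjacency matrix (two triangles joined by one bridge edge).
      A = np.array([[0, 1, 1, 0, 0, 0],
                    [1, 0, 1, 0, 0, 0],
                    [1, 1, 0, 1, 0, 0],
                    [0, 0, 1, 0, 1, 1],
                    [0, 0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 1, 0]], dtype=float)
      A += np.eye(len(A))                       # self-loops stabilize the iteration
      M = A / A.sum(axis=0)                     # column-stochastic transition matrix

      expansion, inflation = 2, 2.0
      for _ in range(50):
          M = np.linalg.matrix_power(M, expansion)     # expansion: simulate random-walk flow
          M = M ** inflation                           # inflation: strengthen intra-cluster flow
          M = M / M.sum(axis=0)

      # rows that retain mass act as attractors; their non-zero columns are the clusters
      clusters = {frozenset(np.nonzero(row > 1e-6)[0]) for row in M if row.sum() > 1e-6}
      print([sorted(c) for c in clusters])      # typically two modules: [0, 1, 2] and [3, 4, 5]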

  11. Low-complex energy-aware image communication in visual sensor networks

    NASA Astrophysics Data System (ADS)

    Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran

    2013-10-01

    A low-complex, low bit rate, energy-efficient image compression algorithm explicitly designed for resource-constrained visual sensor networks applied for surveillance, battle field, habitat monitoring, etc. is presented, where voluminous amount of image data has to be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. This algorithm is highly energy efficient and extremely fast since it applies energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using enhanced complementary Golomb Rice code without using any floating point operations. Experiments are performed using the Atmel Atmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of energy needed by conventional DCT. This algorithm consumes only 6% of energy needed by Independent JPEG Group (fast) version, and it suits for embedded systems requiring low power consumption. The proposed scheme is unique since it significantly enhances the lifetime of the camera sensor node and the network without any need for distributed processing as was traditionally required in existing algorithms.
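
    The zonal idea is sketched below: only the low-frequency DCT coefficients inside a small triangular zone are computed at all. The zone size and the floating-point arithmetic are illustrative; the paper's binary/integer transform and Golomb-Rice coder are not reproduced.

      import numpy as np

      # Zonal DCT sketch on an 8x8 block: compute only coefficients with u + v < ZONE.
      N, ZONE = 8, 4

      def alpha(u):
          return np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)

      def zonal_dct(block):
          coeffs = np.zeros((N, N))
          x = np.arange(N)
          for u in range(ZONE):
              for v in range(ZONE - u):                      # triangular low-frequency zone only
                  basis = np.outer(np.cos((2 * x + 1) * u * np.pi / (2 * N)),
                                   np.cos((2 * x + 1) * v * np.pi / (2 * N)))
                  coeffs[u, v] = alpha(u) * alpha(v) * np.sum(block * basis)
          return coeffs

      rng = np.random.default_rng(0)
      block = rng.integers(0, 256, size=(N, N)).astype(float) - 128.0
      print(zonal_dct(block)[:4, :4])   # only 10 coefficients are computed; the rest stay zero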

  12. An Algorithm for Timely Transmission of Solicitation Messages in RPL for Energy-Efficient Node Mobility.

    PubMed

    Park, Jihong; Kim, Ki-Hyung; Kim, Kangseok

    2017-04-19

    The IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) was proposed for various applications of IPv6 low power wireless networks. While RPL supports various routing metrics and is designed to be suitable for wireless sensor network environments, it does not consider the mobility of nodes. Therefore, there is a need for a method that is energy efficient and that provides stable and reliable data transmission by considering the mobility of nodes in RPL networks. This paper proposes an algorithm to support node mobility in RPL in an energy-efficient manner and describes its operating principle based on different scenarios. The proposed algorithm supports the mobility of nodes by dynamically adjusting the transmission interval of the messages that request the route based on the speed and direction of the motion of mobile nodes, as well as the costs between neighboring nodes. The performance of the proposed algorithm and previous algorithms for supporting node mobility were examined experimentally. From the experiment, it was observed that the proposed algorithm requires fewer messages per unit time for selecting a new parent node following the movement of a mobile node. Since fewer messages are used to select a parent node, the energy consumption is also less than that of previous algorithms.

  13. An Algorithm for Timely Transmission of Solicitation Messages in RPL for Energy-Efficient Node Mobility

    PubMed Central

    Park, Jihong; Kim, Ki-Hyung; Kim, Kangseok

    2017-01-01

    The IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) was proposed for various applications of IPv6 low power wireless networks. While RPL supports various routing metrics and is designed to be suitable for wireless sensor network environments, it does not consider the mobility of nodes. Therefore, there is a need for a method that is energy efficient and that provides stable and reliable data transmission by considering the mobility of nodes in RPL networks. This paper proposes an algorithm to support node mobility in RPL in an energy-efficient manner and describes its operating principle based on different scenarios. The proposed algorithm supports the mobility of nodes by dynamically adjusting the transmission interval of the messages that request the route based on the speed and direction of the motion of mobile nodes, as well as the costs between neighboring nodes. The performance of the proposed algorithm and previous algorithms for supporting node mobility were examined experimentally. From the experiment, it was observed that the proposed algorithm requires fewer messages per unit time for selecting a new parent node following the movement of a mobile node. Since fewer messages are used to select a parent node, the energy consumption is also less than that of previous algorithms. PMID:28422084

  14. Improvement of Frequency Locking Algorithm for Atomic Frequency Standards

    NASA Astrophysics Data System (ADS)

    Park, Young-Ho; Kang, Hoonsoo; Heyong Lee, Soo; Eon Park, Sang; Lee, Jong Koo; Lee, Ho Seong; Kwon, Taeg Yong

    2010-09-01

    The authors describe a novel frequency locking algorithm for atomic frequency standards. The new algorithm for locking the microwave frequency to the Ramsey resonance is compared with the old one that had been employed in cesium atomic beam frequency standards such as NIST-7 and KRISS-1. Numerical simulations for testing the performance of the algorithm show that the new method has a noise filtering performance superior to the old one by a factor of 1.2 for flicker signal noise and 1.4 for random-walk signal noise. The new algorithm can readily be used to enhance the frequency stability of a digital servo employing slow square wave frequency modulation.
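
    The servo principle behind square wave frequency modulation is sketched below: probe alternately on either side of the Ramsey fringe, form the error signal from the difference of the two measurements, and integrate. The fringe shape, gain and noise level are assumptions; this is not the NIST-7/KRISS-1 implementation or the paper's new algorithm.

      import numpy as np

      # Minimal square-wave frequency-modulation servo on a cosine approximation of the central fringe.
      rng = np.random.default_rng(3)
      f_atom, fwhm = 9_192_631_770.0, 100.0      # Hz; fringe width is illustrative

      def ramsey_signal(f):
          """Central Ramsey fringe approximated by a cosine, plus detection noise."""
          return 0.5 * (1 + np.cos(np.pi * (f - f_atom) / fwhm)) + 0.005 * rng.standard_normal()

      f_lo = f_atom + 7.0                         # start 7 Hz off resonance
      f_mod = fwhm / 2                            # probe at the half-maximum points
      gain = 20.0

      for _ in range(500):
          error = ramsey_signal(f_lo + f_mod) - ramsey_signal(f_lo - f_mod)
          f_lo += gain * error                    # integrating servo correction

      print(f_lo - f_atom)                        # small residual offset set by the assumed noise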

  15. Fire behavior simulation in Mediterranean forests using the minimum travel time algorithm

    Treesearch

    Kostas Kalabokidis; Palaiologos Palaiologou; Mark A. Finney

    2014-01-01

    Recent large wildfires in Greece exemplify the need for pre-fire burn probability assessment and possible landscape fire flow estimation to enhance fire planning and resource allocation. The Minimum Travel Time (MTT) algorithm, incorporated as a module of FlamMap version five, provides valuable fire behavior functions while enabling multi-core utilization for the...

  16. GRID: a high-resolution protein structure refinement algorithm.

    PubMed

    Chitsaz, Mohsen; Mayo, Stephen L

    2013-03-05

    The energy-based refinement of protein structures generated by fold prediction algorithms to atomic-level accuracy remains a major challenge in structural biology. Energy-based refinement is mainly dependent on two components: (1) sufficiently accurate force fields, and (2) efficient conformational space search algorithms. Focusing on the latter, we developed a high-resolution refinement algorithm called GRID. It takes a three-dimensional protein structure as input and, using an all-atom force field, attempts to improve the energy of the structure by systematically perturbing backbone dihedrals and side-chain rotamer conformations. We compare GRID to Backrub, a stochastic algorithm that has been shown to predict a significant fraction of the conformational changes that occur with point mutations. We applied GRID and Backrub to 10 high-resolution (≤ 2.8 Å) crystal structures from the Protein Data Bank and measured the energy improvements obtained and the computation times required to achieve them. GRID resulted in energy improvements that were significantly better than those attained by Backrub while expending about the same amount of computational resources. GRID resulted in relaxed structures that had slightly higher backbone RMSDs compared to Backrub relative to the starting crystal structures. The average RMSD was 0.25 ± 0.02 Å for GRID versus 0.14 ± 0.04 Å for Backrub. These relatively minor deviations indicate that both algorithms generate structures that retain their original topologies, as expected given the nature of the algorithms. Copyright © 2012 Wiley Periodicals, Inc.

  17. Artificial Neural Network applied to lightning flashes

    NASA Astrophysics Data System (ADS)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: brightness and shape algorithms. These algorithms detect both the shape and brightness of the event, removing irrelevant events like birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can have more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the number of discharges per event was correctly computed. The neural network used in this project achieved a success rate of 90%. The videos used in this experiment were acquired by seven video cameras installed in São Bernardo do Campo, Brazil, that continuously recorded lightning events during the summer. The cameras were arranged to cover a 360° field of view, recording all data at a time resolution of 33 ms. During this period, several convective storms were recorded.

  18. Systems and methods for reducing transient voltage spikes in matrix converters

    DOEpatents

    Kajouke, Lateef A.; Perisic, Milun; Ransom, Ray M.

    2013-06-11

    Systems and methods are provided for delivering energy using an energy conversion module that includes one or more switching elements. An exemplary electrical system comprises a DC interface, an AC interface, an isolation module, a first conversion module between the DC interface and the isolation module, and a second conversion module between the AC interface and the isolation module. A control module is configured to operate the first conversion module to provide an injection current to the second conversion module to reduce a magnitude of a current through a switching element of the second conversion module before opening the switching element.

  19. Interface Control Document for the EMPACT Module that Estimates Electric Power Transmission System Response to EMP-Caused Damage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werley, Kenneth Alan; Mccown, Andrew William

    The EPREP code is designed to evaluate the effects of an Electro-Magnetic Pulse (EMP) on the electric power transmission system. The EPREP code embodies an umbrella framework that allows a user to set up analysis conditions and to examine analysis results. The code links to three major physics/engineering modules. The first module describes the EM wave in space and time. The second module evaluates the damage caused by the wave on specific electric power (EP) transmission system components. The third module evaluates the consequence of the damaged network on its (reduced) ability to provide electric power to meet demand. This third module is the focus of the present paper. The EMPACT code serves as the third module. The EMPACT name denotes EMP effects on Alternating Current Transmission systems. The EMPACT algorithms compute electric power transmission network flow solutions under severely damaged network conditions. Initial solutions are often characterized by unacceptable network conditions including line overloads and bad voltages. The EMPACT code contains algorithms to optimally adjust network parameters to eliminate network problems while minimizing outages. System adjustments include automatically adjusting control equipment (generator V control, variable transformers, and variable shunts), as well as non-automatic control of generator power settings and minimal load shedding. The goal is to evaluate the minimal loss of customer load under equilibrium (steady-state) conditions during peak demand.

  20. Adaptively loaded IM/DD optical OFDM based on set-partitioned QAM formats.

    PubMed

    Zhao, Jian; Chen, Lian-Kuan

    2017-04-17

    We investigate the constellation design and symbol error rate (SER) of set-partitioned (SP) quadrature amplitude modulation (QAM) formats. Based on the SER analysis, we derive the adaptive bit and power loading algorithm for SP QAM based intensity-modulation direct-detection (IM/DD) orthogonal frequency division multiplexing (OFDM). We experimentally show that the proposed system significantly outperforms the conventional adaptively-loaded IM/DD OFDM and can increase the data rate from 36 Gbit/s to 42 Gbit/s in the presence of severe dispersion-induced spectral nulls after 40-km single-mode fiber. It is also shown that the adaptive algorithm greatly enhances the tolerance to fiber nonlinearity and allows for more power budget.
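
    A generic per-subcarrier bit-loading sketch based on the SNR-gap approximation is given below; the SNR profile and gap value are illustrative, and the paper's set-partitioned QAM constellation design and power loading rule are not reproduced.

      import numpy as np

      # Gap-approximation bit loading: each subcarrier carries floor(log2(1 + SNR/Gamma)) bits,
      # so subcarriers near dispersion-induced spectral nulls receive few or no bits.
      n_sc = 64
      f = np.arange(n_sc) / n_sc
      snr_db = 24 - 18 * np.abs(np.sin(2.5 * np.pi * f))     # synthetic profile with spectral dips
      snr = 10 ** (snr_db / 10)

      gamma_db = 8.8                                         # assumed SNR gap for the target error rate
      gamma = 10 ** (gamma_db / 10)

      bits = np.floor(np.log2(1 + snr / gamma)).astype(int)
      bits = np.clip(bits, 0, 6)                             # cap at 64-QAM

      print(bits)
      print("total bits per OFDM symbol:", bits.sum())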
