Sample records for optimizing stack-based code

  1. Thermo-Mechanical and Electrochemistry Modeling of Planar SOFC Stacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khaleel, Mohammad A.; Recknagle, Kurtis P.; Lin, Zijing

    2002-12-01

    Modeling activities at PNNL support design and development of modular SOFC systems. The SOFC stack modeling capability at PNNL has developed to a level at which planar stack designs can be compared and optimized for startup performance. Thermal-fluids and stress modeling is being performed to predict the transient temperature distribution and to determine the thermal stresses based on the temperature distribution. Current efforts also include the development of a model for calculating current density, cell voltage, and heat production in SOFC stacks with hydrogen or other fuels. The model includes the heat generation from both Joule heating and chemical reactions. It also accounts for species production and destruction via mass balance. The model is being linked to the finite element code MARC to allow for the evaluation of temperatures and stresses during steady state operations.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoup, R.W.; Long, F.; Martin, T.H.

    Sandia is developing PBFA-Z, a 20-MA driver for z-pinch experiments, by replacing the water lines, insulator stack, and MITLs on PBFA II with new hardware. The design of the vacuum insulator stack was dictated by the drive voltage, the electric field stress and grading requirements, the water line and MITL interface requirements, and the machine operations and maintenance requirements. The insulator stack will consist of four separate modules, each of a different design because of different voltage drive and hardware interface requirements. The shape of the components in each module, i.e., grading rings, insulator rings, flux excluders, and anode and cathode conductors, and the design of the water line and MITL interfaces were optimized by using the electrostatic analysis codes ELECTRO and JASON. The time-dependent performance of the insulator stack was evaluated using IVORY, a 2-D PIC code. This paper will describe the insulator stack design and present the results of the ELECTRO and IVORY analyses.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoup, R.W.; Long, F.; Martin, T.H.

    Sandia has developed PBFA-Z, a 20-MA driver for z-pinch experiments, by replacing the water lines, insulator stack, and MITLs on PBFA II with hardware of a new design. The PBFA-Z accelerator was designed to deliver 20 MA to a 15-mg z-pinch load in 100 ns. The accelerator was modeled using circuit codes to determine the time-dependent voltage and current waveforms at the input and output of the water lines, the insulator stack, and the MITLs. The design of the vacuum insulator stack was dictated by the drive voltage, the electric field stress and grading requirements, the water line and MITL interface requirements, and the machine operations and maintenance requirements. The insulator stack consists of four separate modules, each of a different design because of different voltage drive and hardware interface requirements. The shape of the components in each module, i.e., grading rings, insulator rings, flux excluders, and anode and cathode conductors, and the design of the water line and MITL interfaces were optimized by using the electrostatic analysis codes ELECTRO and JASON. The time-dependent performance of the insulator stacks was evaluated using IVORY, a 2-D PIC code. This paper will describe the insulator stack design, present the results of the ELECTRO and IVORY analyses, and show the results of the stack measurements.

  4. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the number of plates per stack. The nonlinear programming code, COMPUTE, was used to solve this model, in which a mixed penalty function combined with the Hooke and Jeeves pattern search was chosen to evaluate this specific optimization problem.
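
    A rough sketch of the optimization machinery described, assuming nothing about the actual FORTRAN model: a Hooke and Jeeves pattern search wrapped around a mixed (interior barrier plus exterior) penalty function, with a toy cost surrogate and a hypothetical constraint.

    ```python
    import numpy as np

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
        """Hooke-Jeeves pattern search: exploratory moves along each axis,
        then a pattern move through any improved point."""
        x_base = np.asarray(x0, dtype=float)
        f_base = f(x_base)
        for _ in range(max_iter):
            x_new, f_new = x_base.copy(), f_base
            for i in range(len(x_new)):          # exploratory search
                for delta in (step, -step):
                    trial = x_new.copy()
                    trial[i] += delta
                    if f(trial) < f_new:
                        x_new, f_new = trial, f(trial)
                        break
            if f_new < f_base:                   # pattern move
                x_pattern = x_new + (x_new - x_base)
                f_pattern = f(x_pattern)
                x_base, f_base = x_new, f_new
                if f_pattern < f_base:
                    x_base, f_base = x_pattern, f_pattern
            else:
                step *= shrink                   # no improvement: refine mesh
                if step < tol:
                    break
        return x_base, f_base

    def mixed_penalty(f, ineqs, eqs, r):
        """Interior barrier on g(x) >= 0 plus exterior penalty on h(x) = 0."""
        return lambda x: (f(x)
                          + r * sum(1.0 / max(g(x), 1e-12) for g in ineqs)
                          + (1.0 / r) * sum(h(x) ** 2 for h in eqs))

    # Toy plant-cost surrogate: x[0] plays the steam-to-methane ratio,
    # constrained to stay above 2.5 (a hypothetical limit).
    cost = lambda x: (x[0] - 3.2) ** 2 + (x[1] - 0.8) ** 2
    g1 = lambda x: x[0] - 2.5
    x_opt, f_opt = hooke_jeeves(mixed_penalty(cost, [g1], [], r=1e-3), [4.0, 0.5])
    print(x_opt, f_opt)
    ```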

  5. Optimization of hole generation in Ti/CFRP stacks

    NASA Astrophysics Data System (ADS)

    Ivanov, Y. N.; Pashkov, A. E.; Chashhin, N. S.

    2018-03-01

    The article aims to describe methods for improving the surface quality and hole accuracy in Ti/CFRP stacks by optimizing cutting methods and drill geometry. The research is based on the fundamentals of machine building, probability theory, mathematical statistics, and the theories of experiment planning and manufacturing process optimization. Statistical processing of the experiment data was carried out by means of Statistica 6 and Microsoft Excel 2010. Surface geometry in Ti stacks was analyzed using a Taylor Hobson Form Talysurf i200 Series profilometer, and in CFRP stacks using a Bruker ContourGT-K1 optical microscope. Hole shapes and sizes were analyzed using a Carl Zeiss CONTURA G2 measuring machine, and temperatures in cutting zones were recorded with a FLIR SC7000 Series infrared camera. Models of multivariate analysis of variance were developed; they show the effects of drilling modes on the surface quality and accuracy of holes in Ti/CFRP stacks. The task of multicriteria drilling process optimization was solved, and optimal cutting technologies which improve performance were developed. Methods for assessing the effects of thermal expansion of the tool and material on the accuracy of holes in Ti/CFRP/Ti stacks were also developed.

  6. Solving the Container Stowage Problem (CSP) using Particle Swarm Optimization (PSO)

    NASA Astrophysics Data System (ADS)

    Matsaini; Santosa, Budi

    2018-04-01

    The Container Stowage Problem (CSP) is the problem of arranging containers in ships subject to rules such as total weight, weight of each stack, destination, equilibrium, and placement of containers on the vessel. The problem is combinatorial, NP-hard, and impractical to solve by enumeration, so metaheuristics are preferred. The objective is to minimize the amount of shifting such that the unloading time is minimized. Particle Swarm Optimization (PSO) is proposed to solve the problem. The implementation of PSO is combined with several steps: stack position change rules, stack changes based on destination, and stack changes based on the weight class of the stacks (light, medium, and heavy). The proposed method was applied to five different cases, and the results were compared to Bee Swarm Optimization (BSO) and a heuristic method. Relative to the heuristic, PSO achieved a mean gap of 0.87% and a time gap of 60 seconds, while BSO achieved a mean gap of 2.98% and a time gap of 459.6 seconds.
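
    For reference, a minimal global-best PSO with a random-key decoding, so that continuous particles encode a container loading order; the shift-count objective is a toy stand-in for the paper's stowage evaluation rules.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=200,
            w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
        """Minimal global-best PSO over a continuous search space."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))   # positions
        v = np.zeros((n_particles, dim))              # velocities
        pbest = x.copy()
        pbest_f = np.apply_along_axis(objective, 1, x)
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            fx = np.apply_along_axis(objective, 1, x)
            improved = fx < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], fx[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # Random-key decoding: ranking the continuous position yields a loading
    # order; count_shifts is a hypothetical stand-in for the real
    # shifting/rehandling evaluation.
    def count_shifts(order):
        return float(np.sum(np.diff(order) < 0))      # toy proxy

    objective = lambda pos: count_shifts(np.argsort(pos))
    best, cost = pso(objective, dim=12)
    print(np.argsort(best), cost)
    ```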

  7. Towards a high performance geometry library for particle-detector simulations

    DOE PAGES

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...

    2015-05-22

    Thread-parallelization and single-instruction multiple-data (SIMD) "vectorisation" of software components in HEP computing has become a necessity to fully benefit from current and future computing hardware. In this context, the Geant-Vector/GPU simulation project aims to re-engineer current software for the simulation of the passage of particles through detectors in order to increase the overall event throughput. As one of the core modules in this area, the geometry library plays a central role, and vectorising its algorithms will be one of the cornerstones towards achieving good CPU performance. Here, we report on the progress made in vectorising the shape primitives, as well as in applying new C++ template-based optimizations of existing code available in the Geant4, ROOT or USolids geometry libraries. We will focus on a presentation of our software development approach that aims to provide optimized code for all use cases of the library (e.g., single-particle and many-particle APIs) and to support different architectures (CPU and GPU) while keeping the code base small, manageable and maintainable. We report on a generic and templated C++ geometry library as a continuation of the AIDA USolids project. As a result, the experience gained with these developments will be beneficial to other parts of the simulation software, such as the optimization of the physics library, and possibly to other parts of the experiment software stack, such as reconstruction and analysis.

  8. Reliability prediction of large fuel cell stack based on structure stress analysis

    NASA Astrophysics Data System (ADS)

    Liu, L. F.; Liu, B.; Wu, C. W.

    2017-09-01

    The aim of this paper is to improve the reliability of a proton exchange membrane fuel cell (PEMFC) stack by designing the clamping force and the thickness difference between the membrane electrode assembly (MEA) and the gasket. The stack reliability is directly determined by the component reliability, which is affected by the material property and contact stress. The component contact stress is a random variable because it is usually affected by many uncertain factors in the production and clamping process. We have investigated the influence of the parameter variation coefficient on the probability distribution of contact stress using the equivalent stiffness model and the first-order second-moment method. The optimal contact stress that keeps the component at the highest reliability level is obtained by the stress-strength interference model. To obtain the optimal contact stress between the contact components, the thickness of the component and the stack clamping force are optimally designed. Finally, a detailed description is given of how to design the MEA and gasket dimensions to obtain the highest stack reliability. This work can provide valuable guidance in the design of stack structure for a high-reliability fuel cell stack.
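
    The stress-strength interference step reduces to one line under a first-order second-moment treatment with independent normal stress and strength; the numbers below are hypothetical, not the paper's.

    ```python
    import numpy as np
    from scipy.stats import norm

    def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
        """FOSM stress-strength interference for independent normal variables:
        reliability = P(strength > stress) = Phi(beta), beta the safety index."""
        beta = (mu_strength - mu_stress) / np.hypot(sd_strength, sd_stress)
        return norm.cdf(beta)

    # Hypothetical MEA/gasket contact: the clamping-process variation
    # coefficient sets the spread of the contact stress.
    mu_stress = 1.2                               # MPa
    for cov in (0.05, 0.10, 0.20):                # coefficient of variation
        print(cov, interference_reliability(2.0, 0.15, mu_stress, cov * mu_stress))
    ```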

  9. GAME: GAlaxy Machine learning for Emission lines

    NASA Astrophysics Data System (ADS)

    Ucci, G.; Ferrara, A.; Pallottini, A.; Gallerani, S.

    2018-06-01

    We present an updated, optimized version of GAME (GAlaxy Machine learning for Emission lines), a code designed to infer key interstellar medium physical properties from emission line intensities of ultraviolet/optical/far-infrared galaxy spectra. The improvements concern (a) an enlarged spectral library including Pop III stars, (b) the inclusion of spectral noise in the training procedure, and (c) an accurate evaluation of uncertainties. We extensively validate the optimized code and compare its performance against empirical methods and other available emission line codes (PYQZ and HII-CHI-MISTRY) on a sample of 62 SDSS stacked galaxy spectra and 75 observed HII regions. Very good agreement is found for metallicity. However, ionization parameters derived by GAME tend to be higher. We show that this is due to the use of too limited libraries in the other codes. The main advantages of GAME are the simultaneous use of all the measured spectral lines and the extremely short computational times. We finally discuss the code potential and limitations.
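
    The supervised-learning core of such codes fits a regressor on a synthetic library and applies it to observed line intensities. The sketch below uses a random forest and random toy data; GAME's actual learners and photoionization grids differ.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Toy library: random "line intensities" and a made-up metallicity label;
    # a real application would use a photoionization-model grid.
    rng = np.random.default_rng(1)
    library = rng.uniform(0.0, 1.0, (5000, 12))           # 12 emission lines
    metallicity = library[:, :4].sum(axis=1) + 0.1 * rng.standard_normal(5000)

    # Inject noise into the training spectra, as the updated code does, so
    # the regressor sees realistically degraded inputs.
    noisy = library + 0.05 * rng.standard_normal(library.shape)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(noisy, metallicity)

    observed = rng.uniform(0.0, 1.0, (1, 12))             # one galaxy spectrum
    print(model.predict(observed))
    ```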

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez Díez, Ana Luisa; Gutmann, Johannes

    In this paper, we present a concentrator system based on a stack of fluorescent concentrators (FCs) and a bifacial solar cell. Coupling bifacial solar cells to a stack of FCs increases the performance of the system and preserves its efficiency when scaled. We used an approach to optimize a fluorescent solar concentrator system design based on a stack of multiple FCs. Seven individual fluorescent collectors (20 mm×20 mm×2 mm) were realized by in-situ polymerization and optically characterized with regard to their ability to guide light to the edges. Then, an optimization procedure based on the experimental data of the individual FCs was carried out to determine the stack configuration that maximizes the total number of photons leaving the edges. Finally, two fluorescent concentrator systems were realized by attaching bifacial silicon solar cells to the optimized FC stacks: a conventional system, where FCs were attached to one side of the solar cell as a reference, and the proposed bifacial configuration. It was found that for the same overall FC area, the bifacial configuration increases the short-circuit current by a factor of 2.2, which is also in agreement with theoretical considerations.

  11. Fast principal component analysis for stacking seismic data

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Bai, Min

    2018-04-01

    Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is not sensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
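
    The idea can be sketched in a few lines: the stacked trace is the leading principal component of the trace matrix, and a randomized range sketch keeps the SVD cheap for massive gathers. This is a generic fast-PCA stand-in, not necessarily the authors' algorithm.

    ```python
    import numpy as np

    def pca_stack(gather, oversample=8, seed=0):
        """Rank-1 PCA stack via a randomized range sketch: the leading
        right singular vector of the trace matrix is the stacked trace."""
        rng = np.random.default_rng(seed)
        sketch = gather @ rng.standard_normal((gather.shape[1], 1 + oversample))
        q, _ = np.linalg.qr(sketch)                  # orthonormal range basis
        _, s, vt = np.linalg.svd(q.T @ gather, full_matrices=False)
        return s[0] * vt[0] / np.sqrt(gather.shape[0])

    # Synthetic NMO-corrected gather: one wavelet shared by 40 traces plus
    # strong independent noise per trace.
    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 1.0, 500)
    wavelet = np.exp(-((t - 0.4) / 0.02) ** 2)
    gather = wavelet + 1.5 * rng.standard_normal((40, t.size))

    corr = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
    print(corr(pca_stack(gather), wavelet), corr(gather.mean(axis=0), wavelet))
    ```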

  12. Application of preconditioned alternating direction method of multipliers in depth from focal stack

    NASA Astrophysics Data System (ADS)

    Javidnia, Hossein; Corcoran, Peter

    2018-03-01

    Postcapture refocusing in smartphone cameras is achievable using focal stacks. However, the accuracy of this effect is totally dependent on the combination of the depth layers in the stack. The accuracy of the extended depth-of-field effect in this application can be improved significantly by computing an accurate depth map, which has been an open issue for decades. To tackle this issue, a framework is proposed based on a preconditioned alternating direction method of multipliers (ADMM) for depth from the focal stack and synthetic defocus application. In addition to its ability to provide high structural accuracy, the optimization function of the proposed framework converges faster and better than state-of-the-art methods. The qualitative evaluation was done on 21 sets of focal stacks, and the optimization function was compared against five other methods. Ten light-field image sets were then transformed into focal stacks for quantitative evaluation purposes. Preliminary results indicate that the proposed framework has better performance in terms of structural accuracy and optimization in comparison to the current state-of-the-art methods.
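
    The paper's preconditioned ADMM targets a depth-from-focal-stack energy; the split-and-alternate pattern itself is easiest to see on a toy lasso problem. This is the generic, unpreconditioned template, not the paper's objective.

    ```python
    import numpy as np

    def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
        """Plain ADMM for 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z."""
        n = A.shape[1]
        x, z, u = (np.zeros(n) for _ in range(3))
        L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once
        Atb = A.T @ b
        for _ in range(iters):
            # x-update: quadratic subproblem via the cached factorization.
            x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
            v = x + u
            z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # l1 prox
            u += x - z                                       # scaled dual update
        return z

    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 20))
    x_true = np.zeros(20)
    x_true[:3] = [2.0, -1.5, 1.0]
    b = A @ x_true + 0.01 * rng.standard_normal(60)
    print(np.round(admm_lasso(A, b), 2))        # recovers the sparse support
    ```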

  13. Computationally efficient stochastic optimization using multiple realizations

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Bürger, C. M.; Finkel, M.

    2008-02-01

    The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of water supply well field design, several variants of this "stack ordering" approach are tested. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that simple ordering of a given set of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire set of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
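
    A compact sketch of the stack-ordering rationale, with a toy feasibility model standing in for the groundwater simulations: evaluate realizations in their current order, stop at the first failure, and promote that critical realization to the front so the next candidate fails fast.

    ```python
    import numpy as np

    def reliable_objective(candidate, stack, feasible, objective):
        """Evaluate a candidate against an ordered stack of realizations,
        promoting the first failing realization to the front of the stack."""
        for i, realization in enumerate(stack):
            if not feasible(candidate, realization):
                stack.insert(0, stack.pop(i))     # promote critical realization
                return np.inf, i + 1              # rejected after i+1 runs
        return objective(candidate), len(stack)

    rng = np.random.default_rng(3)
    stack = list(rng.normal(10.0, 2.0, 500))      # 500 drawdown-limit realizations
    feasible = lambda q, limit: q <= limit        # toy well-field constraint
    objective = lambda q: -q                      # maximize pumping rate q

    runs = 0
    for q in rng.uniform(2.0, 14.0, 50):          # candidates from a search loop
        _, n_runs = reliable_objective(q, stack, feasible, objective)
        runs += n_runs
    print("model runs:", runs, "instead of", 50 * len(stack))
    ```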

  14. Fatigue of extracted lead zirconate titanate multilayer actuators under unipolar high field electric cycling

    NASA Astrophysics Data System (ADS)

    Wang, Hong; Lee, Sung-Min; Wang, James L.; Lin, Hua-Tay

    2014-12-01

    Testing of large prototype lead zirconate titanate (PZT) stacks presents substantial technical challenges to electronic testing systems, so an alternative approach that uses subunits extracted from prototypes has been pursued. Extracted 10-layer and 20-layer plate specimens were subjected to an electric cycle test under an electric field of 3.0/0.0 kV/mm, 100 Hz to 10^8 cycles. The effects of measurement field level and stack size (number of PZT layers) on the fatigue responses of piezoelectric and dielectric coefficients were observed. On-line monitoring permitted examination of the fatigue response of the PZT stacks. The fatigue rate (based on on-line monitoring) and the fatigue index (based on the conductance spectrum from impedance measurement or small signal measurement) were developed to quantify the fatigue status of the PZT stacks. The controlling fatigue mechanism was analyzed against the fatigue observations. The data presented can serve as input to design optimization of PZT stacks and to operation optimization in critical applications, such as piezoelectric fuel injectors in heavy-duty diesel engines.

  15. Seismic data enhancement and regularization using finite offset Common Diffraction Surface (CDS) stack

    NASA Astrophysics Data System (ADS)

    Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter

    2017-01-01

    The Common-Reflection-Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common-Diffraction-Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits while preserving a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.
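
    For context, a hedged sketch from the CRS literature (the paper itself works with the five-parameter finite-offset operator): the zero-offset CRS moveout below depends on the emergence angle and two wavefront radii, and the diffraction condition merges the radii, which is the same parameter-reduction idea that yields the four-parameter CDS form.

    ```latex
    % Zero-offset CRS moveout: midpoint displacement \Delta x_m, half-offset h,
    % emergence angle \alpha, near-surface velocity v_0, wavefront radii R_N, R_NIP.
    t^{2}(\Delta x_m, h) =
      \left( t_0 + \frac{2\sin\alpha}{v_0}\,\Delta x_m \right)^{2}
      + \frac{2\,t_0\cos^{2}\alpha}{v_0}
        \left( \frac{\Delta x_m^{2}}{R_{\mathrm{N}}} + \frac{h^{2}}{R_{\mathrm{NIP}}} \right),
    \qquad \text{CDS (diffraction) condition: } R_{\mathrm{N}} = R_{\mathrm{NIP}}.
    ```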

  16. An analytical study of composite laminate lay-up using search algorithms for maximization of flexural stiffness and minimization of springback angle

    NASA Astrophysics Data System (ADS)

    Singh, Ranjan Kumar; Rinawa, Moti Lal

    2018-04-01

    The residual stresses arising in fiber-reinforced laminates during curing in closed molds lead to dimensional changes in the composites after their removal from the molds and cooling; one of these dimensional changes of angle sections is called springback. Parameters such as lay-up, stacking sequence, material system, cure temperature, and thickness play an important role in it. In the present work, we attempt to optimize the lay-up and stacking sequence for maximization of flexural stiffness and minimization of springback angle. Search algorithms are employed to obtain the best sequence through repair strategies such as swaps. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed, which is an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups, and a computer code implementing the above schemes is developed in MATLAB. Strategies for multi-objective optimization using search algorithms are also suggested and tested.
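
    A minimal sketch of the swap-repair idea on classical lamination theory, written in Python rather than the paper's MATLAB, with assumed carbon/epoxy constants: pairwise swaps preserve the ply-angle counts while reordering them to raise the bending stiffness D11.

    ```python
    import numpy as np
    from itertools import combinations

    E1, E2, G12, nu12, t = 140e9, 10e9, 5e9, 0.3, 1.25e-4  # assumed carbon/epoxy
    nu21 = nu12 * E2 / E1
    Q11, Q22 = E1 / (1 - nu12 * nu21), E2 / (1 - nu12 * nu21)
    Q12, Q66 = nu12 * Q22, G12

    def qbar11(theta):
        """Transformed reduced stiffness Qbar11 of a ply rotated by theta deg."""
        c, s = np.cos(np.radians(theta)), np.sin(np.radians(theta))
        return Q11 * c**4 + 2 * (Q12 + 2 * Q66) * c**2 * s**2 + Q22 * s**4

    def d11(half_stack):
        """Bending stiffness D11 of a symmetric laminate (half-stack listed
        from the midplane outward); outer plies get cubic position weights."""
        z = t * np.arange(len(half_stack) + 1)
        return 2 * sum(qbar11(th) * (z[k + 1] ** 3 - z[k] ** 3) / 3
                       for k, th in enumerate(half_stack))

    def swap_search(half_stack):
        """Pairwise-swap repair search: swaps keep the ply-angle counts,
        so only the stacking order (and hence D11) changes."""
        seq, improved = list(half_stack), True
        while improved:
            improved = False
            for i, j in combinations(range(len(seq)), 2):
                trial = seq.copy()
                trial[i], trial[j] = trial[j], trial[i]
                if d11(trial) > d11(seq):
                    seq, improved = trial, True
        return seq

    layup = [0, 0, 45, -45, 90, 90, 45, -45]     # midplane -> surface
    best = swap_search(layup)
    print(best, d11(best) / d11(layup))          # 0-deg plies pushed outward
    ```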

  1. Two-level optimization of composite wing structures based on panel genetic optimization

    NASA Astrophysics Data System (ADS)

    Liu, Boyang

    The design of complex composite structures used in aerospace or automotive vehicles presents a major challenge in terms of computational cost. Discrete choices for ply thicknesses and ply angles lead to a combinatorial optimization problem that is too expensive to solve with presently available computational resources. We developed the following methodology for handling this problem for wing structural design: we used a two-level optimization approach with response-surface approximations to optimize panel failure loads for the upper-level wing optimization, and we tailored efficient permutation genetic algorithms to the panel stacking-sequence design on the lower level. We also developed an approach for improving continuity of ply stacking sequences among adjacent panels. The decomposition approach led to a lower-level optimization of stacking sequence with a given number of plies in each orientation. An efficient permutation genetic algorithm (GA) was developed for handling this problem. We demonstrated through examples that permutation GAs are more efficient for stacking-sequence optimization than a standard GA. Repair strategies for the standard GA and the permutation GAs for dealing with constraints were also developed; these repair strategies can significantly reduce computation costs for both. A two-level optimization procedure for composite wing design subject to strength and buckling constraints is presented. At the wing level, continuous optimization of ply thicknesses with orientations of 0°, 90°, and +/-45° is performed to minimize weight. At the panel level, the number of plies of each orientation (rounded to integers) and in-plane loads are specified, and a permutation genetic algorithm is used to optimize the stacking sequence. The process begins with many panel genetic optimizations for a range of loads and numbers of plies of each orientation. Next, a cubic polynomial response surface is fitted to the optimum buckling load, and the resulting response surface is used for wing-level optimization. In general, complex composite structures consist of several laminates. A common problem in the design of such structures is that some plies in adjacent laminates terminate at the boundary between the laminates. These discontinuities may cause stress concentrations and may increase manufacturing difficulty and cost. We developed measures of continuity of two adjacent laminates and studied tradeoffs between weight and continuity through a simple composite wing design. Finally, we compared the two-level optimization to a single-level optimization based on flexural lamination parameters. The single-level optimization is efficient and feasible for a wing consisting of unstiffened panels.

  2. Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules

    PubMed Central

    Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh

    2011-01-01

    This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization. PMID:22046232
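
    scikit-learn's stacking meta-estimator exposes exactly this "base outputs plus raw input" variant via passthrough=True; the sketch below uses synthetic features as a stand-in for the ECG beat representation.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Toy 3-class problem standing in for the ECG beat classes.
    X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                               n_classes=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    stack = StackingClassifier(
        estimators=[("mlp", MLPClassifier(max_iter=500, random_state=0)),
                    ("rf", RandomForestClassifier(random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000),
        passthrough=True,   # combiner sees base outputs AND the input pattern
    )
    print(stack.fit(X_tr, y_tr).score(X_te, y_te))
    ```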

  3. Stacking-sequence optimization for buckling of laminated plates by integer programming

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Walsh, Joanne L.

    1991-01-01

    Integer-programming formulations for the design of symmetric and balanced laminated plates under biaxial compression are presented. Both the maximization of buckling load for a given total thickness and the minimization of total thickness subject to a buckling constraint are formulated. The design variables that define the stacking sequence of the laminate are zero-one integers. It is shown that the formulation results in a linear optimization problem that can be solved with readily available software. This is in contrast to the continuous case, where the design variables are the thicknesses of layers with specified ply orientations and the optimization problem is nonlinear. Constraints on the stacking sequence, such as a limit on the number of contiguous plies of the same orientation and limits on in-plane stiffnesses, are easily accommodated. Examples are presented for graphite-epoxy plates under uniaxial and biaxial compression using a commercial software package based on the branch-and-bound algorithm.
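
    A hedged sketch of such a zero-one formulation using SciPy's MILP interface: x[k, o] selects orientation o for layer k, the linear objective uses cubic distance-from-midplane weights with assumed per-orientation bending efficiencies, and a contiguity constraint caps runs of identical plies. The weights, efficiencies, and ply-count limits are illustrative, not the paper's data.

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    n_layers, angles = 8, [0, 45, 90]             # half-laminate, midplane out
    eff = np.array([1.0, 0.7, 0.4])               # assumed bending efficiencies
    z = np.arange(n_layers + 1, dtype=float)
    w = z[1:] ** 3 - z[:-1] ** 3                  # cubic position weights
    c = -(w[:, None] * eff[None, :]).ravel()      # milp minimizes -> negate

    # Exactly one orientation per layer.
    constraints = [LinearConstraint(np.kron(np.eye(n_layers), np.ones(3)), 1, 1)]
    # No more than 4 contiguous plies of the same orientation.
    for o in range(3):
        for k in range(n_layers - 4):
            row = np.zeros(3 * n_layers)
            row[[3 * (k + j) + o for j in range(5)]] = 1
            constraints.append(LinearConstraint(row.reshape(1, -1), 0, 4))
    # At least two 90-degree plies for transverse stiffness (assumed limit).
    row90 = np.zeros(3 * n_layers)
    row90[2::3] = 1
    constraints.append(LinearConstraint(row90.reshape(1, -1), 2, n_layers))

    res = milp(c, constraints=constraints, integrality=np.ones_like(c),
               bounds=Bounds(0, 1))
    orient = res.x.reshape(n_layers, 3).argmax(axis=1)
    print([angles[o] for o in orient])            # stacking sequence, midplane out
    ```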

  4. On gate stack scalability of double-gate negative-capacitance FET with ferroelectric HfO2 for energy efficient sub-0.2 V operation

    NASA Astrophysics Data System (ADS)

    Jang, Kyungmin; Saraya, Takuya; Kobayashi, Masaharu; Hiramoto, Toshiro

    2018-02-01

    We have investigated the gate stack scalability and energy efficiency of the double-gate negative-capacitance FET (DGNCFET) with a CMOS-compatible ferroelectric HfO2 (FE:HfO2). Analytic model-based simulation is conducted to investigate the impacts of the ferroelectric characteristics of FE:HfO2 and the gate stack thickness on the I_on/I_off ratio of the DGNCFET. The DGNCFET has a wider design window for the gate stack in which a higher I_on/I_off ratio can be achieved than the DG classical MOSFET. Under a process-induced constraint with sub-10 nm gate length (L_g), the FE:HfO2-based DGNCFET still has a design point for a high I_on/I_off ratio. With an optimized gate stack thickness for sub-10 nm L_g, the FE:HfO2-based DGNCFET has 2.5× higher energy efficiency than the DG classical MOSFET even at an ultralow operation voltage of sub-0.2 V.

  5. SiliPET: An ultra-high resolution design of a small animal PET scanner based on stacks of double-sided silicon strip detector

    NASA Astrophysics Data System (ADS)

    Di Domenico, Giovanni; Zavattini, Guido; Cesca, Nicola; Auricchio, Natalia; Andritschke, Robert; Schopper, Florian; Kanbach, Gottfried

    2007-02-01

    We investigated with Monte Carlo simulations, using the EGSnrcMP code, the capabilities of a small animal PET scanner based on four stacks of double-sided silicon strip detectors. Each stack consists of 40 silicon detectors with dimensions of 60×60×1 mm^3 and 128 orthogonal strips on each side. Two coordinates of the interaction are given by the strips, whereas the third coordinate is given by the detector number in the stack. The stacks are arranged to form a box of 5×5×6 cm^3 with the minor sides open; the box represents the minimal FOV of the scanner. The performance parameters of the SiliPET scanner have been estimated, giving a (positron-range-limited) spatial resolution of 0.52 mm FWHM and an absolute sensitivity of 5.1% at the center of the system. Preliminary results of a proof-of-principle measurement done with the MEGA advanced Compton imager using a ≈1 mm diameter ^22Na source showed a focal ray tracing FWHM of 1 mm.

  6. User Driven Image Stacking for ODI Data and Beyond via a Highly Customizable Web Interface

    NASA Astrophysics Data System (ADS)

    Hayashi, S.; Gopu, A.; Young, M. D.; Kotulla, R.

    2015-09-01

    While some astronomical archives have begun serving standard calibrated data products, the process of producing stacked images remains a challenge left to the end-user. The benefits of astronomical image stacking are well established, and dither patterns are recommended for almost all observing targets. Some archives automatically produce stacks of limited scientific usefulness without any fine-grained user or operator configurability. In this paper, we present PPA Stack, a web based stacking framework within the ODI - Portal, Pipeline, and Archive system. PPA Stack offers a web user interface with built-in heuristics (based on pointing, filter, and other metadata information) to pre-sort images into a set of likely stacks while still allowing the user or operator complete control over the images and parameters for each of the stacks they wish to produce. The user interface, designed using AngularJS, provides multiple views of the input dataset and parameters, all of which are synchronized in real time. A backend consisting of a Python application optimized for ODI data, wrapped around the SWarp software, handles the execution of stacking workflow jobs on Indiana University's Big Red II supercomputer, and the subsequent ingestion of the combined images back into the PPA archive. PPA Stack is designed to enable seamless integration of other stacking applications in the future, so users can select the most appropriate option for their science.
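
    The core co-addition step such a service wraps (SWarp's full reprojection reduced here to an integer-pixel shift) can be sketched as a median combine, which is what rejects cosmic rays in dithered exposures.

    ```python
    import numpy as np

    def stack_exposures(exposures, offsets):
        """Undo each exposure's dither offset, then median-combine so that
        outliers (cosmic rays, hot pixels) are rejected per pixel."""
        aligned = [np.roll(img, (-dy, -dx), axis=(0, 1))
                   for img, (dy, dx) in zip(exposures, offsets)]
        return np.median(aligned, axis=0)

    rng = np.random.default_rng(5)
    truth = rng.poisson(100, (64, 64)).astype(float)
    offsets = [(0, 0), (1, 2), (3, 1), (2, 3)]            # dither pattern
    exposures = []
    for dy, dx in offsets:
        frame = np.roll(truth, (dy, dx), axis=(0, 1)) + rng.normal(0, 5, truth.shape)
        frame[rng.integers(64), rng.integers(64)] += 5000  # cosmic-ray hit
        exposures.append(frame)
    print(np.abs(stack_exposures(exposures, offsets) - truth).mean())
    ```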

  7. Memory and Energy Optimization Strategies for Multithreaded Operating System on the Resource-Constrained Wireless Sensor Node

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng

    2015-01-01

    Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using the stack-shifting hybrid scheduling approach. Unlike the traditional multithreaded OS, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism which can decrease both the thread scheduling overhead and the number of thread stacks is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only is the memory cost optimized, but so is the energy cost; this is achieved by using the multi-core “context aware” and multi-core “power-off/wakeup” energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% when compared to a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264

  8. Building Extraction Based on an Optimized Stacked Sparse Autoencoder of Structure and Training Samples Using LIDAR DSM and Optical Images.

    PubMed

    Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui

    2017-08-24

    In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and training samples. Building extraction plays an important role in urban construction and planning. However, some negative effects reduce the accuracy of extraction, such as resolution limits, poor correction, and terrain influence. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve the extraction. Using the digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve the extraction to a certain extent, but there are defects in their feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, an SSAE was employed to extract buildings from the combined DSM data and optical image. A better strategy for setting the SSAE network structure is given, and an approach for setting the number and proportion of training samples for better training of the SSAE is presented. The optical data and DSM were combined as input to the optimized SSAE, and after training with optimized samples, the appropriate network structure can extract buildings with great accuracy and good robustness.
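
    A minimal sketch of the SSAE recipe under assumed layer widths, with toy data in place of the fused LIDAR-DSM/optical features: greedy layer-wise pretraining of two sparse autoencoders, then fine-tuning with a classification head.

    ```python
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers, regularizers

    rng = np.random.default_rng(0)
    X = rng.random((1000, 64)).astype("float32")     # toy fused DSM+optical features
    y = (X[:, :8].sum(axis=1) > 4).astype("int32")   # toy "building" label

    def pretrain_sparse_layer(data, width):
        """Train one sparse autoencoder on `data`; return its encoder."""
        inp = keras.Input(shape=(data.shape[1],))
        code = layers.Dense(width, activation="relu",
                            activity_regularizer=regularizers.l1(1e-4))(inp)
        out = layers.Dense(data.shape[1], activation="sigmoid")(code)
        auto = keras.Model(inp, out)
        auto.compile(optimizer="adam", loss="mse")
        auto.fit(data, data, epochs=5, batch_size=64, verbose=0)
        return keras.Model(inp, code)

    enc1 = pretrain_sparse_layer(X, 32)
    enc2 = pretrain_sparse_layer(enc1.predict(X, verbose=0), 16)

    # Stack the pretrained encoders and fine-tune with a softmax classifier.
    inp = keras.Input(shape=(64,))
    out = layers.Dense(2, activation="softmax")(enc2(enc1(inp)))
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=64, verbose=0)
    print(model.evaluate(X, y, verbose=0))
    ```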

  9. An exact algorithm for optimal MAE stack filter design.

    PubMed

    Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior

    2007-02-01

    We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.
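
    For intuition, a stack filter in its definition form: threshold-decompose the signal, apply one positive Boolean function (PBF) per level, and sum the binary outputs; with an "at least k of n" PBF this reproduces rank-order filters such as the median. This illustrates the object being designed; the paper's contribution is the exact MAE-optimal choice of the PBF.

    ```python
    import numpy as np

    def stack_filter(signal, window=5, k=None, levels=256):
        """Apply a k-of-n stack filter via threshold decomposition."""
        k = (window + 1) // 2 if k is None else k      # default: median
        pad = window // 2
        padded = np.pad(signal, pad, mode="edge")
        out = np.zeros(len(signal), dtype=int)
        for t in range(1, levels):                     # one binary slice per level
            binary = (padded >= t).astype(int)
            windows = np.lib.stride_tricks.sliding_window_view(binary, window)
            out += (windows.sum(axis=1) >= k).astype(int)   # PBF on each window
        return out                                     # sum reconstructs the output

    x = np.array([10, 12, 200, 11, 13, 9, 14, 250, 12, 10])  # impulse noise
    print(stack_filter(x))        # impulses suppressed, edges preserved
    ```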

  10. A broadband permeability measurement of FeTaN lamination stack by the shorted microstrip line method

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Ma, Yungui; Xu, Feng; Wang, Peng; Ong, C. K.

    2009-01-01

    In this paper, the microwave characteristics of a FeTaN lamination stack are studied with a shorted microstrip line method. The FeTaN lamination stack was fabricated by gluing together 54 layers of FeTaN units with epoxy. The FeTaN units were deposited by rf magnetron sputtering on both sides of an 8-μm polyethylene terephthalate (Mylar) film serving as the substrate. On each side of the Mylar substrate, three 100-nm FeTaN layers are laminated with two 8-nm Al2O3 layers. The complex permeability of the FeTaN lamination stack is calculated from the scattering parameters using the shorted-load transmission line model based on the quasi-transverse-electromagnetic approximation. A full-wave analysis combined with an optimization process is employed to determine accurate effective permeability values. The optimized complex permeability data can be used for microwave filter design.

  11. Ffuzz: Towards full system high coverage fuzz testing on binary executables.

    PubMed

    Zhang, Bin; Ye, Jiaxi; Bi, Xing; Feng, Chao; Tang, Chaojing

    2018-01-01

    Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution and manual analysis, each have advantages and disadvantages when exercising the deeper code areas of binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug finding tool, Ffuzz, on top of fuzz testing and selective symbolic execution. It targets full system software stack testing, including both user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and avoid getting stuck in both fuzz testing and symbolic execution. We also proposed two key optimizations to improve the efficiency of full system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and on 844 memory corruption vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full system software stack effectively and efficiently.

  12. Optimization of thermoacoustic engine driven thermoacoustic refrigerator using response surface methodology

    NASA Astrophysics Data System (ADS)

    Desai, A. B.; Desai, K. P.; Naik, H. B.; Atrey, M. D.

    2017-02-01

    Thermoacoustic engines (TAEs) are devices which convert heat energy into useful acoustic work, whereas thermoacoustic refrigerators (TARs) convert acoustic work into a temperature gradient. These devices work without any moving component. The study presented here concerns a combination system, i.e., a thermoacoustic engine driven thermoacoustic refrigerator (TADTAR). Such a system has no moving component and hence is easy to fabricate, but at the same time it is very challenging to design and construct an optimized system with comparable performance. The work presented here applies an optimization technique to the TADTAR in the form of response surface methodology (RSM). The significance of the stack position and stack length of the engine stack, and of the stack position and stack length of the refrigerator stack, is investigated. Results from RSM are compared with results from simulations using the Design Environment for Low-amplitude Thermoacoustic Energy Conversion (DeltaEC) for compliance.
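
    A skeleton of the RSM step under assumed coded factors, with a toy function standing in for DeltaEC output: build a central composite design over the four stack factors, fit a full quadratic surface by least squares, and read off the optimum.

    ```python
    import numpy as np
    from itertools import product

    def quadratic_features(X):
        """Full quadratic response-surface basis: 1, x_i, x_i*x_j (i <= j)."""
        n, k = X.shape
        cols = [np.ones(n)] + [X[:, i] for i in range(k)]
        cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
        return np.column_stack(cols)

    # Face-centred central composite design in coded units for four factors
    # (engine stack position/length, refrigerator stack position/length).
    corners = np.array(list(product([-1.0, 1.0], repeat=4)))
    axial = np.vstack([-np.eye(4), np.eye(4)])
    design = np.vstack([corners, axial, np.zeros((1, 4))])

    def simulate(x):
        """Hypothetical stand-in for a DeltaEC run returning performance."""
        return (-(x[0] - 0.3) ** 2 - 0.5 * (x[1] + 0.2) ** 2
                - x[2] ** 2 - (x[3] - 0.1) ** 2)

    rng = np.random.default_rng(6)
    response = np.array([simulate(x) for x in design]) + rng.normal(0, 0.01, len(design))

    beta, *_ = np.linalg.lstsq(quadratic_features(design), response, rcond=None)
    grid = np.array(list(product(np.linspace(-1, 1, 9), repeat=4)))
    pred = quadratic_features(grid) @ beta
    print(grid[pred.argmax()], pred.max())       # fitted optimum in coded units
    ```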

  13. Heat transfer optimization for air-mist cooling between a stack of parallel plates

    NASA Astrophysics Data System (ADS)

    Issa, Roy J.

    2010-06-01

    A theoretical model is developed to predict the upper-limit heat transfer between a stack of parallel plates subject to multiphase cooling by air-mist flow. The model predicts the optimal separation distance between the plates based on the development of the boundary layers for small and large separation distances, and for dilute mist conditions. Simulation results show the optimal separation distance to be strongly dependent on the liquid-to-air mass flow rate loading ratio, reaching a limit at a critical loading; for these dilute spray conditions, complete evaporation of the droplets takes place. Simulation results also show that the optimal separation distance decreases with an increase in the mist flow rate. The proposed theoretical model shall lead to a better understanding of the design of fin spacing in heat exchangers where multiphase spray cooling is used.

  14. GINSU: Guaranteed Internet Stack Utilization

    DTIC Science & Technology

    2005-11-01

    Final technical report AFRL-IF-RS-TR-2005-383, November 2005: GINSU: Guaranteed Internet Stack Utilization. Trusted... Information Systems, Inc.; sponsored by the Defense Advanced Research Projects Agency (DARPA), Order No. ARPS. Approved for public release. Keywords: computer architecture, data links, Internet, protocol stacks.

  15. Innovative model-based flow rate optimization for vanadium redox flow batteries

    NASA Astrophysics Data System (ADS)

    König, S.; Suriyah, M. R.; Leibfried, T.

    2016-11-01

    In this paper, an innovative approach is presented to optimize the flow rate of a 6-kW vanadium redox flow battery with realistic stack dimensions. Efficiency is derived using a multi-physics battery model and a newly proposed instantaneous efficiency determination technique. An optimization algorithm is applied to identify optimal flow rates for operation points defined by state-of-charge (SoC) and current. The proposed method is evaluated against the conventional approach of applying Faraday's first law of electrolysis, scaled by the so-called flow factor. To make a fair comparison, the flow factor is also optimized by simulating cycles with different charging/discharging currents. The obtained results show that the efficiency is increased by up to 1.2 percentage points; in addition, the discharge capacity is increased by up to 1.0 kWh, or 5.4%. A detailed loss analysis is carried out for the cycles with maximum and minimum charging/discharging currents. It is shown that the proposed method minimizes the sum of the losses caused by concentration over-potential, pumping, and diffusion. Furthermore, for the deployed Nafion 115 membrane, it is observed that diffusion losses increase with stack SoC. Therefore, to decrease stack SoC and lower diffusion losses, a higher flow rate during charging than during discharging is reasonable.

  16. Optimization of throughput in semipreparative chiral liquid chromatography using stacked injection.

    PubMed

    Taheri, Mohammadreza; Fotovati, Mohsen; Hosseini, Seyed-Kiumars; Ghassempour, Alireza

    2017-10-01

    An interesting mode of chromatography for preparation of pure enantiomers from pure samples is the method of stacked injection as a pseudocontinuous procedure. Maximum throughput and minimal production costs can be achieved by the use of the total chiral column length in this mode of chromatography. To maximize sample loading, touching bands of the two enantiomers are often automatically achieved. Conventional equations show a direct correlation between touching-band loadability and the selectivity factor of the two enantiomers. The important question for one who wants to obtain the highest throughput is "How to optimize different factors including selectivity, resolution, run time, and loading of the sample in order to save time without missing the touching-band resolution?" To answer this question, tramadol and propranolol were separated on cellulose 3,5-dimethylphenyl carbamate, as two pure racemic mixtures with low and high solubilities in the mobile phase, respectively. The mobile phase composition consisted of n-hexane solvent with an alcohol modifier and diethylamine as the additive. A response surface methodology based on central composite design was used to optimize separation factors against the main responses. According to the stacked injection properties, two processes were investigated for maximizing throughput: one with a poorly soluble and another with a highly soluble racemic mixture. For each case, different optimization possibilities were inspected. It was revealed that resolution is a crucial response for separations of this kind. Peak area and run time are two critical parameters in the optimization of stacked injection for binary mixtures which have low solubility in the mobile phase.

  17. Programmable molecular recognition based on the geometry of DNA nanostructures.

    PubMed

    Woo, Sungwook; Rothemund, Paul W K

    2011-07-10

    From ligand-receptor binding to DNA hybridization, molecular recognition plays a central role in biology. Over the past several decades, chemists have successfully reproduced the exquisite specificity of biomolecular interactions. However, engineering multiple specific interactions in synthetic systems remains difficult. DNA retains its position as the best medium with which to create orthogonal, isoenergetic interactions, based on the complementarity of Watson-Crick binding. Here we show that DNA can be used to create diverse bonds using an entirely different principle: the geometric arrangement of blunt-end stacking interactions. We show that both binary codes and shape complementarity can serve as a basis for such stacking bonds, and explore their specificity, thermodynamics and binding rules. Orthogonal stacking bonds were used to connect five distinct DNA origami. This work, which demonstrates how a single attractive interaction can be developed to create diverse bonds, may guide strategies for molecular recognition in systems beyond DNA nanostructures.

  18. Field Performance of an Optimized Stack of YBCO Square “Annuli” for a Compact NMR Magnet

    PubMed Central

    Hahn, Seungyong; Voccio, John; Bermond, Stéphane; Park, Dong-Keun; Bascuñán, Juan; Kim, Seok-Beom; Masaru, Tomita; Iwasa, Yukikazu

    2011-01-01

    The spatial field homogeneity and time stability of the trapped field generated by a stack of YBCO square plates with a center hole (square "annuli") were investigated. By optimizing the stacking of magnetized square annuli, we aim to construct a compact NMR magnet. The stacked magnet consists of 750 thin YBCO plates, each 40-mm square and 80-μm thick with a 25-mm bore, and has a Ø10 mm room-temperature access for NMR measurement. To improve the spatial field homogeneity of the 750-plate stack (YP750), a three-step optimization was performed: 1) statistical selection of the best plates from the supply plates; 2) field homogeneity measurement of multi-plate modules; and 3) optimal assembly of the modules to maximize field homogeneity. In this paper, we present analytical and experimental results of field homogeneity and temporal stability at 77 K, performed on YP750 and on a hybrid stack, YPB750, in which two YBCO bulk annuli, each Ø46 mm and 16-mm thick with a 25-mm bore, are added to YP750, one at the top and the other at the bottom. PMID:22081753

  19. An optimization program based on the method of feasible directions: Theory and users guide

    NASA Technical Reports Server (NTRS)

    Belegundu, Ashok D.; Berke, Laszlo; Patnaik, Surya N.

    1994-01-01

    The theory and user instructions for an optimization code based on the method of feasible directions are presented. The code was written for wide distribution and ease of attachment to other simulation software. Although the theory of the method of feasible directions was developed in the 1960s, many considerations are involved in its actual implementation as a computer code. Included in the code are a number of features to improve robustness in optimization. The search direction is obtained by solving a quadratic program using an interior method based on Karmarkar's algorithm. The theory is discussed with a focus on the important and often overlooked role played by the various parameters guiding the iterations within the program. Also discussed is a robust approach for handling infeasible starting points. The code was validated by solving a variety of structural optimization test problems that have known solutions obtained by other optimization codes. It has been observed that this code is robust: it has solved a variety of problems from different starting points. However, the code is inefficient in that it takes considerable CPU time compared with certain other available codes. Further work is required to improve its efficiency while retaining its robustness.
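
    The heart of such a code is the direction-finding subproblem. Below is a sketch of the classical Zoutendijk linear program under assumed gradients; note the report's code solves a quadratic program via an interior method instead.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def feasible_direction(grad_f, active_grads):
        """Zoutendijk direction-finding LP: minimize beta subject to
        grad_f . d <= beta and grad_g_i . d >= -beta for each active
        inequality g_i(x) >= 0, with |d_j| <= 1 normalizing the direction.
        beta < 0 means d is a usable feasible descent direction."""
        n = len(grad_f)
        c = np.zeros(n + 1)
        c[-1] = 1.0                                   # minimize beta
        rows = [np.append(grad_f, -1.0)]              # grad_f.d - beta <= 0
        rows += [np.append(-np.asarray(g), -1.0) for g in active_grads]
        res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                      bounds=[(-1, 1)] * n + [(None, None)])
        return res.x[:n], res.x[n]

    # Example: minimize f = x1 + x2 at a point where g = x1 >= 0 is active.
    d, beta = feasible_direction(np.array([1.0, 1.0]), [np.array([1.0, 0.0])])
    print(d, beta)   # beta < 0: move along d without leaving the feasible set
    ```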

  20. Design of wide-angle solar-selective absorbers using aperiodic metal-dielectric stacks.

    PubMed

    Sergeant, Nicholas P; Pincon, Olivier; Agrawal, Mukul; Peumans, Peter

    2009-12-07

    Spectral control of the emissivity of surfaces is essential in applications such as solar thermal and thermophotovoltaic energy conversion in order to achieve the highest conversion efficiencies possible. We investigated the spectral performance of planar aperiodic metal-dielectric multilayer coatings for these applications. The response of the coatings was optimized for a target operational temperature using needle-optimization based on a transfer matrix approach. Excellent spectral selectivity was achieved over a wide angular range. These aperiodic metal-dielectric stacks have the potential to significantly increase the efficiency of thermophotovoltaic and solar thermal conversion systems. Optimal coatings for concentrated solar thermal conversion were modeled to have a thermal emissivity <7% at 720 K while absorbing >94% of the incident light. In addition, optimized coatings for solar thermophotovoltaic applications were modeled to have thermal emissivity <16% at 1750 K while absorbing >85% of the concentrated solar radiation.
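
    A normal-incidence transfer-matrix sketch for evaluating one candidate stack (illustrative indices and thicknesses, not the optimized coatings); a needle-optimization loop would repeatedly call such an evaluation while inserting thin layers.

    ```python
    import numpy as np

    def reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_out=1.0):
        """Normal-incidence reflectance of a planar multilayer via the
        characteristic-matrix (transfer matrix) method."""
        k0 = 2 * np.pi / wavelength
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = k0 * n * d
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        num = n_in * (M[0, 0] + M[0, 1] * n_out) - (M[1, 0] + M[1, 1] * n_out)
        den = n_in * (M[0, 0] + M[0, 1] * n_out) + (M[1, 0] + M[1, 1] * n_out)
        return abs(num / den) ** 2

    # Toy dielectric/dielectric/absorbing-layer stack; for an opaque coating
    # on metal, 1 - R would give the spectral absorptance/emissivity.
    stack_n = [2.3, 1.45, 2.3 + 0.5j]
    stack_d = [60e-9, 95e-9, 20e-9]
    for lam in (0.5e-6, 1.0e-6, 2.0e-6):
        print(lam, reflectance(stack_n, stack_d, lam))
    ```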

  1. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

    Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired, and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered-subset expectation-maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high-sensitivity and high-resolution SPECT imaging system. PMID:19544769

  2. The X3LYP extended density functional accurately describes H-bonding but fails completely for stacking.

    PubMed

    Cerný, Jirí; Hobza, Pavel

    2005-04-21

    The performance of the recently introduced X3LYP density functional, which was claimed to significantly improve the accuracy for H-bonded and van der Waals complexes, was tested for extended H-bonded and stacked complexes (nucleic acid base pairs and amino acid pairs). In the case of planar H-bonded complexes (guanine...cytosine, adenine...thymine) the DFT results agree nicely with accurate correlated ab initio results. For the stacked pairs (uracil dimer, cytosine dimer, adenine...thymine and guanine...cytosine) the DFT fails completely: it was not even able to localize any minimum in the stacked subspace of the potential energy surface. The geometry optimization of all these stacked clusters leads systematically to the planar H-bonded pairs. The amino acid pairs were investigated in the crystal geometry. DFT again strongly underestimates the accurate correlated ab initio stabilization energies and was usually unable to describe the stabilization of a pair. The X3LYP functional thus behaves similarly to other current functionals. Stacking of nucleic acid bases, as well as the interaction of amino acids, was described satisfactorily by the tight-binding DFT method, which explicitly covers the London dispersion energy.

  3. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). Then, we designed an imaging-point parallel strategy to achieve optimal parallel computing performance. Afterward, we adopted an asynchronous double-buffering scheme for multiple streams to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.

  4. Monolithic stacked blue light-emitting diodes with polarization-enhanced tunnel junctions.

    PubMed

    Kuo, Yen-Kuang; Shih, Ya-Hsuan; Chang, Jih-Yuan; Lai, Wei-Chih; Liu, Heng; Chen, Fang-Ming; Lee, Ming-Lun; Sheu, Jinn-Kong

    2017-08-07

    Monolithic stacked InGaN light-emitting diodes (LEDs) connected by polarization-enhanced GaN/AlN-based tunnel junctions are demonstrated experimentally in this study. The typical stacked LEDs exhibit an 80% enhancement in output power compared with conventional single LEDs because of the repeated use of electrons and holes for photon generation. The typical operating voltage of the stacked LEDs is higher than twice the operating voltage of single LEDs. This high operating voltage can be attributed to the non-optimal tunnel junction in the stacked LEDs. In addition to the analyses of experimental results, theoretical analyses of different tunnel-junction schemes, including energy-band diagrams, electric-field diagrams, and current-voltage curves, are carried out using numerical simulation. The results shown in this paper demonstrate the feasibility of developing cost-effective and highly efficient tunnel-junction LEDs.

  5. Optimization Design of Bipolar Plate Flow Field in PEM Stack

    NASA Astrophysics Data System (ADS)

    Wen, Ming; He, Kanghao; Li, Peilong; Yang, Lei; Deng, Li; Jiang, Fei; Yao, Yong

    2017-12-01

    A new design of the bipolar plate flow field in a proton exchange membrane (PEM) stack was presented to achieve high-performance transfer of the two-phase flow. Two different flow fields were studied using numerical simulations and the performance of the flow fields was presented. The hydrodynamic properties, including the pressure drop between inlet and outlet and the Reynolds number, of the two types were compared based on the Navier-Stokes equations. Computer-aided optimization software was implemented in the design of experiments for the preferable flow field. The design of experiments (DOE) for the favorable concept was carried out to study the hydrodynamic properties when changing the design parameters of the bipolar plate.

  6. Classifier Subset Selection for the Stacked Generalization Method Applied to Emotion Recognition in Speech

    PubMed Central

    Álvarez, Aitor; Sierra, Basilio; Arruti, Andoni; López-Gil, Juan-Miguel; Garay-Vitoria, Nestor

    2015-01-01

    In this paper, a new supervised classification paradigm, called classifier subset selection for stacked generalization (CSS stacking), is presented to deal with speech emotion recognition. The new approach consists of an improvement of a bi-level multi-classifier system known as stacking generalization by means of an integration of an estimation of distribution algorithm (EDA) in the first layer to select the optimal subset from the standard base classifiers. The good performance of the proposed new paradigm was demonstrated over different configurations and datasets. First, several CSS stacking classifiers were constructed on the RekEmozio dataset, using some specific standard base classifiers and a total of 123 spectral, quality and prosodic features computed using in-house feature extraction algorithms. These initial CSS stacking classifiers were compared to other multi-classifier systems and the employed standard classifiers built on the same set of speech features. Then, new CSS stacking classifiers were built on RekEmozio using a different set of both acoustic parameters (extended version of the Geneva Minimalistic Acoustic Parameter Set (eGeMAPS)) and standard classifiers and employing the best meta-classifier of the initial experiments. The performance of these two CSS stacking classifiers was evaluated and compared. Finally, the new paradigm was tested on the well-known Berlin Emotional Speech database. We compared the performance of single, standard stacking and CSS stacking systems using the same parametrization of the second phase. All of the classifications were performed at the categorical level, including the six primary emotions plus the neutral one. PMID:26712757
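
    The subset-selection idea can be mimicked with off-the-shelf tools. In this sketch (Python/scikit-learn; exhaustive search on the iris toy dataset stands in for the paper's EDA and speech features, and StackingClassifier stands in for their bi-level system) every subset of base classifiers is scored inside a stacking ensemble and the best-scoring subset is kept:

        from itertools import combinations
        from sklearn.datasets import load_iris
        from sklearn.ensemble import StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_iris(return_X_y=True)
        base = [('knn', KNeighborsClassifier()),
                ('nb', GaussianNB()),
                ('tree', DecisionTreeClassifier(random_state=0))]

        best_score, best_subset = -1.0, None
        for k in range(2, len(base) + 1):   # exhaustive search stands in for the EDA
            for subset in combinations(base, k):
                clf = StackingClassifier(
                    estimators=list(subset),
                    final_estimator=LogisticRegression(max_iter=1000))
                score = cross_val_score(clf, X, y, cv=5).mean()
                if score > best_score:
                    best_score = score
                    best_subset = [name for name, _ in subset]

        print(best_subset, round(best_score, 3))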

  7. Design and fabrication of silver-hydrogen cells

    NASA Technical Reports Server (NTRS)

    Klein, M. G.

    1975-01-01

    The design and fabrication of silver-hydrogen secondary cells capable of delivering higher energy densities than comparable nickel-cadmium and nickel-hydrogen cells, together with relatively high cycle life, is presented. An experimental task utilizing single electrode pairs for the optimization of the individual electrode components, the preparation of a design for lightweight 20-Ahr cells, and the fabrication of four 20-Ahr cells in heavy-wall test housings containing electrode stacks of the lightweight design are described. The design approach is based on the use of a single cylindrical self-contained cell with a stacked-disc sequence of electrodes. The electrode stack design is based on the use of NASA-Astropower separator material, PPF fuel cell anodes, an intercell electrolyte reservoir concept and sintered silver electrodes. Results of performance tests are given.

  8. Joint-layer encoder optimization for HEVC scalable extensions

    NASA Astrophysics Data System (ADS)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

    Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because rate-distortion optimization (RDO) processes in the base and enhancement layers are independently considered. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed by adjusting the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to make more proper resource allocation, the proposed method also considers the viewing probability of base and enhancement layers according to packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of those remaining CTUs are increased to keep total bits unchanged. Finally the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
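
    In essence, the proposed cost couples each layer's distortion to how likely that layer is to be viewed. A paraphrase of such a cost (Python; our own simplified form, not the authors' exact formula) is:

        def joint_layer_rd_cost(d_base, r_base, d_enh, r_enh,
                                p_base, p_enh, lam):
            # Expected distortion across layers, weighted by the probability
            # that each layer is actually viewed (driven by packet loss rate),
            # plus a Lagrangian rate term; CTU-level QPs are then chosen to
            # minimize this while keeping the total bit budget unchanged.
            return p_base * d_base + p_enh * d_enh + lam * (r_base + r_enh)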

  9. Ffuzz: Towards full system high coverage fuzz testing on binary executables

    PubMed Central

    2018-01-01

    Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution and manual analysis, each have advantages and disadvantages when exercising the deeper code areas of binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug finding tool—Ffuzz—on top of fuzz testing and selective symbolic execution. It targets full system software stack testing, including both user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and to avoid getting stuck in either fuzz testing or symbolic execution. We also proposed two key optimizations to improve the efficiency of full system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and on 844 memory corruption vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full system software stack effectively and efficiently. PMID:29791469

  10. A complete methodology towards accuracy and lot-to-lot robustness in on-product overlay metrology using flexible wavelength selection

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Kaustuve; den Boef, Arie; Noot, Marc; Adam, Omer; Grzela, Grzegorz; Fuchs, Andreas; Jak, Martin; Liao, Sax; Chang, Ken; Couraudon, Vincent; Su, Eason; Tzeng, Wilson; Wang, Cathy; Fouquet, Christophe; Huang, Guo-Tsai; Chen, Kai-Hsiung; Wang, Y. C.; Cheng, Kevin; Ke, Chih-Ming; Terng, L. G.

    2017-03-01

    The optical coupling between gratings in diffraction-based overlay triggers a swing-curve-like response of the target's signal contrast and overlay sensitivity across measurement wavelengths and polarizations. This means there are distinct measurement recipes (wavelength and polarization combinations) for a given target at which signal contrast and overlay sensitivity sit on the optimal parts of the swing-curve and can provide accurate and robust measurements. Some of these optimal recipes can be ideal choices of settings for production. The user has to stay away from the non-optimal recipe choices (located on the undesirable parts of the swing-curve) to avoid overlay measurement errors that can sometimes be, depending on the amount of asymmetry and the stack, on the order of several nm. To accurately identify these optimal operating areas of the swing-curve during an experimental setup, one needs full flexibility in wavelength and polarization choices. In this technical publication, a diffraction-based overlay (DBO) measurement tool with many choices of wavelengths and polarizations is utilized on advanced production stacks to study swing-curves. Results show that, depending on the stack and the presence of asymmetry, the swing behavior can vary significantly, and a solid procedure is needed to identify a recipe during setup that is robust against variations in stack and grating asymmetry. An approach is discussed for using this knowledge of the swing-curve to identify a recipe that is not only accurate at setup but also robust over the wafer and wafer-to-wafer. KPIs are reported at run-time to ensure the quality and accuracy of the reading (essentially acting as an error bar for the overlay measurement).

  11. Developing a Hypercard-UNIX Interface for Electronic Mail Transfer

    DTIC Science & Technology

    1992-06-01

    [Scanned excerpt, largely unrecoverable: an acknowledgment to Gregg for comments on the MacTCP version, followed by fragments of HyperTalk stack-script handlers (on openStack ... end openStack, on closeStack ...) that record the time, hide the menu bar, and initialize global login and message fields.]

  12. Optimized scalar promotion with load and splat SIMD instructions

    DOEpatents

    Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A

    2013-10-29

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.

  13. Optimized scalar promotion with load and splat SIMD instructions

    DOEpatents

    Eichenberger, Alexandre E [Chappaqua, NY; Gschwind, Michael K [Chappaqua, NY; Gunnels, John A [Yorktown Heights, NY

    2012-08-28

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.

  14. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance of the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that improved TD model accuracy can be achieved. A methodology for incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  15. Breathing dynamics based parameter sensitivity analysis of hetero-polymeric DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talukder, Srijeeta; Sen, Shrabani; Chaudhury, Pinaki, E-mail: pinakc@rediffmail.com

    We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence-dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction ε_hb(AT) for an AT base pair and the ring factor ξ turn out to be the most sensitive parameters. In addition, the stacking interaction ε_st(TA-TA) for a TA-TA nearest-neighbor pair of base pairs is found to be the most sensitive one among all stacking interactions. Moreover, we also establish that the nature of the stacking interaction has a deciding effect on the DNA breathing dynamics, not the number of times a particular stacking interaction appears in a sequence. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the case where the rate constants are measured using the conventional unbiased way of optimization.

  16. Parcels v0.9: prototyping a Lagrangian ocean analysis framework for the petascale age

    NASA Astrophysics Data System (ADS)

    Lange, Michael; van Sebille, Erik

    2017-11-01

    As ocean general circulation models (OGCMs) move into the petascale age, where the output of single simulations exceeds petabytes of storage space, tools to analyse the output of these models will need to scale up too. Lagrangian ocean analysis, where virtual particles are tracked through hydrodynamic fields, is an increasingly popular way to analyse OGCM output, by mapping pathways and connectivity of biotic and abiotic particulates. However, the current software stack of Lagrangian ocean analysis codes is not dynamic enough to cope with the increasing complexity, scale and need for customization of use-cases. Furthermore, most community codes are developed for stand-alone use, making it a nontrivial task to integrate virtual particles at runtime of the OGCM. Here, we introduce the new Parcels code, which was designed from the ground up to be sufficiently scalable to cope with petascale computing. We highlight its API design that combines flexibility and customization with the ability to optimize for HPC workflows, following the paradigm of domain-specific languages. Parcels is primarily written in Python, utilizing the wide range of tools available in the scientific Python ecosystem, while generating low-level C code and using just-in-time compilation for performance-critical computation. We show a worked-out example of its API, and validate the accuracy of the code against seven idealized test cases. This version 0.9 of Parcels is focused on laying out the API, with future work concentrating on support for curvilinear grids, optimization, efficiency and at-runtime coupling with OGCMs.
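
    At its core, Lagrangian analysis of the kind Parcels performs is particle advection through a velocity field. A minimal stand-in (plain Python; the rotating velocity field is our own toy, and real use would read OGCM fields through the Parcels API) is:

        def velocity(x, y):
            # Steady solid-body rotation standing in for OGCM output fields
            return -y, x

        def advect_rk4(x, y, dt, n_steps):
            # Classic fourth-order Runge-Kutta tracking of one virtual particle
            for _ in range(n_steps):
                k1x, k1y = velocity(x, y)
                k2x, k2y = velocity(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
                k3x, k3y = velocity(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
                k4x, k4y = velocity(x + dt * k3x, y + dt * k3y)
                x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
                y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0
            return x, y

        # One revolution of the rotating field (period 2*pi) returns ~(1, 0)
        print(advect_rk4(1.0, 0.0, 2 * 3.141592653589793 / 1000, 1000))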

  17. STGSTK: A computer code for predicting multistage axial flow compressor performance by a meanline stage stacking method

    NASA Technical Reports Server (NTRS)

    Steinke, R. J.

    1982-01-01

    A FORTRAN computer code is presented for off-design performance prediction of axial-flow compressors. Stage and compressor performance is obtained by a stage-stacking method that uses representative velocity diagrams at rotor inlet and outlet meanline radii. The code has options for: (1) direct user input or calculation of nondimensional stage characteristics; (2) adjustment of stage characteristics for off-design speed and blade setting angle; (3) adjustment of rotor deviation angle for off-design conditions; and (4) SI or U.S. customary units. Correlations from experimental data are used to model real flow conditions. Calculations are compared with experimental data.
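
    The bookkeeping at the heart of a stage-stacking calculation can be shown in a few lines. This toy sketch (Python; made-up stage pressure ratios and efficiencies, with none of STGSTK's velocity-diagram or off-design corrections) accumulates per-stage characteristics into overall compressor performance:

        def stack_stages(pr_stage, eff_stage, t_in, gamma=1.4):
            # Accumulate per-stage pressure ratio and temperature rise; the
            # actual temperature rise is the ideal (isentropic) rise divided
            # by the stage adiabatic efficiency.
            pr_total, t = 1.0, t_in
            for pr, eta in zip(pr_stage, eff_stage):
                pr_total *= pr
                dt_ideal = t * (pr ** ((gamma - 1.0) / gamma) - 1.0)
                t += dt_ideal / eta
            return pr_total, t

        # Three illustrative stages starting from standard-day inlet temperature
        print(stack_stages([1.30, 1.28, 1.25], [0.88, 0.87, 0.86], 288.15))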

  18. DSP code optimization based on cache

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on the target board than in software simulation during program development, mainly because of the user's improper use and incomplete understanding of the cache-based memory. This paper takes the TI TMS320C6455 DSP as an example, analyzes its two-level internal cache, and summarizes methods of code optimization. The processor can achieve its best performance when these code optimization methods are used. Finally, a specific application to a radar signal processing algorithm is presented. Experimental results show that these optimizations are effective.

  19. Optimizing ITO for incorporation into multilayer thin film stacks for visible and NIR applications

    NASA Astrophysics Data System (ADS)

    Roschuk, Tyler; Taddeo, David; Levita, Zachary; Morrish, Alan; Brown, Douglas

    2017-05-01

    Indium Tin Oxide, ITO, is the industry standard for transparent conductive coatings. As such, the common metrics for characterizing ITO performance are its transmission and conductivity/resistivity (or sheet resistance). In spite of its recurrent use in a broad range of technological applications, the performance of ITO itself is highly variable, depending on the method of deposition and chamber conditions, and a single well-defined set of properties does not exist. This poses particular challenges for the incorporation of ITO in complex optical multilayer stacks while trying to maintain electronic performance. Complicating matters further, ITO suffers increased absorption losses in the NIR, making the ability to incorporate ITO into anti-reflective stacks crucial to optimizing overall optical performance when ITO is used in real-world applications. In this work, we discuss the use of ITO in multilayer thin film stacks for applications from the visible to the NIR. In the NIR, we discuss methods to analyze and fine-tune the film properties to account for, and minimize, losses due to absorption and to optimize the overall transmission of the multilayer stacks. The ability to obtain high transmission while maintaining good electrical properties, specifically low resistivity, is demonstrated. Trade-offs between transmission and conductivity with variation of process parameters are discussed in light of optimizing the performance of the final optical stack and not just with consideration to the ITO film itself.

  20. High Temperature Steam Electrolysis: Demonstration of Improved Long-Term Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. E. O'Brien; X. Zhang; R. C. O'Brien

    2011-11-01

    Long-term performance is an ongoing issue for hydrogen production based on high-temperature steam electrolysis (HTSE). For commercial deployment, solid-oxide electrolysis stacks must achieve high performance with long-term degradation rates of ~0.5% per 1000 hours or lower. Significant progress has been achieved toward this goal over the past few years. This paper will provide details of progress achieved under the Idaho National Laboratory high temperature electrolysis research program. Recent long-term stack tests have achieved high initial performance with degradation rates less than 5%/khr. These tests utilize internally manifolded stacks with electrode-supported cells. The cell material sets are optimized for the electrolysis mode of operation. Details of the cells and stacks will be provided along with details of the test apparatus, procedures, and results.

  1. Novel electrical energy storage system based on reversible solid oxide cells: System design and operating conditions

    NASA Astrophysics Data System (ADS)

    Wendel, C. H.; Kazempoor, P.; Braun, R. J.

    2015-02-01

    Electrical energy storage (EES) is an important component of the future electric grid. Given that no other widely available technology meets all the EES requirements, reversible (or regenerative) solid oxide cells (ReSOCs) working in both fuel cell (power producing) and electrolysis (fuel producing) modes are envisioned as a technology capable of providing highly efficient and cost-effective EES. However, there are still many challenges and questions from cell materials development to system level operation of ReSOCs that should be addressed before widespread application. This paper presents a novel system based on ReSOCs that employ a thermal management strategy of promoting exothermic methanation within the ReSOC cell-stack to provide thermal energy for the endothermic steam/CO2 electrolysis reactions during charging mode (fuel producing). This approach also serves to enhance the energy density of the stored gases. Modeling and parametric analysis of an energy storage concept is performed using a physically based ReSOC stack model coupled with thermodynamic system component models. Results indicate that roundtrip efficiencies greater than 70% can be achieved at intermediate stack temperature (680 °C) and elevated stack pressure (20 bar). The optimal operating condition arises from a tradeoff between stack efficiency and auxiliary power requirements from balance of plant hardware.

  2. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion de-blurring a well-posed problem. With coded exposure, the integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is therefore significant for coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analyzing the relationship between the code length and the number of ones in the code, and by considering the noise effect on code selection with an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time consumed in searching for the optimal code decreases with the presented method. The restored image shows better subjective quality and superior objective evaluation values.
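
    A genetic search of this kind is easy to prototype. The sketch below (Python/NumPy; the flat-spectrum fitness is a common coded-exposure criterion from the literature, not necessarily the improved criterion proposed here, and all GA parameters are illustrative) evolves a binary shutter code:

        import numpy as np

        rng = np.random.default_rng(0)
        L, POP, GEN = 32, 60, 200          # code length, population, generations

        def fitness(code):
            # Flat-spectrum criterion: reward the weakest DFT magnitude so
            # that no temporal frequency is lost to the shutter pattern.
            return np.abs(np.fft.rfft(code)).min()

        pop = rng.integers(0, 2, size=(POP, L))
        for _ in range(GEN):
            order = np.argsort([fitness(c) for c in pop])[::-1]
            elite = pop[order][:POP // 2]             # keep the better half
            children = []
            for _ in range(POP - len(elite)):
                p1, p2 = elite[rng.integers(len(elite), size=2)]
                cut = rng.integers(1, L)              # one-point crossover
                child = np.concatenate([p1[:cut], p2[cut:]])
                child[rng.random(L) < 0.02] ^= 1      # light bit-flip mutation
                children.append(child)
            pop = np.vstack([elite, children])

        best = max(pop, key=fitness)
        print(best, fitness(best))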

  3. Optimal design of a smart post-buckled beam actuator using bat algorithm: simulations and experiments

    NASA Astrophysics Data System (ADS)

    Mallick, Rajnish; Ganguli, Ranjan; Kumar, Ravi

    2017-05-01

    The optimized design of a smart post-buckled beam actuator (PBA) is performed in this study. A smart-material-based piezoceramic stack actuator is used as a prime mover to drive the buckled beam actuator. Piezoceramic actuators are high-force, small-displacement devices; they possess high energy density and have high bandwidth. In this study, bench-top experiments are conducted to investigate the angular tip deflections due to the PBA. A new design of a linear-to-linear motion amplification device (LX-4) is developed to circumvent the small-displacement handicap of piezoceramic stack actuators. LX-4 enhances the piezoceramic actuator's mechanical leverage by a factor of four. The PBA model is based on dynamic elastic stability and is analyzed using the Mathieu-Hill equation. A formal optimization is carried out using a newly developed nature-inspired meta-heuristic algorithm, the bat algorithm (BA). The BA utilizes the echolocation capability of bats. An optimized PBA in conjunction with LX-4 generates end rotations of the order of 15° at the output end. The optimized PBA design incurs less weight and induces large end rotations, which will be useful in the development of various mechanical and aerospace devices, such as helicopter trailing-edge flaps, micro and nano aerial vehicles and other robotic systems.
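
    For orientation, the core update rules of the bat algorithm fit in a short sketch (Python/NumPy; constant loudness and pulse rate are a simplification of the full algorithm, and the sphere objective replaces the actuator model):

        import numpy as np

        rng = np.random.default_rng(1)

        def bat_algorithm(obj, dim, n_bats=20, n_iter=200, fmin=0.0, fmax=2.0):
            # Minimal bat algorithm with constant loudness A and pulse rate r
            # (the full algorithm anneals both over the iterations).
            A, r = 0.7, 0.5
            x = rng.uniform(-5.0, 5.0, (n_bats, dim))
            v = np.zeros((n_bats, dim))
            fit = np.array([obj(b) for b in x])
            best = x[fit.argmin()].copy()
            for _ in range(n_iter):
                for i in range(n_bats):
                    f = fmin + (fmax - fmin) * rng.random()  # frequency tuning
                    v[i] += (x[i] - best) * f                # pull toward best
                    cand = x[i] + v[i]
                    if rng.random() > r:                     # occasional local walk
                        cand = best + 0.01 * rng.standard_normal(dim)
                    if obj(cand) < fit[i] and rng.random() < A:
                        x[i], fit[i] = cand, obj(cand)
                    if fit[i] < obj(best):
                        best = x[i].copy()
            return best, obj(best)

        print(bat_algorithm(lambda z: float(np.sum(z ** 2)), dim=2))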

  4. Hamiltonian approach to slip-stacking dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S. Y.; Ng, K. Y.

    Hamiltonian dynamics has been applied to study the slip-stacking dynamics. The canonical-perturbation method is employed to obtain the second-harmonic correction term in the slip-stacking Hamiltonian. The Hamiltonian approach provides a clear optimal method for choosing the slip-stacking parameter and improving stacking efficiency. The dynamics are applied specifically to the Fermilab Booster-Recycler complex. As a result, the dynamics can also be applied to other accelerator complexes.

  5. Hamiltonian approach to slip-stacking dynamics

    DOE PAGES

    Lee, S. Y.; Ng, K. Y.

    2017-06-29

    Hamiltonian dynamics has been applied to study the slip-stacking dynamics. The canonical-perturbation method is employed to obtain the second-harmonic correction term in the slip-stacking Hamiltonian. The Hamiltonian approach provides a clear optimal method for choosing the slip-stacking parameter and improving stacking efficiency. The dynamics are applied specifically to the Fermilab Booster-Recycler complex. As a result, the dynamics can also be applied to other accelerator complexes.

  6. Design, Modeling and Performance Optimization of a Novel Rotary Piezoelectric Motor

    NASA Technical Reports Server (NTRS)

    Duong, Khanh A.; Garcia, Ephrahim

    1997-01-01

    This work has demonstrated a proof of concept for a torsional inchworm-type motor. The prototype motor has shown that piezoelectric stack actuators can be used for a rotary inchworm motor: the discrete linear motion of piezoelectric stacks can be converted into rotary stepping motion. The stacks, with their high force and displacement output, are suitable actuators for use in a piezoelectric motor. The designed motor is capable of delivering high torque and speed. Critical issues involving the design and operation of piezoelectric motors were studied. The tolerance between the contact shoes and the rotor proved to be very critical to the performance of the motor. Based on the prototype motor, a waveform optimization scheme was proposed and implemented to improve the performance of the motor. The motor was successfully modeled in MATLAB. The model closely represents the behavior of the prototype motor. Using the motor model, the input waveforms were successfully optimized to improve the performance of the motor in terms of speed, torque, power and precision. These optimized waveforms drastically improved the speed of the motor at different frequencies and loading conditions experimentally. The optimized waveforms also increased the level of precision of the motor. The use of the optimized waveform is a break from the traditional use of sinusoidal and square waves as the driving signals. This waveform optimization scheme can be applied to any inchworm motor to improve its performance. The prototype motor in this dissertation, as a proof of concept, was designed to be robust and large. Future motors can be designed to be much smaller and more efficient with lessons learned from the prototype motor.

  7. Fundamental Limits of Delay and Security in Device-to-Device Communication

    DTIC Science & Technology

    2013-01-01

    Systematic MDS (maximum distance separable) codes and random binning strategies that achieve a Pareto optimal delay-reconstruction tradeoff are considered. A coding scheme based on erasure compression and Slepian-Wolf binning is presented and shown to provide a Pareto optimal delay-reconstruction tradeoff. The erasure MD setup is then used to propose a...

  8. Metal stack optimization for low-power and high-density for N7-N5

    NASA Astrophysics Data System (ADS)

    Raghavan, P.; Firouzi, F.; Matti, L.; Debacker, P.; Baert, R.; Sherazi, S. M. Y.; Trivkovic, D.; Gerousis, V.; Dusa, M.; Ryckaert, J.; Tokei, Z.; Verkest, D.; McIntyre, G.; Ronse, K.

    2016-03-01

    One of the key challenges in scaling logic down to N7 and N5 is the requirement of self-aligned multiple patterning for the metal stack. This comes with a large backend cost, and therefore careful stack optimization is required. The various layers in the stack have different purposes, and therefore the choice of pitch and number of layers is critical. Furthermore, at the ultra-scaled dimensions of N7 or N5, the number of patterning options is also much larger, ranging from multiple LE and EUV to SADP/SAQP. The right choice among these is also critical: patterning techniques that use a full grating of wires, like SADP/SAQP, introduce a high level of metal dummies into the design. This implies a large capacitance penalty and therefore large performance and power penalties. This is often mitigated with extra masking strategies. This paper discusses a holistic view of metal stack optimization, from the standard-cell level all the way to routing, and the corresponding trade-offs that exist in this space.

  9. CSNS computing environment Based on OpenStack

    NASA Astrophysics Data System (ADS)

    Li, Yakang; Qi, Fazhi; Chen, Gang; Wang, Yanming; Hong, Jianshu

    2017-10-01

    Cloud computing allows for more flexible configuration of IT resources and optimized hardware utilization, and it can provide computing services according to real need. We are applying this computing mode to the China Spallation Neutron Source (CSNS) computing environment. Firstly, the CSNS experiment and its computing scenarios and requirements are introduced in this paper. Secondly, the design and practice of a cloud computing platform based on OpenStack are demonstrated from the aspects of the cloud computing system framework, network, storage and so on. Thirdly, some improvements we made to OpenStack are discussed further. Finally, the current status of the CSNS cloud computing environment is summarized at the end of this paper.

  10. Interface Optoelectronics Engineering for Mechanically Stacked Tandem Solar Cells Based on Perovskite and Silicon.

    PubMed

    Kanda, Hiroyuki; Uzum, Abdullah; Nishino, Hitoshi; Umeyama, Tomokazu; Imahori, Hiroshi; Ishikawa, Yasuaki; Uraoka, Yukiharu; Ito, Seigo

    2016-12-14

    Engineering of photonics for antireflection, and of electronics for hole extraction using a 2.5 nm thin Au layer, has been performed for two- and four-terminal tandem solar cells using CH3NH3PbI3 perovskite (top cell) and p-type single-crystal silicon (c-Si) (bottom cell) by mechanical stacking. Highly transparent connection multilayers of evaporated Au and sputtered ITO films were fabricated at the interface to form a point-contact tunneling junction between the rough perovskite and flat silicon solar cells. The mechanically stacked tandem solar cell with an optimized tunneling junction structure was ⟨perovskite for the top cell/Au (2.5 nm)/ITO (154 nm) stacked on ITO (108 nm)/c-Si for the bottom cell⟩. Best efficiencies of 13.7% and 14.4% were confirmed for the two- and four-terminal devices, respectively.

  11. Thermal and Power Challenges in High Performance Computing Systems

    NASA Astrophysics Data System (ADS)

    Natarajan, Venkat; Deshpande, Anand; Solanki, Sudarshan; Chandrasekhar, Arun

    2009-05-01

    This paper provides an overview of the thermal and power challenges in emerging high performance computing platforms. The advent of new sophisticated applications in highly diverse areas such as health, education, finance, entertainment, etc. is driving the platform and device requirements for future systems. The key ingredients of future platforms are vertically integrated (3D) die-stacked devices which provide the required performance characteristics with the associated form factor advantages. Two of the major challenges to the design of through silicon via (TSV) based 3D stacked technologies are (i) effective thermal management and (ii) efficient power delivery mechanisms. Some of the key challenges that are articulated in this paper include hot-spot superposition and intensification in a 3D stack, design/optimization of thermal through silicon vias (TTSVs), non-uniform power loading of multi-die stacks, efficient on-chip power delivery, minimization of electrical hotspots etc.

  12. Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.

    PubMed

    Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang

    2018-07-01

    Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning to hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.

  13. Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder

    NASA Astrophysics Data System (ADS)

    Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang

    2018-07-01

    Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed Self-Supervised Video Hashing (SSVH), that is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos; and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary autoencoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world datasets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the currently best performance on the task of unsupervised video retrieval.

  14. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  15. Numerical optimization of perturbative coils for tokamaks

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel; Park, Jong-Kyu; Logan, Nikolas; Boozer, Allen; NSTX-U Research Team

    2014-10-01

    Numerical optimization of coils which apply three dimensional (3D) perturbative fields to tokamaks is presented. The application of perturbative 3D magnetic fields in tokamaks is now commonplace for control of error fields, resistive wall modes, resonant field drive, and neoclassical toroidal viscosity (NTV) torques. The design of such systems has focused on control of toroidal mode number, with coil shapes based on simple window-pane designs. In this work, a numerical optimization suite based on the STELLOPT 3D equilibrium optimization code is presented. The new code, IPECOPT, replaces the VMEC equilibrium code with the IPEC perturbed equilibrium code, and targets NTV torque by coupling to the PENT code. Fixed boundary optimizations of the 3D fields for the NSTX-U experiment are underway. Initial results suggest NTV torques can be driven by normal field spectrums which are not pitch-resonant with the magnetic field lines. Work has focused on driving core torque with n = 1 and edge torques with n = 3 fields. Optimizations of the coil currents for the planned NSTX-U NCC coils highlight the code's free boundary capability. This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy.

  16. Queue and stack sorting algorithm optimization and performance analysis

    NASA Astrophysics Data System (ADS)

    Qian, Mingzhu; Wang, Xiaobao

    2018-04-01

    The sorting algorithm is one of the basic operations in a variety of software development tasks, and data structures courses cover all kinds of sorting algorithms. The performance of the sorting algorithm is directly related to the efficiency of the software. Much excellent research continues to optimize sorting algorithms for ever-better efficiency. Here the authors further study a sorting algorithm that combines a queue with stacks. The algorithm mainly exploits alternating operations on the storage properties of the queue and the stacks, thus avoiding the large number of exchange or move operations needed in traditional sorts. Building on existing work, the research, improvement and optimization focus on time complexity. The experimental results show that the improvement is effective; the time complexity, space complexity and stability of the algorithm are studied correspondingly. The improved and optimized algorithm is more practical.
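
    One classical member of this family, illustrating how alternating push/pop operations replace element swaps, is sorting a stack with a single auxiliary stack (Python sketch; this is a textbook pattern, not necessarily the authors' exact algorithm):

        def sort_stack(items):
            # Sort a stack using one auxiliary stack: move each element into
            # its ordered place, shuttling larger elements back as needed.
            # O(n^2) pops/pushes, but no random-access swaps are required.
            source, aux = list(items), []
            while source:
                value = source.pop()
                while aux and aux[-1] > value:
                    source.append(aux.pop())
                aux.append(value)
            return aux  # ascending from bottom to top

        print(sort_stack([3, 1, 4, 1, 5, 9, 2, 6]))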

  17. Hadoop Oriented Smart Cities Architecture.

    PubMed

    Diaconita, Vlad; Bologa, Ana-Ramona; Bologa, Razvan

    2018-04-12

    A smart city implies a consistent use of technology for the benefit of the community. As the city develops over time, components and subsystems such as smart grids, smart water management, smart traffic and transportation systems, smart waste management systems, smart security systems, or e-governance are added. These components ingest and generate a multitude of structured, semi-structured or unstructured data that may be processed using a variety of algorithms in batches, micro batches or in real-time. The ICT architecture must be able to handle the increased storage and processing needs. When vertical scaling is no longer a viable solution, Hadoop can offer efficient linear horizontal scaling, solving storage, processing, and data analyses problems in many ways. This enables architects and developers to choose a stack according to their needs and skill-levels. In this paper, we propose a Hadoop-based architectural stack that can provide the ICT backbone for efficiently managing a smart city. On the one hand, Hadoop, together with Spark and the plethora of NoSQL databases and accompanying Apache projects, is a mature ecosystem. This is one of the reasons why it is an attractive option for a Smart City architecture. On the other hand, it is also very dynamic; things can change very quickly, and many new frameworks, products and options continue to emerge as others decline. To construct an optimized, modern architecture, we discuss and compare various products and engines based on a process that takes into consideration how the products perform and scale, as well as the reusability of the code, innovations, features, and support and interest in online communities.

  18. Hadoop Oriented Smart Cities Architecture

    PubMed Central

    Bologa, Ana-Ramona; Bologa, Razvan

    2018-01-01

    A smart city implies a consistent use of technology for the benefit of the community. As the city develops over time, components and subsystems such as smart grids, smart water management, smart traffic and transportation systems, smart waste management systems, smart security systems, or e-governance are added. These components ingest and generate a multitude of structured, semi-structured or unstructured data that may be processed using a variety of algorithms in batches, micro batches or in real-time. The ICT architecture must be able to handle the increased storage and processing needs. When vertical scaling is no longer a viable solution, Hadoop can offer efficient linear horizontal scaling, solving storage, processing, and data analyses problems in many ways. This enables architects and developers to choose a stack according to their needs and skill-levels. In this paper, we propose a Hadoop-based architectural stack that can provide the ICT backbone for efficiently managing a smart city. On the one hand, Hadoop, together with Spark and the plethora of NoSQL databases and accompanying Apache projects, is a mature ecosystem. This is one of the reasons why it is an attractive option for a Smart City architecture. On the other hand, it is also very dynamic; things can change very quickly, and many new frameworks, products and options continue to emerge as others decline. To construct an optimized, modern architecture, we discuss and compare various products and engines based on a process that takes into consideration how the products perform and scale, as well as the reusability of the code, innovations, features, and support and interest in online communities. PMID:29649172

  19. The `advanced DIR-MCFC development' project, an overview

    NASA Astrophysics Data System (ADS)

    Kortbeek, P. J.; Ottervanger, R.

    An overview is given of the approach and mid-term status of the joint European `Advanced DIR-MCFC Development' project, in which BCN, BG plc, GDF, ECN, Stork, Schelde and Sydkraft co-operate. Hospitals are identified as an attractive initial market for cogeneration direct internal reforming-molten carbonate fuel cell (DIR-MCFC) systems in the 400 kWe size class. Innovative system and stack design concepts are being developed for this application. The `SMARTER' system, based on DIR stacks, combines high electric efficiency and a wide operational window with optimal system simplicity and low cost.

  20. Optimal bit allocation for hybrid scalable/multiple-description video transmission over wireless channels

    NASA Astrophysics Data System (ADS)

    Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.

    2006-01-01

    In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding and, product code Reed Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.

  1. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.

  2. Predicting the Performance of an Axial-Flow Compressor

    NASA Technical Reports Server (NTRS)

    Steinke, R. J.

    1986-01-01

    Stage-stacking computer code (STGSTK) developed for predicting off-design performance of multi-stage axial-flow compressors. Code uses meanline stage-stacking method. Stage and cumulative compressor performance calculated from representative meanline velocity diagrams located at rotor inlet and outlet meanline radii. Numerous options available within code. Code developed so users can modify correlations to suit their needs.

  3. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of use of automatically-generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.
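
    The reverse-mode chain rule that ADJIFOR implements at FORTRAN scale can be demonstrated with a miniature tape (Python; our own toy, unrelated to the CFL3D codes; the ratio objective merely mimics a quantity like lift-to-drag):

        class Var:
            # Minimal reverse-mode AD node: a value plus links to parents
            # carrying the local partial derivatives.
            def __init__(self, value, parents=()):
                self.value, self.parents, self.grad = value, parents, 0.0

            def __add__(self, other):
                return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

            def __mul__(self, other):
                return Var(self.value * other.value,
                           ((self, other.value), (other, self.value)))

            def __truediv__(self, other):
                return Var(self.value / other.value,
                           ((self, 1.0 / other.value),
                            (other, -self.value / other.value ** 2)))

        def backward(output):
            # Reverse sweep: topologically order the tape, then push adjoints
            # from the output back to every input in one pass.
            order, seen = [], set()
            def visit(v):
                if id(v) not in seen:
                    seen.add(id(v))
                    for p, _ in v.parents:
                        visit(p)
                    order.append(v)
            visit(output)
            output.grad = 1.0
            for v in reversed(order):
                for p, local in v.parents:
                    p.grad += v.grad * local

        # Toy ratio objective (in the spirit of lift-to-drag): f = ab/(a+b)
        a, b = Var(2.0), Var(3.0)
        f = (a * b) / (a + b)
        backward(f)
        print(f.value, a.grad, b.grad)   # 1.2, 0.36, 0.16 -- both in one sweep

    The point of the reverse sweep is that the whole gradient costs roughly one extra pass regardless of the number of design variables, which is why the abstract's gradients come at the price of only 7 to 20 function evaluations.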

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clements, Abraham Anthony

    EPOXY is an LLVM-based compiler that applies security protections to bare-metal programs on ARM Cortex-M series micro-controllers. These include privilege overlaying, wherein operations requiring privileged execution are identified and only these operations execute in privileged mode. It also applies code integrity, control-flow hijacking defenses, stack protections, and fine-grained randomization schemes. All of its protections work within the constraints of bare-metal systems.

  5. Design trade-offs among shunt current, pumping loss and compactness in the piping system of a multi-stack vanadium flow battery

    NASA Astrophysics Data System (ADS)

    Ye, Qiang; Hu, Jing; Cheng, Ping; Ma, Zhiqi

    2015-11-01

    The trade-off between shunt current loss and pumping loss is a major challenge in the design of the electrolyte piping network in a flow battery system. It is generally recognized that longer and thinner ducts are beneficial for reducing shunt current but detrimental to minimizing pumping power. Based on the developed analog circuit model and the flow network model, we make case studies of multi-stack vanadium flow battery piping systems and demonstrate that both the shunt current and the electrolyte flow resistance can be simultaneously minimized by using longer and thicker ducts in the piping network. However, extremely long and/or thick ducts lead to a bulky system and may be prohibited by the stack structure. Accordingly, the intrinsic design trade-off is between system efficiency and compactness. Since multi-stack configurations bring both flexibility and complexity to the design process, we perform systematic comparisons among representative piping system designs to illustrate the complicated trade-offs among numerous parameters, including stack number, intra-stack channel resistance and inter-stack pipe resistance. As the final design depends on various technical and economic requirements, this paper aims to provide guidelines rather than solutions for designers to locate the optimal trade-off points according to their specific cases.
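
    The ladder-network character of the shunt-current problem can be made concrete with a small node-voltage solver (Python/NumPy; the topology and resistance values are illustrative inventions, far simpler than the paper's multi-stack network):

        import numpy as np

        def shunt_currents(n_cells=10, emf=1.4, r_cell=0.05, r_ch=50.0, r_man=5.0):
            # Each inter-cell node leaks into a shared electrolyte manifold
            # through a channel resistance r_ch; manifold segments have
            # resistance r_man. Cells are modeled as EMF + series resistance
            # via their Norton equivalents, then G V = I is solved.
            n_nodes = n_cells + 1                  # stack nodes 0..n_cells
            N = 2 * n_nodes                        # plus one manifold node each
            G, I = np.zeros((N, N)), np.zeros(N)

            def add_res(a, b, r):
                g = 1.0 / r
                G[a, a] += g; G[b, b] += g
                G[a, b] -= g; G[b, a] -= g

            for k in range(n_cells):               # cell k between nodes k, k+1
                add_res(k, k + 1, r_cell)          # Norton equivalent of source
                I[k] -= emf / r_cell
                I[k + 1] += emf / r_cell
            for k in range(n_nodes):               # channel to manifold node
                add_res(k, n_nodes + k, r_ch)
            for k in range(n_nodes - 1):           # manifold chain
                add_res(n_nodes + k, n_nodes + k + 1, r_man)

            V = np.zeros(N)                        # ground node 0, solve the rest
            V[1:] = np.linalg.solve(G[1:, 1:], I[1:])
            return (V[:n_nodes] - V[n_nodes:]) / r_ch   # shunt current per channel

        print(np.round(shunt_currents(), 4))

    Making the channel ducts longer and thinner raises r_ch and suppresses these currents, while the hydraulic analogue of the same network raises the pumping loss, which is exactly the trade-off the paper analyzes.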

  6. Energy hyperspace for stacking interaction in AU/AU dinucleotide step: Dispersion-corrected density functional theory study.

    PubMed

    Mukherjee, Sanchita; Kailasam, Senthilkumar; Bansal, Manju; Bhattacharyya, Dhananjay

    2014-01-01

    Double helical structures of DNA and RNA are mostly determined by base pair stacking interactions, which give them their base sequence-directed features, such as small roll values for the purine-pyrimidine steps. Earlier attempts to characterize stacking interactions were mostly restricted to calculations on fiber diffraction geometries or on structures optimized by ab initio calculations, which lack the variation in geometry needed to comment on the rather unusual large roll values observed in the AU/AU base pair step in crystal structures of RNA double helices. We have generated a stacking energy hyperspace by modeling geometries with variations along the important degrees of freedom, roll and slide, which were chosen via statistical analysis as maximally sequence dependent. Corresponding energy contours were constructed by several quantum chemical methods including dispersion corrections. This analysis established the most suitable methods for stacked base pair systems, despite the limitation that the number of atoms in a base pair step precludes employing a very high level of theory. All the methods predict a negative roll value and near-zero slide to be most favorable for the purine-pyrimidine steps, in agreement with Calladine's steric clash based rule. Successive base pairs in RNA are always linked by a sugar-phosphate backbone with C3'-endo sugars, and this demands a C1'-C1' distance of about 5.4 Å along the chains. Adding an energy penalty term for deviation of the C1'-C1' distance from this mean value to the recent DFT-D functionals, specifically ωB97X-D, appears to predict a reliable energy contour for the AU/AU step. Such a distance-based penalty also improves the energy contours for the other purine-pyrimidine sequences. © 2013 Wiley Periodicals, Inc. Biopolymers 101: 107-120, 2014.
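
    In schematic form, the penalized energy described above is the DFT-D stacking energy plus a distance restraint (our notation; the harmonic form and the force constant k are illustrative assumptions, since the abstract specifies only a penalty for deviation of the C1'-C1' distance from its mean value):

      E_{\mathrm{total}}(\mathrm{roll},\mathrm{slide}) \;=\;
        E_{\text{DFT-D}}(\mathrm{roll},\mathrm{slide})
        \;+\; k\,\bigl(d_{\mathrm{C1'\text{-}C1'}} - \bar{d}\bigr)^{2},
      \qquad \bar{d} \approx 5.4\ \text{\AA}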

  7. Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III

    1996-01-01

    Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.

  8. Method for using global optimization to the estimation of surface-consistent residual statics

    DOEpatents

    Reister, David B.; Barhen, Jacob; Oblow, Edward M.

    2001-01-01

    An efficient method for generating residual statics corrections to compensate for surface-consistent static time shifts in stacked seismic traces. The method includes a step of framing the residual static corrections as a global optimization problem in a parameter space. The method also includes decoupling the global optimization problem involving all seismic traces into several one-dimensional problems. The method further utilizes a Stochastic Pijavskij Tunneling search to eliminate regions in the parameter space where a global minimum is unlikely to exist so that the global minimum may be quickly discovered. The method finds the residual statics corrections by maximizing the total stack power. The stack power is a measure of seismic energy transferred from energy sources to receivers.
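
    The objective being maximized can be sketched directly: stack power is the energy of the sum of the statically shifted traces, and coherent alignment maximizes it. A minimal illustration (hypothetical Python with NumPy; the patented Stochastic Pijavskij Tunneling search itself is not shown):

      import numpy as np

      def stack_power(traces, shifts):
          """Total power of the stacked trace after per-trace static shifts.

          traces: (n_traces, n_samples) array of seismic traces.
          shifts: integer sample shifts, one per trace (the optimization variables).
          """
          stacked = np.zeros(traces.shape[1])
          for trace, s in zip(traces, shifts):
              stacked += np.roll(trace, -s)     # align trace by its static shift
          return float(np.sum(stacked ** 2))    # energy of the coherent stack

      # Coherently aligned traces maximize stack power:
      rng = np.random.default_rng(0)
      wavelet = np.sin(np.linspace(0, 3 * np.pi, 64))
      true_shifts = rng.integers(-5, 6, size=8)
      traces = np.array([np.roll(wavelet, s) for s in true_shifts])
      print(stack_power(traces, np.zeros(8, int)) < stack_power(traces, true_shifts))  # True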

  9. Simulation and optimization performance of GaAs/GaAs0.5Sb0.5/GaSb mechanically stacked tandem solar cells

    NASA Astrophysics Data System (ADS)

    Tayubi, Y. R.; Suhandi, A.; Samsudin, A.; Arifin, P.; Supriyatman

    2018-05-01

    Different approaches have been pursued in order to reach higher solar cell efficiencies, and concepts for multilayer solar cells have been developed. These can be realised if multiple individual single-junction solar cells with different, suitably chosen band gaps are connected in series in multi-junction solar cells. In our work, we have simulated and optimized mechanically stacked solar cells using computer simulation and predicted their maximum performance. The structures of the solar cells are based on single-junction GaAs, GaAs0.5Sb0.5 and GaSb cells. We have simulated each cell individually and extracted its optimal parameters (layer thickness, carrier concentration, recombination velocity, etc.); we also calculated the efficiency of each optimized cell by separation of the solar spectrum into the bands where the cell is sensitive to absorption. The optimal values of conversion efficiency obtained for the three individual solar cells and the GaAs/GaAs0.5Sb0.5/GaSb tandem solar cell are: η = 19.76% for the GaAs solar cell, η = 8.42% for the GaAs0.5Sb0.5 solar cell, η = 4.84% for the GaSb solar cell and η = 33.02% for the GaAs/GaAs0.5Sb0.5/GaSb tandem solar cell.

  10. Theoretical and Monte Carlo optimization of a stacked three-layer flat-panel x-ray imager for applications in multi-spectral diagnostic medical imaging

    NASA Astrophysics Data System (ADS)

    Lopez Maurino, Sebastian; Badano, Aldo; Cunningham, Ian A.; Karim, Karim S.

    2016-03-01

    We propose a new design of a stacked three-layer flat-panel x-ray detector for dual-energy (DE) imaging. Each layer consists of its own scintillator of individual thickness and an underlying thin-film-transistor-based flat-panel. Three images are obtained simultaneously in the detector during the same x-ray exposure, thereby eliminating any motion artifacts. The detector operation is two-fold: a conventional radiography image can be obtained by combining all three layers' images, while a DE subtraction image can be obtained from the front and back layers' images, where the middle layer acts as a mid-filter that helps achieve spectral separation. We proceed to optimize the detector parameters for two sample imaging tasks that could particularly benefit from this new detector by obtaining the best possible signal to noise ratio per root entrance exposure using well-established theoretical models adapted to fit our new design. These results are compared to a conventional DE temporal subtraction detector and a single-shot DE subtraction detector with a copper mid-filter, both of which underwent the same theoretical optimization. The findings are then validated using advanced Monte Carlo simulations for all optimized detector setups. Given the performance expected from initial results and the recent decrease in price for digital x-ray detectors, the simplicity of the three-layer stacked imager approach appears promising to usher in a new generation of multi-spectral digital x-ray diagnostics.

  11. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide-Semiconductor Image Sensors.

    PubMed

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-05-02

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components.

  12. Finite Element Analysis of Film Stack Architecture for Complementary Metal-Oxide–Semiconductor Image Sensors

    PubMed Central

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-01-01

    Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components. PMID:28468324

  13. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a noisy memoryless channel; the method centers on the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for no channel error.
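
    The bit-assignment step can be illustrated with a common greedy stand-in for the steepest-descent allocation: repeatedly give the next bit to the coefficient whose distortion would drop the most, assuming the textbook high-rate model D_i(b) ≈ var_i · 2^(−2b). A sketch (hypothetical Python, not the paper's algorithm):

      import numpy as np

      def allocate_bits(variances, total_bits):
          # Give each successive bit to the coefficient whose quantizer
          # distortion, modeled as var * 2**(-2b), would drop the most.
          bits = np.zeros(len(variances), dtype=int)
          for _ in range(total_bits):
              gain = variances * (2.0 ** (-2.0 * bits) - 2.0 ** (-2.0 * (bits + 1)))
              bits[np.argmax(gain)] += 1
          return bits

      variances = 100.0 * 0.5 ** np.arange(8)   # decaying DCT coefficient variances
      print(allocate_bits(variances, 16))       # high-variance coefficients get more bits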

  14. Optimal aggregation of binary classifiers for multiclass cancer diagnosis using gene expression profiles.

    PubMed

    Yukinawa, Naoto; Oba, Shigeyuki; Kato, Kikuya; Ishii, Shin

    2009-01-01

    Multiclass classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. There have been many studies of aggregating binary classifiers to construct a multiclass classifier based on one-versus-the-rest (1R), one-versus-one (11), or other coding strategies, as well as some comparison studies between them. However, the studies found that the best coding depends on each situation. Therefore, a new problem, which we call the "optimal coding problem," has arisen: how can we determine which coding is the optimal one in each situation? To approach this optimal coding problem, we propose a novel framework for constructing a multiclass classifier, in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. Although there is no a priori answer to the optimal coding problem, our weight tuning method can be a consistent answer to the problem. We apply this method to various classification problems including a synthesized data set and some cancer diagnosis data sets from gene expression profiling. The results demonstrate that, in most situations, our method can improve classification accuracy over simple voting heuristics and is better than or comparable to state-of-the-art multiclass predictors.
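
    The aggregation idea can be sketched as a weighted sum of binary decision scores, one weight per base classifier; the paper tunes these weights from the observed data, whereas the sketch below (hypothetical Python using scikit-learn for the base learners) simply fixes them at one:

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.linear_model import LogisticRegression

      X, y = load_iris(return_X_y=True)
      classes = np.unique(y)

      # One binary classifier per class (1R coding); each gets a weight that
      # the paper would tune on observed data (fixed to 1.0 in this sketch).
      clfs = [LogisticRegression(max_iter=1000).fit(X, (y == c).astype(int))
              for c in classes]
      weights = np.ones(len(classes))

      def predict(X_new):
          # Weighted sum of binary decision scores; argmax picks the class.
          scores = np.column_stack([w * clf.decision_function(X_new)
                                    for w, clf in zip(weights, clfs)])
          return classes[np.argmax(scores, axis=1)]

      print((predict(X) == y).mean())   # resubstitution accuracy of the aggregate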

  15. Stacked dielectric elastomer actuator (SDEA): casting process, modeling and active vibration isolation

    NASA Astrophysics Data System (ADS)

    Li, Zhuoyuan; Sheng, Meiping; Wang, Minqing; Dong, Pengfei; Li, Bo; Chen, Hualing

    2018-07-01

    In this paper, a novel fabrication process for stacked dielectric elastomer actuators (SDEAs) is developed based on a casting process and elastomeric electrodes. The SDEA so fabricated offers the advantages of homogeneous and reproducible properties as well as little performance degradation after one year of use. A coupling model of the SDEA is established that takes the elastomeric electrode into consideration, and the calculated results agree with the experiments. Based on the model, we obtain a method to optimize the SDEA's parameters. Finally, the SDEA is used as an isolator in an active vibration isolation system to verify its feasibility in dynamic applications, and the experimental results show great prospects for the SDEA in such applications.

  16. Nonsequential modeling of laser diode stacks using Zemax: simulation, optimization, and experimental validation.

    PubMed

    Coluccelli, Nicola

    2010-08-01

    The modeling of a real laser diode stack in the Zemax ray-tracing software operating in nonsequential mode is reported. The implementation of the model is presented together with the geometric and optical parameters to be adjusted to calibrate the model and to match the simulated intensity irradiance profiles with the experimental profiles. The calibration of the model is based on a near-field and a far-field measurement. The validation of the model has been accomplished by comparing the simulated and experimental transverse irradiance profiles at different positions along the caustic formed by a lens. Spot sizes and waist location are predicted with a maximum error below 6%.

  17. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at the source-code level is also presented.
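
    Constant folding is a simple example of an architecture-independent optimization performed at the source level, of the kind such a universal optimizer would apply. A minimal sketch over a Python AST (illustrative only; the survey names no particular implementation):

      import ast

      class ConstantFolder(ast.NodeTransformer):
          """Fold constant subexpressions bottom-up in the source-level AST."""
          def visit_BinOp(self, node):
              self.generic_visit(node)              # fold children first
              if (isinstance(node.left, ast.Constant)
                      and isinstance(node.right, ast.Constant)):
                  expr = ast.fix_missing_locations(ast.Expression(body=node))
                  try:
                      value = eval(compile(expr, "<fold>", "eval"))
                  except Exception:
                      return node                   # e.g. 1/0: leave to runtime
                  return ast.copy_location(ast.Constant(value), node)
              return node

      tree = ConstantFolder().visit(ast.parse("y = 2 * 3 + x"))
      print(ast.unparse(ast.fix_missing_locations(tree)))   # y = 6 + x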

  18. Finite element analysis of multilayer DEAP stack-actuators

    NASA Astrophysics Data System (ADS)

    Kuhring, Stefan; Uhlenbusch, Dominik; Hoffstadt, Thorben; Maas, Jürgen

    2015-04-01

    Dielectric elastomers (DE) are thin polymer films belonging to the class of electroactive polymers (EAP). They are coated with compliant and conductive electrodes on each side, which enables them to perform a relatively large deformation with considerable force generation under the influence of an electric field. Because the realization of high electric fields with a limited voltage level requires single-layer polymer films to be very thin, novel multilayer actuators are utilized to increase the absolute displacement and force. In a multilayer stack-actuator, many actuator films are mechanically stacked in series and electrically connected in parallel. Because there are different ways to design such a stack-actuator, this contribution considers an optimization of several design parameters using finite element analysis (FEA), whereby the behavior and the actuation of a multilayer dielectric electroactive polymer (DEAP) stack-actuator can be improved. To describe the material behavior, different material models are first compared and the necessary material parameters are identified by experiments. Furthermore, a FEA model of a DEAP film is presented, which is expanded to a multilayer DEAP stack-actuator model. Finally, the results of the FEA are discussed and conclusions for design rules of optimized stack-actuators are outlined.

  19. FET. Exhaust duct and stack. Plan, elevation, foundation, details. Ralph ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    FET. Exhaust duct and stack. Plan, elevation, foundation, details. Ralph M. Parsons 1480-10 ANP/GE-5-716-S-3. Date: February 1959. Approved by INEEL Classification Office for public release. INEEL index code no. 036-0716-00-693-107474 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  20. 40 CFR 75.53 - Monitoring plan.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... are pre-combustion, post-combustion, or integral to the combustion process; control equipment code... fuel flow-to-load test in section 2.1.7 of appendix D to this part is used: (A) The upper and lower... and applied to the hourly flow rate data: (A) Stack or duct width at the test location, ft; (B) Stack...

  1. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.
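
    The stack algorithm itself is compact: keep an ordered stack of partial paths through the code tree, always extend the most promising one, and stop at the first full-length path popped. A sketch (hypothetical Python; the repetition "code", branch metric, and bias below are toy stand-ins for a real convolutional code and the Fano metric):

      import heapq

      def stack_decode(received, encode_branch, depth):
          """Stack algorithm: repeatedly extend the best partial path.
          encode_branch(path, bit) -> channel symbols for that extension
          (two symbols per branch here, i.e. a rate-1/2 toy code)."""
          stack = [(0.0, ())]                 # (negated path metric, path)
          while stack:
              neg_metric, path = heapq.heappop(stack)
              if len(path) == depth:
                  return path                 # first full path off the stack wins
              for bit in (0, 1):
                  symbols = encode_branch(path, bit)
                  rx = received[2 * len(path): 2 * len(path) + len(symbols)]
                  # Fano-like branch metric: reward matches, punish mismatches,
                  # with a per-symbol bias so shorter paths are not favored.
                  score = sum(0.5 if s == r else -4.5 for s, r in zip(symbols, rx))
                  heapq.heappush(stack, (neg_metric - score, path + (bit,)))
          return None

      # Toy rate-1/2 "code": each input bit is simply sent twice.
      repeat2 = lambda path, bit: (bit, bit)
      print(stack_decode((1, 1, 0, 0, 1, 1), repeat2, depth=3))   # (1, 0, 1)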

  2. Optimized atom position and coefficient coding for matching pursuit-based image compression.

    PubMed

    Shoa, Alireza; Shirani, Shahram

    2009-12-01

    In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.

  3. Towards a supported common NEAMS software stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormac Garvey

    2012-04-01

    The NEAMS IPSCs are developing multidimensional, multiphysics, multiscale simulation codes based on first principles that will be capable of predicting all aspects of current and future nuclear reactor systems. This new breed of simulation codes will include rigorous verification, validation and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of nuclear reactors and will also contribute to a speedier process in the acquisition of licenses from the NRC for new reactor designs. Due to the high resolution of the models, the complexity of the physics and the added computational resources to quantify the accuracy/quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out the production simulation runs.

  4. Atomistic structures of nano-engineered SiC and radiation-induced amorphization resistance

    NASA Astrophysics Data System (ADS)

    Imada, Kenta; Ishimaru, Manabu; Sato, Kazuhisa; Xue, Haizhou; Zhang, Yanwen; Shannon, Steven; Weber, William J.

    2015-10-01

    Nano-engineered 3C-SiC thin films, which possess columnar structures with high-density stacking faults and twins, were irradiated with 2 MeV Si ions at cryogenic and room temperatures. From cross-sectional transmission electron microscopy observations in combination with Monte Carlo simulations based on the Stopping and Range of Ions in Matter code, it was found that their amorphization resistance is six times greater than that of bulk crystalline SiC at room temperature. High-angle bright-field images taken by spherical aberration corrected scanning transmission electron microscopy revealed that the distortion of atomic configurations is localized near the stacking faults. The resultant strain field probably contributes to the enhancement of the radiation tolerance of this material.

  5. Numerical study on AC loss reduction of stacked HTS tapes by optimal design of flux diverter

    NASA Astrophysics Data System (ADS)

    Liu, Guole; Zhang, Guomin; Jing, Liwei; Yu, Hui

    2017-12-01

    High temperature superconducting (HTS) coils are key parts of many AC applications, such as generators, superconducting magnetic energy storage and transformers. AC loss reduction in HTS coils is essential for the commercialization of these HTS devices. Magnetic material is generally used as a flux diverter in an effort to reduce the AC loss in HTS coils. To achieve the greatest reduction in the AC loss of the coils, the flux diverter should be made of a material with low loss and high saturation magnetic flux density, and optimization of the geometric size and location of the flux diverter is required. In this paper, we chose Ni-alloy as the flux diverter, which can be processed into a specific shape and size. The influence of the shape and location of the flux diverter on the AC loss characteristics of stacked (RE)BCO tapes is investigated by use of a finite element method. Taking both the AC loss of the (RE)BCO coils and the ferromagnetic loss of the flux diverter into account, the optimal geometry of the flux diverter is obtained. It is found that when the applied current is at half the value of the critical current, the total loss of the HTS stack with the optimal flux diverter is only 18% of the original loss of the HTS stack without the flux diverter. In addition, the effect of the flux diverter on the critical current of the (RE)BCO stack is investigated.

  6. Improved Durability of SOEC Stacks for High Temperature Electrolysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James E. O'Brien; Robert C. O'Brien; Xiaoyu Zhang

    2013-01-01

    High temperature steam electrolysis is a promising technology for efficient and sustainable large-scale hydrogen production. Solid oxide electrolysis cells (SOECs) are able to utilize high temperature heat and electric power from advanced high-temperature nuclear reactors or renewable sources to generate carbon-free hydrogen at large scale. However, the long-term durability of SOECs needs to be improved significantly before commercialization of this technology can be realized. A degradation rate of 1%/khr or lower is proposed as a threshold value for commercialization of this technology. Solid oxide electrolysis stack tests have been conducted at Idaho National Laboratory to demonstrate recent improvements in the long-term durability of SOECs. Electrolyte-supported and electrode-supported SOEC stacks were provided by Ceramatec Inc. and Materials and Systems Research Inc. (MSRI), respectively, for these tests. Long-term durability tests were generally operated for a duration of 1000 hours or more. Stack tests based on technologies developed at Ceramatec and MSRI have shown significant improvement in durability in the electrolysis mode. Long-term degradation rates of 3.2%/khr and 4.6%/khr were observed for MSRI and Ceramatec stacks, respectively. One recent Ceramatec stack even showed negative degradation (performance improvement) over 1900 hours of operation. Optimization of electrode materials, interconnect coatings, and electrolyte-electrode interface microstructures contributes to the better durability of SOEC stacks.

  7. IET exhaust gas stack. Section, west elevation, foundation plan, access ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    IET exhaust gas stack. Section, west elevation, foundation plan, access ladder, airplane warning light. Ralph M. Parsons 902-5-ANP-712-S 433. Date: May 1954. Approved by INEEL Classification Office for public release. INEEL index code no. 035-0712-60-693-106984 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  8. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    PubMed Central

    Bosse, Stefan

    2015-01-01

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550
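
    The zero-operand instruction format mentioned above means every instruction takes its operands from, and leaves its result on, an operand stack, which is what keeps the agent code small. A minimal interpreter sketch (hypothetical Python; the platform's actual instruction set is not given in this abstract):

      # Minimal zero-operand (stack-machine) interpreter: no instruction
      # carries operand fields; everything flows through the operand stack.
      def run(program, data=()):
          stack = list(data)
          for op in program:
              if isinstance(op, int):
                  stack.append(op)               # literal push
              elif op == "add":
                  b, a = stack.pop(), stack.pop()
                  stack.append(a + b)
              elif op == "mul":
                  b, a = stack.pop(), stack.pop()
                  stack.append(a * b)
              elif op == "dup":
                  stack.append(stack[-1])        # duplicate top of stack
              else:
                  raise ValueError(f"unknown opcode {op!r}")
          return stack

      # (3 + 4) * (3 + 4), computed with no operand fields in any instruction:
      print(run([3, 4, "add", "dup", "mul"]))    # [49]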

  9. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    PubMed

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  10. Cell module and fuel conditioner

    NASA Technical Reports Server (NTRS)

    Hoover, D. Q., Jr.

    1980-01-01

    The computer code for the detailed analytical model of the MK-2 stacks is described. An ERC proprietary matrix is incorporated in the stacks. The mechanical behavior of the stack during thermal cycles under compression was determined. A 5 cell stack of the MK-2 design was fabricated and tested. Designs for the next three stacks were selected and component fabrication initiated. A 3 cell stack which verified the use of wet assembly and a new acid fill procedure were fabricated and tested. Components for the 2 kW test facility were received or fabricated and construction of the facility is underway. The definition of fuel and water is used in a study of the fuel conditioning subsystem. Kinetic data on several catalysts, both crushed and pellets, was obtained in the differential reactor. A preliminary definition of the equipment requirements for treating tap and recovered water was developed.

  11. Deriving field-based species sensitivity distributions (f-SSDs) from stacked species distribution models (S-SDMs).

    PubMed

    Schipper, Aafke M; Posthuma, Leo; de Zwart, Dick; Huijbregts, Mark A J

    2014-12-16

    Quantitative relationships between species richness and single environmental factors, also called species sensitivity distributions (SSDs), are helpful to understand and predict biodiversity patterns, identify environmental management options and set environmental quality standards. However, species richness is typically dependent on a variety of environmental factors, implying that it is not straightforward to quantify SSDs from field monitoring data. Here, we present a novel and flexible approach to solve this, based on the method of stacked species distribution modeling. First, a species distribution model (SDM) is established for each species, describing its probability of occurrence in relation to multiple environmental factors. Next, the predictions of the SDMs are stacked along the gradient of each environmental factor with the remaining environmental factors at fixed levels. By varying those fixed levels, our approach can be used to investigate how field-based SSDs for a given environmental factor change in relation to changing confounding influences, including for example optimal, typical, or extreme environmental conditions. This provides an asset in the evaluation of potential management measures to reach good ecological status.
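
    The two steps, one SDM per species and stacking of the predicted occurrence probabilities along a single gradient with the other factors held fixed, can be sketched as follows (hypothetical Python on synthetic data using scikit-learn; the paper's models and covariates differ):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n_sites, n_species = 500, 20
      env = rng.normal(size=(n_sites, 3))        # e.g. toxicant, temperature, pH

      # Synthetic presence/absence: each species responds to all three factors.
      slopes = rng.normal(size=(n_species, 3))
      presence = rng.random((n_sites, n_species)) < 1 / (1 + np.exp(-env @ slopes.T))

      # One SDM (occurrence model) per species.
      sdms = [LogisticRegression(max_iter=1000).fit(env, presence[:, i])
              for i in range(n_species)]

      # Stack along factor 0, holding factors 1 and 2 at typical (median) levels.
      gradient = np.linspace(-3, 3, 50)
      grid = np.column_stack([gradient,
                              np.full(50, np.median(env[:, 1])),
                              np.full(50, np.median(env[:, 2]))])
      expected_richness = sum(m.predict_proba(grid)[:, 1] for m in sdms)
      print(expected_richness.round(1))          # the f-SSD: richness vs. factor 0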

  12. On-site detection of stacked genetically modified soybean based on event-specific TM-LAMP and a DNAzyme-lateral flow biosensor.

    PubMed

    Cheng, Nan; Shang, Ying; Xu, Yuancong; Zhang, Li; Luo, Yunbo; Huang, Kunlun; Xu, Wentao

    2017-05-15

    Stacked genetically modified organisms (GMO) are becoming popular for their enhanced production efficiency and improved functional properties, and on-site detection of stacked GMO is an urgent challenge to be solved. In this study, we developed a cascade system combining event-specific tag-labeled multiplex LAMP with a DNAzyme-lateral flow biosensor for reliable detection of stacked events (DP305423 × GTS 40-3-2). Three primer sets, both event-specific and soybean species-specific, were newly designed for the tag-labeled multiplex LAMP system. A trident-like lateral flow biosensor displayed amplified products simultaneously without cross contamination, and DNAzyme enhancement improved the sensitivity effectively. After optimization, the limit of detection was approximately 0.1% (w/w) for stacked GM soybean, which is sensitive enough to detect genetically modified content at the threshold values established by several countries for regulatory compliance. The entire detection process could be shortened to 120 min without any large-scale instrumentation. This method may be useful for the in-field detection of DP305423 × GTS 40-3-2 soybean on a single kernel basis and on-site screening tests of stacked GM soybean lines and individual parent GM soybean lines in highly processed foods. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Comparative analysis for various redox flow batteries chemistries using a cost performance model

    NASA Astrophysics Data System (ADS)

    Crawford, Alasdair; Viswanathan, Vilayanur; Stephenson, David; Wang, Wei; Thomsen, Edwin; Reed, David; Li, Bin; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2015-10-01

    The total energy storage system cost is determined by means of a robust performance-based cost model for multiple flow battery chemistries. Systems aspects such as shunt current losses, pumping losses and various flow patterns through electrodes are accounted for. The system cost minimizing objective function determines stack design by optimizing the state of charge operating range, along with current density and current-normalized flow. The model cost estimates are validated using 2-kW stack performance data for the same size electrodes and operating conditions. Using our validated tool, it has been demonstrated that an optimized all-vanadium system has an estimated system cost of < $350 kWh-1 for a 4-h application. With an anticipated decrease in component costs facilitated by economies of scale from larger production volumes, coupled with performance improvements enabled by technology development, the system cost is expected to decrease to $160 kWh-1 for a 4-h application, and to $100 kWh-1 for a 10-h application. This tool has been shared with the redox flow battery community to enable cost estimation using their stack data and guide future direction.

  14. Experimental study of a fuel cell power train for road transport application

    NASA Astrophysics Data System (ADS)

    Corbo, P.; Corcione, F. E.; Migliardini, F.; Veneri, O.

    The development of fuel cell electric vehicles requires the on-board integration of fuel cell systems and electric energy storage devices, with an appropriate energy management system. The optimization of performance and efficiency needs an experimental analysis of the power train, which has to be effected in both stationary and transient conditions (including standard driving cycles). In this paper experimental results concerning the performance of a fuel cell power train are reported and discussed. In particular characterization results for a small sized fuel cell system (FCS), based on a 2.5 kW PEM stack, alone and coupled to an electric propulsion chain of 3.7 kW are presented and discussed. The control unit of the FCS allowed the main stack operative parameters (stoichiometric ratio, hydrogen and air pressure, temperature) to be varied and regulated in order to obtain optimized polarization and efficiency curves. Experimental runs effected on the power train during standard driving cycles have allowed the performance and efficiency of the individual components (fuel cell stack and auxiliaries, dc-dc converter, traction batteries, electric engine) to be evaluated, evidencing the role of output current and voltage of the dc-dc converter in directing the energy flows within the propulsion system.

  15. Heteroaromatic π-Stacking Energy Landscapes

    PubMed Central

    2014-01-01

    In this study we investigate π-stacking interactions of a variety of aromatic heterocycles with benzene using dispersion corrected density functional theory. We calculate extensive potential energy surfaces for parallel-displaced interaction geometries. We find that dispersion contributes significantly to the interaction energy and is complemented by a varying degree of electrostatic interactions. We identify geometric preferences and minimum interaction energies for a set of 13 5- and 6-membered aromatic heterocycles frequently encountered in small drug-like molecules. We demonstrate that the electrostatic properties of these systems are a key determinant for their orientational preferences. The results of this study can be applied in lead optimization for the improvement of stacking interactions, as it provides detailed energy landscapes for a wide range of coplanar heteroaromatic geometries. These energy landscapes can serve as a guide for ring replacement in structure-based drug design. PMID:24773380

  16. Optimizing CyberShake Seismic Hazard Workflows for Large HPC Resources

    NASA Astrophysics Data System (ADS)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2014-12-01

    The CyberShake computational platform is a well-integrated collection of scientific software and middleware that calculates 3D simulation-based probabilistic seismic hazard curves and hazard maps for the Los Angeles region. Currently each CyberShake model comprises about 235 million synthetic seismograms from about 415,000 rupture variations computed at 286 sites. CyberShake integrates large-scale parallel and high-throughput serial seismological research codes into a processing framework in which early stages produce files used as inputs by later stages. Scientific workflow tools are used to manage the jobs, data, and metadata. The Southern California Earthquake Center (SCEC) developed the CyberShake platform using USC High Performance Computing and Communications systems and open-science NSF resources. CyberShake calculations were migrated to the NSF Track 1 system NCSA Blue Waters when it became operational in 2013, via an interdisciplinary team approach including domain scientists, computer scientists, and middleware developers. Due to the excellent performance of Blue Waters and CyberShake software optimizations, we reduced the makespan (a measure of wallclock time-to-solution) of a CyberShake study from 1467 to 342 hours. We will describe the technical enhancements behind this improvement, including judicious introduction of new GPU software, improved scientific software components, increased workflow-based automation, and Blue Waters-specific workflow optimizations. Our CyberShake performance improvements highlight the benefits of scientific workflow tools. The CyberShake workflow software stack includes the Pegasus Workflow Management System (Pegasus-WMS, which includes Condor DAGMan), HTCondor, and Globus GRAM, with Pegasus-mpi-cluster managing the high-throughput tasks on the HPC resources. The workflow tools handle data management, automatically transferring about 13 TB back to SCEC storage. We will present performance metrics from the most recent CyberShake study, executed on Blue Waters. We will compare the performance of CPU and GPU versions of our large-scale parallel wave propagation code, AWP-ODC-SGT. Finally, we will discuss how these enhancements have enabled SCEC to move forward with plans to increase the CyberShake simulation frequency to 1.0 Hz.

  17. Reducing the overlay metrology sensitivity to perturbations of the measurement stack

    NASA Astrophysics Data System (ADS)

    Zhou, Yue; Park, DeNeil; Gutjahr, Karsten; Gottipati, Abhishek; Vuong, Tam; Bae, Sung Yong; Stokes, Nicholas; Jiang, Aiqin; Hsu, Po Ya; O'Mahony, Mark; Donini, Andrea; Visser, Bart; de Ruiter, Chris; Grzela, Grzegorz; van der Laan, Hans; Jak, Martin; Izikson, Pavel; Morgan, Stephen

    2017-03-01

    Overlay metrology setup today faces a continuously changing landscape of process steps. During Diffraction Based Overlay (DBO) metrology setup, many different metrology target designs are evaluated in order to cover the full process window. The standard method for overlay metrology setup consists of single-wafer optimization in which the performance of all available metrology targets is evaluated. Without the availability of external reference data or multiwafer measurements it is hard to predict the metrology accuracy and robustness against process variations which naturally occur from wafer-to-wafer and lot-to-lot. In this paper, the capabilities of the Holistic Metrology Qualification (HMQ) setup flow are outlined, in particular with respect to overlay metrology accuracy and process robustness. The significance of robustness and its impact on overlay measurements is discussed using multiple examples. Measurement differences caused by slight stack variations across the target area, called grating imbalance, are shown to cause significant errors in the overlay calculation in case the recipe and target have not been selected properly. To this end, an overlay sensitivity check on perturbations of the measurement stack is presented as an improvement to the overlay metrology setup flow. An extensive analysis of Key Performance Indicators (KPIs) from HMQ recipe optimization is performed on µDBO measurements of product wafers. The key parameters describing the sensitivity to perturbations of the measurement stack are based on an intra-target analysis. Using advanced image analysis, which is only possible for image-plane detection in µDBO rather than pupil-plane detection in DBO, the process robustness performance of a recipe can be determined. Intra-target analysis can be applied to a wide range of applications, independent of layers and devices.

  18. Artificial Intelligence Techniques for the Berth Allocation and Container Stacking Problems in Container Terminals

    NASA Astrophysics Data System (ADS)

    Salido, Miguel A.; Rodriguez-Molins, Mario; Barber, Federico

    The Container Stacking Problem and the Berth Allocation Problem are two important problems in maritime container terminal's management which are clearly related. Terminal operators normally demand all containers to be loaded into an incoming vessel should be ready and easily accessible in the terminal before vessel's arrival. Similarly, customers (i.e., vessel owners) expect prompt berthing of their vessels upon arrival. In this paper, we present an artificial intelligence based-integrated system to relate these problems. Firstly, we develop a metaheuristic algorithm for berth allocation which generates an optimized order of vessel to be served according to existing berth constraints. Secondly, we develop a domain-oriented heuristic planner for calculating the number of reshuffles needed to allocate containers in the appropriate place for a given berth ordering of vessels. By combining these optimized solutions, terminal operators can be assisted to decide the most appropriated solution in each particular case.
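
    The planner's cost measure, the number of reshuffles a given berth ordering induces, can be approximated by counting blocking containers: any container stacked above one that must leave earlier forces a move. A toy version (hypothetical Python; the paper's domain-oriented heuristic planner is more elaborate):

      def count_reshuffles(bay):
          """bay: list of stacks, each bottom-to-top; entries are retrieval
          priorities (lower = needed earlier). Counts pairs in which a
          later-needed container sits above an earlier-needed one, i.e.
          forced moves under a simple pairwise-blocking estimate."""
          moves = 0
          for stack in bay:
              for depth, prio in enumerate(stack):
                  moves += sum(1 for above in stack[depth + 1:] if above > prio)
          return moves

      bay = [[2, 5, 1],    # 5 blocks 2      -> 1 reshuffle
             [3, 4],       # 4 blocks 3      -> 1 reshuffle
             [6]]          # unobstructed    -> 0
      print(count_reshuffles(bay))   # 2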

  19. CodonLogo: a sequence logo-based viewer for codon patterns.

    PubMed

    Sharma, Virag; Murphy, David P; Provan, Gregory; Baranov, Pavel V

    2012-07-15

    Conserved patterns across a multiple sequence alignment can be visualized by generating sequence logos. Sequence logos show each column in the alignment as stacks of symbol(s) where the height of a stack is proportional to its informational content, whereas the height of each symbol within the stack is proportional to its frequency in the column. Sequence logos use symbols of either nucleotide or amino acid alphabets. However, certain regulatory signals in messenger RNA (mRNA) act as combinations of codons. Yet no tool is available for visualization of conserved codon patterns. We present the first application which allows visualization of conserved regions in a multiple sequence alignment in the context of codons. CodonLogo is based on WebLogo3 and uses the same heuristics but treats codons as inseparable units of a 64-letter alphabet. CodonLogo can discriminate patterns of codon conservation from patterns of nucleotide conservation that appear indistinguishable in standard sequence logos. The CodonLogo source code and its implementation (in a local version of the Galaxy Browser) are available at http://recode.ucc.ie/CodonLogo and through the Galaxy Tool Shed at http://toolshed.g2.bx.psu.edu/.
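
    The stack height in such a logo is the column's information content over the 64-codon alphabet, with symbol heights proportional to codon frequency. A sketch of that computation (hypothetical Python; WebLogo3's small-sample correction, which CodonLogo inherits, is omitted):

      import math
      from collections import Counter

      def codon_column_info(codons):
          # Information content of one alignment column of codons:
          # max entropy of a 64-symbol alphabet (6 bits) minus observed entropy.
          counts = Counter(codons)
          n = sum(counts.values())
          freqs = {c: k / n for c, k in counts.items()}
          entropy = -sum(p * math.log2(p) for p in freqs.values())
          info = math.log2(64) - entropy
          # Symbol heights within the stack are proportional to frequency.
          return info, {c: info * p for c, p in sorted(freqs.items())}

      column = ["ATG", "ATG", "ATG", "CTG"]   # one column of a codon alignment
      info, heights = codon_column_info(column)
      print(round(info, 2), heights)          # ~5.19 bits total stack height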

  20. A Novel Method of Building Functional Brain Network Using Deep Learning Algorithm with Application in Proficiency Detection.

    PubMed

    Hua, Chengcheng; Wang, Hong; Wang, Hong; Lu, Shaowen; Liu, Chong; Khalid, Syed Madiha

    2018-04-11

    Functional brain network (FBN) analysis has become very popular for analyzing the interaction between cortical regions in the last decade, but researchers often spend a long time searching for the best way to compute an FBN for their specific studies. The purpose of this study is to detect the proficiency of operators during mineral grinding process control based on FBNs. To save this search time, a novel semi-data-driven method of computing functional brain connections based on a stacked autoencoder (BCSAE) is proposed in this paper. This method uses a stacked autoencoder (SAE) to encode the multi-channel EEG data into codes and then computes the dissimilarity between the codes from every pair of electrodes to build the FBN. The highlight of this method is that the SAE has a multi-layered structure and is semi-supervised, which means it can dig deeper information and generate better features. An experiment was then performed: the EEG signals of the operators were collected while they were operating and were analyzed to detect their proficiency. The results show that the BCSAE method generated more separable features with less redundancy, and the average accuracy of classification (96.18%) is higher than that of the control methods: PLV (92.19%) and PLI (78.39%).

  1. Evaluation of a CFD Method for Aerodynamic Database Development using the Hyper-X Stack Configuration

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh; Engelund, Walter; Armand, Sasan; Bittner, Robert

    2004-01-01

    A computational fluid dynamics (CFD) study is performed on the Hyper-X (X-43A) Launch Vehicle stack configuration in support of the aerodynamic database generation in the transonic to hypersonic flow regime. The main aim of the study is the evaluation of a CFD method that can be used to support aerodynamic database development for similar future configurations. The CFD method uses the NASA Langley Research Center developed TetrUSS software, which is based on tetrahedral, unstructured grids. The Navier-Stokes computational method is first evaluated against a set of wind tunnel test data to gain confidence in the code's application to hypersonic Mach number flows. The evaluation includes comparison of the longitudinal stability derivatives on the complete stack configuration (which includes the X-43A/Hyper-X Research Vehicle, the launch vehicle and an adapter connecting the two), detailed surface pressure distributions at selected locations on the stack body and component (rudder, elevons) forces and moments. The CFD method is further used to predict the stack aerodynamic performance at flow conditions where no experimental data is available as well as for component loads for mechanical design and aero-elastic analyses. An excellent match between the computed and the test data over a range of flow conditions provides a computational tool that may be used for future similar hypersonic configurations with confidence.

  2. Extending software repository hosting to code review and testing

    NASA Astrophysics Data System (ADS)

    Gonzalez Alvarez, A.; Aparicio Cotarelo, B.; Lossent, A.; Andersen, T.; Trzcinska, A.; Asbury, D.; Høimyr, N.; Meinhard, H.

    2015-12-01

    We will describe how CERN's services around Issue Tracking and Version Control have evolved, and what the plans for the future are. We will describe the services' main design, integration and structure, giving special attention to the new requirements from the community of users in terms of collaboration and integration tools, and how we address this challenge when defining new services based on GitLab for collaboration and code review, to replace our current Gitolite service, and on Jenkins for Continuous Integration. These new services complement the existing ones to create a new global "development tool stack" where each working group can place its particular development work-flow.

  3. Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu

    Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together as one packet and forwarded via single retransmission, resulting in a significant reduction of bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key part of the available network coding-based reliable multicast schemes, is actually a complex NP-complete problem. To address this limitation, we then propose an efficient heuristic algorithm for finding an approximately optimal solution of this optimization problem. Furthermore, we show that the packet coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we then further propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and can achieve almost the same transmission efficiency as the current coding-based high-complexity schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
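
    The decodability constraint behind these heuristics is that each receiver may be missing at most one of the packets XOR-ed into a coded retransmission. A greedy sketch of batching lost packets under that constraint (hypothetical Python; not the paper's specific algorithm):

      def greedy_xor_batches(loss_map):
          """loss_map: {packet_id: set of receivers that lost it}.
          Returns batches of packet ids; packets in one batch are XOR-ed and
          sent in a single retransmission. Each receiver misses at most one
          packet per batch, so it can decode by XOR-ing out what it holds."""
          remaining = dict(loss_map)
          batches = []
          while remaining:
              batch, covered = [], set()
              # Largest loss sets first: cover as many receivers as possible.
              for pkt, losers in sorted(remaining.items(), key=lambda kv: -len(kv[1])):
                  if losers.isdisjoint(covered):   # keeps the batch decodable
                      batch.append(pkt)
                      covered |= losers
              for pkt in batch:
                  del remaining[pkt]
              batches.append(batch)
          return batches

      losses = {"p1": {"A", "B"}, "p2": {"C"}, "p3": {"A"}, "p4": {"D", "C"}}
      print(greedy_xor_batches(losses))   # [['p1', 'p4'], ['p2', 'p3']]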

  4. Can-Evo-Ens: Classifier stacking based evolutionary ensemble system for prediction of human breast cancer using amino acid sequences.

    PubMed

    Ali, Safdar; Majid, Abdul

    2015-04-01

    The diagnosis of human breast cancer is an intricate process and specific indicators may produce negative results. In order to avoid misleading results, an accurate and reliable diagnostic system for breast cancer is indispensable. Recently, several interesting machine-learning (ML) approaches have been proposed for the prediction of breast cancer. To this end, we developed a novel classifier stacking based evolutionary ensemble system, "Can-Evo-Ens", for predicting amino acid sequences associated with breast cancer. In this paper, we first selected four diverse types of ML algorithms, Naïve Bayes, K-Nearest Neighbor, Support Vector Machines, and Random Forest, as base-level classifiers. These classifiers are trained individually in different feature spaces using physicochemical properties of amino acids. In order to exploit the decision spaces, the preliminary predictions of the base-level classifiers are stacked. Genetic programming (GP) is then employed to develop a meta-classifier that optimally combines the predictions of the base classifiers. The most suitable threshold value of the best-evolved predictor is computed using the Particle Swarm Optimization technique. Our experiments have demonstrated the robustness of the Can-Evo-Ens system for an independent validation dataset. The proposed system has achieved the highest Area Under the Curve (AUC) of the ROC curve, 99.95%, for cancer prediction. The comparative results revealed that the proposed approach is better than individual ML approaches and conventional ensemble approaches of AdaBoostM1, Bagging, GentleBoost, and Random Subspace. It is expected that the proposed novel system will have a major impact on the fields of Biomedical, Genomics, Proteomics, Bioinformatics, and Drug Development. Copyright © 2015 Elsevier Inc. All rights reserved.
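
    The stacking structure itself, base-level classifiers whose out-of-fold predictions train a meta-classifier, can be sketched as follows (hypothetical Python using scikit-learn, with logistic regression standing in for the paper's GP-evolved meta-classifier and its PSO-tuned threshold):

      import numpy as np
      from sklearn.datasets import load_breast_cancer
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_predict, train_test_split
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC

      X, y = load_breast_cancer(return_X_y=True)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      bases = [GaussianNB(), KNeighborsClassifier(),
               SVC(probability=True), RandomForestClassifier(random_state=0)]

      # Out-of-fold base predictions form the meta-level training features,
      # avoiding leakage from the base classifiers' own training fit.
      meta_tr = np.column_stack([
          cross_val_predict(b, X_tr, y_tr, cv=5, method="predict_proba")[:, 1]
          for b in bases])
      meta = LogisticRegression().fit(meta_tr, y_tr)

      for b in bases:
          b.fit(X_tr, y_tr)
      meta_te = np.column_stack([b.predict_proba(X_te)[:, 1] for b in bases])
      print(f"stacked accuracy: {meta.score(meta_te, y_te):.3f}")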

  5. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly-developed optimization technique based on the coupling of Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
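
    The coupling of a global sampler with gradient-based local refinement can be sketched in miniature: scatter starting points widely, polish each with Levenberg-Marquardt, and keep the best local optimum (hypothetical Python using SciPy; random multistart stands in here for the actual Particle Swarm stage):

      import numpy as np
      from scipy.optimize import least_squares

      def residuals(p, x, obs):
          # Toy calibration target: two-parameter exponential decay.
          return p[0] * np.exp(-p[1] * x) - obs

      rng = np.random.default_rng(0)
      x = np.linspace(0, 5, 40)
      obs = 3.0 * np.exp(-1.5 * x) + rng.normal(0, 0.01, x.size)

      best = None
      for _ in range(20):                           # global stage: scattered starts
          p0 = rng.uniform([0, 0], [10, 5])
          fit = least_squares(residuals, p0, args=(x, obs), method="lm")
          if best is None or fit.cost < best.cost:  # keep the best local optimum
              best = fit
      print(best.x.round(3))                        # ~[3.0, 1.5]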

  6. Capturing RNA Folding Free Energy with Coarse-Grained Molecular Dynamics Simulations

    PubMed Central

    Bell, David R.; Cheng, Sara Y.; Salazar, Heber; Ren, Pengyu

    2017-01-01

    We introduce a coarse-grained RNA model for molecular dynamics simulations, RACER (RnA CoarsE-gRained). RACER achieves accurate native structure prediction for a number of RNAs (average RMSD of 2.93 Å) and the sequence-specific variation of free energy is in excellent agreement with experimentally measured stabilities (R2 = 0.93). Using RACER, we identified hydrogen-bonding (or base pairing), base stacking, and electrostatic interactions as essential driving forces for RNA folding. Also, we found that separating pairing vs. stacking interactions allowed RACER to distinguish folded vs. unfolded states. In RACER, base pairing and stacking interactions each provide an approximate stability of 3–4 kcal/mol for an A-form helix. RACER was developed based on PDB structural statistics and experimental thermodynamic data. In contrast with previous work, RACER implements a novel effective vdW potential energy function, which led us to re-parameterize hydrogen bond and electrostatic potential energy functions. Further, RACER is validated and optimized using a simulated annealing protocol to generate potential energy vs. RMSD landscapes. Finally, RACER is tested using extensive equilibrium pulling simulations (0.86 ms total) on eleven RNA sequences (hairpins and duplexes). PMID:28393861

  7. Three-Dimensional Electron Optics Model Developed for Traveling-Wave Tubes

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2000-01-01

    A three-dimensional traveling-wave tube (TWT) electron beam optics model including periodic permanent magnet (PPM) focusing has been developed at the NASA Glenn Research Center at Lewis Field. This accurate model allows a TWT designer to develop a focusing structure while reducing the expensive and time-consuming task of building the TWT and hot-testing it (with the electron beam). In addition, the model allows, for the first time, an investigation of the effect on TWT operation of the important azimuthally asymmetric features of the focusing stack. The TWT is a vacuum device that amplifies signals by transferring energy from an electron beam to a radiofrequency (RF) signal. A critically important component is the focusing structure, which keeps the electron beam from diverging and intercepting the RF slow-wave circuit. Such an interception can result in excessive circuit heating and decreased efficiency, whereas excessive growth in the beam diameter can lead to backward-wave oscillations and premature saturation, indicating a serious reduction in tube performance. The most commonly used focusing structure is the PPM stack, which consists of a sequence of cylindrical iron pole pieces and opposite-polarity magnets. Typically, two-dimensional electron optics codes are used in the design of magnetic focusing devices. In general, these codes track the beam from the gun downstream by solving the equations of motion for the electron beam in static electric and magnetic fields in an azimuthally symmetric structure. Because these two-dimensional codes cannot adequately simulate a number of important effects, the simulation code MAFIA (solution of Maxwell's equations by the Finite Integration Algorithm) was used at Glenn to develop a three-dimensional electron optics model. First, a PPM stack was modeled in three dimensions. Then, the fields obtained using the magnetostatic solver were loaded into a particle-in-cell solver, where the fully three-dimensional behavior of the beam was simulated in the magnetic focusing field. For the first time, the effects of azimuthally asymmetric designs and critical azimuthally asymmetric characteristics of the focusing stack (such as shunts, C-magnets, or magnet misalignment) on electron beam behavior have been investigated. A cutaway portion of a simulated electron beam focused by a PPM stack is illustrated.

  8. Logical qubit fusion

    NASA Astrophysics Data System (ADS)

    Moussa, Jonathan; Ryan-Anderson, Ciaran

    The canonical modern plan for universal quantum computation is a Clifford+T gate set implemented in a topological error-correcting code. This plan has the basic disparity that logical Clifford gates are natural for codes in two spatial dimensions while logical T gates are natural in three. Recent progress has reduced this disparity by proposing logical T gates in two dimensions with doubled, stacked, or gauge color codes, but these proposals lack an error threshold. An alternative universal gate set is Clifford+F, where a fusion (F) gate converts two logical qubits into a logical qudit. We show that logical F gates can be constructed by identifying compatible pairs of qubit and qudit codes that stabilize the same logical subspace, much like the original Bravyi-Kitaev construction of magic state distillation. The simplest example of high-distance compatible codes results in a proposal that is very similar to the stacked color code with the key improvement of retaining an error threshold. Sandia National Labs is a multi-program laboratory managed and operated by Sandia Corp, a wholly owned subsidiary of Lockheed Martin Corp, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  9. Multidisciplinary Aerospace Systems Optimization: Computational AeroSciences (CAS) Project

    NASA Technical Reports Server (NTRS)

    Kodiyalam, S.; Sobieski, Jaroslaw S. (Technical Monitor)

    2001-01-01

    The report describes a method for optimizing a system whose analysis is so expensive that it is impractical to let the optimization code invoke it directly, because excessive computational cost and elapsed time might result. In such a situation it is imperative to let the user control the number of times the analysis is invoked. The reported method achieves this with two techniques from the design-of-experiments category: a uniform dispersal of trial design points over an n-dimensional hypersphere combined with response-surface fitting, and the technique of kriging. Analyses of all the trial designs, whose number may be set by the user, are performed before the optimization code is activated, and the results are stored as a database. The optimization code is then executed against that database. Two applications, one to an airborne laser system and one to an aircraft optimization, illustrate the method.
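
    A minimal sketch of the database-driven surrogate idea described above: trial designs are dispersed uniformly over an n-dimensional hypersphere, the expensive analysis is run once per trial up front, a quadratic response surface is fitted to the stored results, and the optimizer then queries only the surrogate. The analysis function and all settings are assumptions for illustration (the report's kriging variant is not shown).

        # Sketch: design-of-experiments database + quadratic response surface.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        n, n_trials, radius = 3, 60, 2.0

        def expensive_analysis(x):
            # Stand-in for a costly simulation.
            return np.sum((x - 0.5) ** 2) + 0.1 * np.sin(5 * x).sum()

        # Uniform directions via normalized Gaussians; fill the ball uniformly.
        u = rng.normal(size=(n_trials, n))
        pts = radius * u / np.linalg.norm(u, axis=1, keepdims=True)
        pts *= rng.random((n_trials, 1)) ** (1.0 / n)

        database = np.array([expensive_analysis(x) for x in pts])  # built once

        # Quadratic response surface f ~ c + b.x + x'Ax, fitted by least squares.
        def features(x):
            x = np.atleast_2d(x)
            quad = np.einsum("ni,nj->nij", x, x).reshape(len(x), -1)
            return np.hstack([np.ones((len(x), 1)), x, quad])

        coef, *_ = np.linalg.lstsq(features(pts), database, rcond=None)
        surrogate = lambda x: float(features(x) @ coef)

        res = minimize(surrogate, x0=np.zeros(n))  # never calls the analysis
        print(res.x, expensive_analysis(res.x))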

  10. New technologies for advanced three-dimensional optimum shape design in aeronautics

    NASA Astrophysics Data System (ADS)

    Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno

    1999-05-01

    The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. To obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes into a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are gradient-based, the more complex the flow solver, the more difficult it is to produce the sensitivity code. However, emerging technologies are making such an ambitious project, including a state-of-the-art flow analysis code in an optimization loop, feasible. Among those technologies, there are three important issues that this paper addresses: shape parametrization, automated differentiation, and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation; this approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly large geometries.

  11. Atomistic structures of nano-engineered SiC and radiation-induced amorphization resistance

    DOE PAGES

    Imada, Kenta; Ishimaru, Manabu; Sato, Kazuhisa; ...

    2015-06-18

    In this paper, nano-engineered 3C–SiC thin films, which possess columnar structures with high-density stacking faults and twins, were irradiated with 2 MeV Si ions at cryogenic and room temperatures. From cross-sectional transmission electron microscopy observations in combination with Monte Carlo simulations based on the Stopping and Range of Ions in Matter code, it was found that their amorphization resistance is six times greater than that of bulk crystalline SiC at room temperature. High-angle bright-field images taken by spherical-aberration-corrected scanning transmission electron microscopy revealed that the distortion of atomic configurations is localized near the stacking faults. Finally, the resultant strain field probably contributes to the enhancement of the radiation tolerance of this material.

  12. Sensitive paper-based analytical device for fast colorimetric detection of nitrite with smartphone.

    PubMed

    Zhang, Xiu-Xiu; Song, Yi-Zhen; Fang, Fang; Wu, Zhi-Yong

    2018-04-01

    On-site rapid monitoring of nitrite as an assessment indicator for the environment, food, and physiological systems has drawn extensive attention. Here, electrokinetic stacking (ES) was combined with a colorimetric reaction on a paper-based analytical device (PAD) to achieve detection of colorless nitrite with a smartphone. In this work, nitrite was stacked into a narrow band on the paper fluidic channel by electrokinetic stacking, and Griess reagent was then introduced to visualize the stacked band. Under optimal conditions, the detection sensitivity for nitrite was increased 160-fold within 5 min. A linear response in the range of 0.075 to 1.0 μg mL(-1) (R(2) = 0.99) and a limit of detection (LOD) of 73 ng mL(-1) (0.86 μM) were obtained. The LOD was 10 times lower than that of a previously reported PAD and close to that achieved by a desktop spectrophotometer. The applicability was demonstrated by nitrite detection in saliva and water with good selectivity, even with co-ions added at 100-fold higher concentration. High recovery (91.0-108.7%) and reasonable intra-day and inter-day reproducibility (RSD < 9%) were obtained. This work shows that the sensitivity of colorimetric detection of colorless analytes can be effectively enhanced by integrating ES on a PAD. Graphical abstract: schematic of the experimental setup (left) and corresponding images (right) of the portable device.

  13. [Analysis on Mechanism of Rainout Carried by Wet Stack of Thermal Power Plant].

    PubMed

    Ouyang, Li-hua; Zhuang, Ye; Liu, Ke-wei; Chen, Zhen-yu; Gu, Peng

    2015-06-01

    Rainout from wet stacks has occurred in many thermal power plants equipped with WFGD systems, and research on its causes is important for solving the problem. The objective of this research is to analyze the mechanism of rainout. A field study was performed to collect experimental data in one thermal power plant, including the amount of desulfurization slurry carried by the wet flue gas, the liquid condensate from the wet duct, and the droplets from the wet stack. Source apportionment analysis was carried out based on physical and chemical data of the liquid and solid samples. The results showed that the mist eliminator operated well and met its performance guarantee value, but the total amount of desulfurization slurry in the flue gas and the sulfate concentration in the liquid condensate discharged from the wet duct/stack increased. Liquid condensate accumulating in the wet duct/stack led to liquid re-entrainment. In conclusion, the rainout in this power plant was caused by deficiencies in the wet ductwork and liquid discharge system: droplets produced by re-entrainment were carried out of the stack by the saturated gas. The main undissolved components of the rainout were composite carbonate and aluminosilicate. Although the ash concentration in this WFGD met the regulatory criteria, source apportionment analysis showed that fly ash accounted for 60% of the rainout, the same value as measured for the solid particles in the condensate. It is therefore important to optimize the wet ductwork, wet stack liner, liquid collectors, and drainage. Avoiding the accumulation of condensate from saturated-vapor thermal condensation is an effective way to solve wet stack rainout.

  14. Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.

    PubMed

    Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao

    2018-02-01

    Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high-frame-rate video produced by FRUC either consumes more bitrate or suffers annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, so that interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding; JME is embedded into the coding loop and employs the original motion search strategy of HEVC. Then, frame interpolation is formulated as a rate-distortion optimization problem in which both the coding bitrate consumption and the visual quality are taken into account. Due to the absence of original frames, the distortion model for interpolated frames is established according to the motion vector reliability and the coding quantization error. Experimental results demonstrate that the proposed framework achieves a 21%-42% reduction in BDBR compared with the traditional approach of FRUC cascaded with coding.
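
    The Lagrangian decision at the core of such rate-distortion formulations can be made concrete with a short sketch: each candidate (coding the interpolated frame versus synthesizing it at the decoder) is scored with J = D + lambda*R, and the cheapest option wins. The lambda form and all numbers below are illustrative assumptions, not values from the paper.

        # Sketch: Lagrangian rate-distortion mode decision.
        def rd_cost(distortion, rate_bits, lam):
            # Classic Lagrangian cost J = D + lambda * R.
            return distortion + lam * rate_bits

        # HEVC-style lambda for QP = 32 (assumed form, illustrative constant).
        lam = 0.85 * 2 ** ((32 - 12) / 3.0)

        candidates = {
            "encode_interpolated_frame": {"distortion": 410.0, "rate": 2600},
            "decoder_side_interpolation": {"distortion": 655.0, "rate": 90},
        }
        best = min(candidates,
                   key=lambda k: rd_cost(candidates[k]["distortion"],
                                         candidates[k]["rate"], lam))
        print(best)  # cheapest candidate under the joint cost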

  15. Aeroelastic Tailoring Study of N+2 Low-Boom Supersonic Commercial Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2015-01-01

    Lockheed Martin's N+2 Low-Boom Supersonic Commercial Transport (LSCT) aircraft is optimized in this study through the use of a multidisciplinary design optimization tool developed at the NASA Armstrong Flight Research Center. A total of 111 design variables are used in the first optimization run, with total structural weight as the objective function and design requirements for strength, buckling, and flutter as the constraint functions. The MSC Nastran code is used to obtain the modal, strength, and buckling characteristics. Flutter and trim analyses are based on the ZAERO code, and landing and ground control loads are computed using an in-house code.

  16. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.

  17. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  18. Generalized constitutive equations for piezo-actuated compliant mechanism

    NASA Astrophysics Data System (ADS)

    Cao, Junyi; Ling, Mingxiang; Inman, Daniel J.; Lin, Jin

    2016-09-01

    This paper formulates analytical models to describe the static displacement and force interactions between generic serial-parallel compliant mechanisms and their loads by employing the matrix method. In keeping with the familiar piezoelectric constitutive equations, the generalized constitutive equations of compliant mechanism represent the input-output displacement and force relations in the form of a generalized Hooke’s law and as analytical functions of physical parameters. Also significantly, a new model of output displacement for compliant mechanism interacting with piezo-stacks and elastic loads is deduced based on the generalized constitutive equations. Some original findings differing from the well-known constitutive performance of piezo-stacks are also given. The feasibility of the proposed models is confirmed by finite element analysis and by experiments under various elastic loads. The analytical models can be an insightful tool for predicting and optimizing the performance of a wide class of compliant mechanisms that simultaneously consider the influence of loads and piezo-stacks.

  19. Constructing a Pre-Emptive System Based on a Multidimentional Matrix and Autocompletion to Improve Diagnostic Coding in Acute Care Hospitals.

    PubMed

    Noussa-Yao, Joseph; Heudes, Didier; Escudie, Jean-Baptiste; Degoulet, Patrice

    2016-01-01

    Short-stay MSO (Medicine, Surgery, Obstetrics) hospitalization activities in public and private hospitals providing public services are funded through charges for the services provided (T2A in French). Coding must be well matched to the severity of the patient's condition, to ensure that appropriate funding is provided to the hospital. We propose the use of an autocompletion process and a multidimensional matrix to help physicians improve the expression of information and optimize clinical coding. With this approach, physicians without knowledge of the encoding rules begin from a rough concept, which is gradually refined through semantic proximity, using information on associated codes drawn from optimized knowledge bases of diagnosis codes.
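
    A minimal sketch of the autocompletion step described above: a prefix trie over diagnosis labels returns candidate codes as the physician types a rough concept. The tiny code table is invented for illustration and is not the authors' knowledge base.

        # Sketch: prefix-trie autocompletion over diagnosis labels.
        from collections import defaultdict

        class TrieNode:
            def __init__(self):
                self.children = defaultdict(TrieNode)
                self.codes = []  # (label, code) pairs reachable via this prefix

        class Autocomplete:
            def __init__(self, entries):
                self.root = TrieNode()
                for label, code in entries:
                    node = self.root
                    for ch in label.lower():
                        node = node.children[ch]
                        node.codes.append((label, code))

            def suggest(self, prefix, limit=5):
                node = self.root
                for ch in prefix.lower():
                    if ch not in node.children:
                        return []
                    node = node.children[ch]
                return node.codes[:limit]

        # Illustrative entries only; not the actual knowledge base.
        ac = Autocomplete([
            ("pneumonia, bacterial", "J15.9"),
            ("pneumonia, viral", "J12.9"),
            ("pneumothorax", "J93.9"),
        ])
        print(ac.suggest("pneumo"))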

  20. Optimizing the resource usage in Cloud based environments: the Synergy approach

    NASA Astrophysics Data System (ADS)

    Zangrando, L.; Llorens, V.; Sgaravatto, M.; Verlato, M.

    2017-10-01

    Managing resource allocation in a cloud-based data centre serving multiple virtual organizations is a challenging issue. While batch systems are able to allocate resources to different user groups according to specific shares imposed by the data centre administrator, without a static partitioning of such resources, this is not so straightforward in the most common cloud frameworks, e.g. OpenStack. In the current OpenStack implementation, it is only possible to grant fixed quotas to the different user groups, and these quotas cannot be exceeded by one group even if there are unused resources allocated to other groups. Moreover, in the existing OpenStack implementation, when no resources are available, new requests are simply rejected: it is then up to the client to re-issue the request later. The recently started EU-funded INDIGO-DataCloud project is addressing this issue through "Synergy", a new advanced scheduling service targeted at OpenStack. Synergy adopts a fair-share model for resource provisioning which guarantees that resources are distributed among users following the fair-share policies defined by the administrator, while also taking into account the past usage of such resources. We present the architecture of Synergy, the status of its implementation, some preliminary results, and the foreseen evolution of the service.
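
    The fair-share idea can be sketched in a few lines: each group has a target share, and the next request is served from the group whose (decayed) historical usage falls furthest below its share. Shares, usage numbers, and the priority rule below are illustrative assumptions, not Synergy's actual algorithm.

        # Sketch: fair-share prioritization across user groups.
        shares = {"cms": 0.5, "atlas": 0.3, "belle": 0.2}       # admin policy
        usage = {"cms": 1200.0, "atlas": 400.0, "belle": 150.0}  # decayed core-hours

        total = sum(usage.values())

        def priority(group):
            # Positive when the group is under-served relative to its share.
            return shares[group] - usage[group] / total

        queue = sorted(shares, key=priority, reverse=True)
        print(queue)  # next request served from the most under-served group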

  1. Optimization of algorithm of coding of genetic information of Chlamydia

    NASA Astrophysics Data System (ADS)

    Feodorova, Valentina A.; Ulyanov, Sergey S.; Zaytsev, Sergey S.; Saltykov, Yury V.; Ulianova, Onega V.

    2018-04-01

    A new method of coding genetic information using coherent optical fields is developed. A universal technique for transforming the nucleotide sequence of a bacterial gene into a laser speckle pattern is suggested. Reference speckle patterns are generated for the nucleotide sequences of the omp1 gene of typical wild strains of Chlamydia trachomatis genovars D, E, F, G, J and K, as well as Chlamydia psittaci serovar I. The algorithm for coding gene information into a speckle pattern is optimized, using fully developed speckles with Gaussian statistics as the optimization criterion.

  2. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords, one at a time, using a set of test error patterns constructed from the reliability information of the received symbols. When a candidate codeword is generated, it is tested against an optimality condition. If it satisfies the optimality condition, it is the most likely (ML) codeword and decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region that contains the ML codeword, determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and searching continues until either the ML codeword is found or all the test error patterns are exhausted and decoding terminates. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from optimal maximum-likelihood decoding, with a significant reduction in decoding complexity compared with Viterbi decoding on the full trellis diagram of the code.
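
    A minimal sketch of the reliability-guided candidate-generation loop described above, using a tiny Hamming(7,4) code and Chase-style test patterns that flip the least reliable bits. The optimality test and purged-trellis search are omitted; the sketch simply keeps the best-correlating candidate, so it illustrates the loop structure only.

        # Sketch: reliability-guided candidate generation for soft decoding.
        import itertools
        import numpy as np

        G = np.array([[1,0,0,0,1,1,0],
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])
        codebook = np.array([(np.array(m) @ G) % 2
                             for m in itertools.product([0, 1], repeat=4)])

        rng = np.random.default_rng(2)
        tx = codebook[rng.integers(len(codebook))]
        r = (1 - 2.0 * tx) + rng.normal(0, 0.6, 7)  # BPSK over AWGN (illustrative)

        hard = (r < 0).astype(int)
        least_reliable = np.argsort(np.abs(r))[:3]  # positions to perturb

        best, best_metric = None, -np.inf
        for flips in itertools.product([0, 1], repeat=3):  # 8 test error patterns
            y = hard.copy()
            y[least_reliable] ^= np.array(flips)
            # Algebraic-decoder stand-in: nearest codeword in Hamming distance.
            cand = codebook[np.argmin((codebook != y).sum(axis=1))]
            metric = np.dot(1 - 2.0 * cand, r)  # correlation with received vector
            if metric > best_metric:
                best, best_metric = cand, metric

        print("decoded:", best, "sent:", tx)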

  3. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    PubMed

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  4. Using the Eclipse Parallel Tools Platform to Assist Earth Science Model Development and Optimization on High Performance Computers

    NASA Astrophysics Data System (ADS)

    Alameda, J. C.

    2011-12-01

    Development and optimization of computational science models, particularly on high-performance computers (and, with the advent of ubiquitous multicore processor systems, on practically every system), have long been accomplished with basic software tools: typically command-line compilers, debuggers, and performance tools that have not changed substantially from the days of serial and early vector computers. However, model complexity, including the complexity added by modern message-passing libraries such as MPI, and the need for hybrid code models (such as OpenMP plus MPI) to take full advantage of high-performance computers with an increasing core count per shared-memory node, have made development and optimization of such codes an increasingly arduous task. Additional architectural developments, such as many-core processors, only complicate the situation further. In this paper, we describe how our NSF-funded project, "SI2-SSI: A Productive and Accessible Development Workbench for HPC Applications Using the Eclipse Parallel Tools Platform" (WHPC), seeks to improve the Eclipse Parallel Tools Platform (PTP), an environment designed to support scientific code development targeted at a diverse set of high-performance computing systems. Our WHPC project takes an application-centric view: we use a set of scientific applications, each with a variety of challenges, both to drive improvements to the applications themselves and to understand shortcomings in Eclipse PTP from an application-developer perspective, which in turn drives the list of improvements we seek to make. We are also partnering with performance-tool providers to drive higher-quality performance-tool integration. We have partnered with the Cactus group at Louisiana State University to improve Eclipse's ability to work with computational frameworks and extremely complex build systems, and to develop educational materials for computational science and engineering. Finally, we are partnering with the lead PTP developers at IBM to ensure we are as effective as possible within the Eclipse community development. We are also conducting training and outreach to our user community, including conference BOF sessions, monthly user calls, and an annual user meeting, so that user feedback can best inform the improvements we make to Eclipse PTP. With these activities we endeavor to encourage the use of modern software-engineering practices, as enabled through the Eclipse IDE, in computational science and engineering applications. These practices include proper use of source-code repositories, tracking and rectifying issues, measuring and monitoring code performance against both optimizations and ever-changing software stacks and configurations on HPC systems, and ultimately the development and maintenance of testing suites: things that have become commonplace in many software endeavors but have lagged in the development of science applications. We believe the increased complexity of both HPC systems and science applications demands better software-engineering methods, preferably enabled by modern tools such as Eclipse PTP, to help the computational science community thrive as we evolve the HPC landscape.

  5. TestSTORM: Simulator for optimizing sample labeling and image acquisition in localization based super-resolution microscopy

    PubMed Central

    Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós

    2014-01-01

    Localization-based super-resolution microscopy image quality depends on several factors such as dye choice and labeling strategy, microscope quality and user-defined parameters such as frame rate and number as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813

  6. A thermodynamic approach for selecting operating conditions in the design of reversible solid oxide cell energy systems

    NASA Astrophysics Data System (ADS)

    Wendel, Christopher H.; Kazempoor, Pejman; Braun, Robert J.

    2016-01-01

    Reversible solid oxide cell (ReSOC) systems are being increasingly considered for electrical energy storage, although much work remains before they can be realized, including cell materials development and system design optimization. These systems store electricity by generating a synthetic fuel in electrolysis mode and subsequently recover electricity by electrochemically oxidizing the stored fuel in fuel cell mode. System thermal management is improved by promoting methane synthesis internal to the ReSOC stack. Within this strategy, the cell-stack operating conditions are highly impactful on system performance and optimizing these parameters to suit both operating modes is critical to achieving high roundtrip efficiency. Preliminary analysis shows the thermoneutral voltage to be a useful parameter for analyzing ReSOC systems and the focus of this study is to quantitatively examine how it is affected by ReSOC operating conditions. The results reveal that the thermoneutral voltage is generally reduced by increased pressure, and reductions in temperature, fuel utilization, and hydrogen-to-carbon ratio. Based on the thermodynamic analysis, many different combinations of these operating conditions are expected to promote efficient energy storage. Pressurized systems can achieve high efficiency at higher temperature and fuel utilization, while non-pressurized systems may require lower stack temperature and suffer from reduced energy density.
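
    The thermoneutral voltage referred to above has a simple closed form, V_tn = ΔH/(nF): the cell voltage at which the reaction enthalpy exactly balances the electrical input so the stack runs isothermally. A worked example for steam electrolysis (standard-state values, for illustration only):

        # Sketch: thermoneutral voltage V_tn = dH / (n F) for steam electrolysis.
        F = 96485.0   # Faraday constant, C/mol
        n = 2         # electrons transferred per H2O/H2 conversion
        dH = 241.8e3  # J/mol, enthalpy of steam electrolysis (LHV basis, assumed)

        V_tn = dH / (n * F)
        print(f"thermoneutral voltage ~ {V_tn:.2f} V")  # ~1.25 V for steam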

  7. SU-E-T-512: Electromagnetic Simulations of the Dielectric Wall Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uselmann, A; Mackie, T

    Purpose: To characterize and parametrically study the key components of a dielectric wall accelerator through electromagnetic modeling and particle tracking. Methods: Electromagnetic and particle tracking simulations were performed using a commercial code (CST Microwave Studio, CST Inc.) utilizing the finite integration technique. A dielectric wall accelerator consists of a series of stacked transmission lines sequentially fired in synchrony with an ion pulse. Numerous properties of the stacked transmission lines, including geometric, material, and electronic properties, were analyzed and varied in order to assess their impact on the transverse and axial electric fields. Additionally, stacks of transmission lines were simulated inmore » order to quantify the parasitic effect observed in closely packed lines. Particle tracking simulations using the particle-in-cell method were performed on the various stacks to determine the impact of the above properties on the resultant phase space of the ions. Results: Examination of the simulation results show that novel geometries can shape the accelerating pulse in order to reduce the energy spread and increase the average energy of accelerated ions. Parasitic effects were quantified for various geometries and found to vary with distance from the end of the transmission line and along the beam axis. An optimal arrival time of an ion pulse relative to the triggering of the transmission lines for a given geometry was determined through parametric study. Benchmark simulations of single transmission lines agree well with published experimental results. Conclusion: This work characterized the behavior of the transmission lines used in a dielectric wall accelerator and used this information to improve them in novel ways. Utilizing novel geometries, we were able to improve the accelerating gradient and phase space of the accelerated particle bunch. Through simulation, we were able to discover and optimize design issues with the device at low cost. Funding: Morgridge Institute for Research, Madison WI; Conflict of Interest: Dr. Mackie is an investor and board member at CPAC, a company developing compact accelerator designs similar to those discussed in this work, but designs discussed are not directed by CPAC. Funding: Morgridge Institute for Research, Madison WI; Conflict of Interest: Dr. Mackie is an investor and board member at CPAC, a company developing compact accelerator designs similar to those discussed in this work, but designs discussed are not directed by CPAC.« less

  8. A primitive study of voxel feature generation by multiple stacked denoising autoencoders for detecting cerebral aneurysms on MRA

    NASA Astrophysics Data System (ADS)

    Nemoto, Mitsutaka; Hayashi, Naoto; Hanaoka, Shouhei; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Ohtomo, Kuni

    2016-03-01

    The purpose of this study is to evaluate the feasibility of a novel feature-generation method, based on multiple deep neural networks (DNNs) with boosting, for computer-assisted detection (CADe). It is hard and time-consuming to optimize the hyperparameters of DNNs such as the stacked denoising autoencoder (SdA); the proposed method allows using SdA-based features without the burden of hyperparameter setting. The method was evaluated in an application for detecting cerebral aneurysms on magnetic resonance angiograms (MRA). A baseline CADe process included four components: scaling, candidate area limitation, candidate detection, and candidate classification. The proposed feature-generation method was applied to extract the optimal features for candidate classification and requires setting only the ranges of the SdA hyperparameters. The optimal feature set was selected from a large pool of SdA-based features produced by multiple SdAs, each trained with a different hyperparameter set, and the selection was performed with the AdaBoost ensemble learning method. Training of the baseline CADe process and the proposed feature generation used 200 MRA cases, and the evaluation was performed with 100 MRA cases. The proposed method successfully provided SdA-based features given only ranges for some of the SdA hyperparameters. The CADe process using both the previous voxel features and the SdA-based features performed best, with an area under the ROC curve of 0.838 and an ANODE score of 0.312. The results show that the proposed method is effective in the application for detecting cerebral aneurysms on MRA.
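
    A minimal sketch of the feature-generation idea: train several one-hidden-layer denoising autoencoders with randomly drawn hyperparameters, pool their hidden activations as candidate features, and let a boosted ensemble rank them. The use of scikit-learn's MLPRegressor as a stand-in denoising autoencoder, the synthetic data, and all ranges are assumptions; the paper's SdAs are deeper and the boosting details differ.

        # Sketch: pool features from autoencoders with varied hyperparameters,
        # then rank them with a boosted ensemble.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.ensemble import AdaBoostClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 16))           # stand-in voxel features
        y = (X[:, 0] + X[:, 3] > 0).astype(int)  # stand-in labels

        feature_blocks = []
        for _ in range(5):                       # 5 random hyperparameter draws
            h = int(rng.integers(4, 12))         # hidden units (a range, not a value)
            noise = rng.uniform(0.1, 0.5)        # corruption level
            ae = MLPRegressor(hidden_layer_sizes=(h,), activation="relu",
                              max_iter=500, random_state=0)
            ae.fit(X + rng.normal(0, noise, X.shape), X)  # denoising objective
            hidden = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])
            feature_blocks.append(hidden)

        pool = np.hstack(feature_blocks)         # candidate feature pool
        clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(pool, y)
        top = np.argsort(clf.feature_importances_)[::-1][:10]
        print("selected feature indices:", top)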

  9. Diameter sensors for tree-length harvesting systems

    Treesearch

    T.P. McDonald; Robert B. Rummer; T.E. Grift

    2003-01-01

    Most cut-to-length (CTL) harvesters provide sensors for measuring diameter of trees as they are cut and processed. Among other uses, this capability provides a data collection tool for marketing of logs in real time. Logs can be sorted and stacked based on up-to-date market information, then transportation systems optimized to route wood to proper destinations at...

  10. The role of crossover operator in evolutionary-based approach to the problem of genetic code optimization.

    PubMed

    Błażej, Paweł; Wnȩtrzak, Małgorzata; Mackiewicz, Paweł

    2016-12-01

    One of the theories explaining the present structure of the canonical genetic code assumes that it was optimized to minimize the harmful effects of amino acid replacements resulting from nucleotide substitutions and translational errors. A way to test this concept is to find the optimal code under given criteria and compare it with the canonical genetic code. Unfortunately, the huge number of possible alternatives makes it impossible to find the optimal code by exhaustive methods in a sensible time, so heuristic methods must be applied to search the space of possible solutions. Evolutionary algorithms (EA) are one such promising approach. This class of methods is founded on both mutation and crossover operators, which are responsible for creating and maintaining the diversity of candidate solutions. These operators have dissimilar characteristics and consequently play different roles in the process of finding the best solutions under given criteria; effective searching can therefore be improved by applying both, especially when the operators are devised specifically for a given problem. To study this subject, we analyze the effectiveness of algorithms for various combinations of mutation and crossover probabilities under three models of the genetic code that assume different restrictions on its structure. To achieve that, we adapt the position-based crossover operator for the most restricted model and develop a new type of crossover operator for the more general models. The applied fitness function describes the cost of amino acid replacement with regard to polarity. Our results indicate that the use of crossover operators can significantly improve the quality of the solutions; moreover, simulations with the crossover operator optimize the fitness function in a smaller number of generations than simulations without it. The optimal genetic codes without restrictions on their structure minimize the costs about 2.7 times better than the canonical genetic code. Interestingly, the optimal codes are dominated by amino acids characterized by polarity close to the average value over all amino acids. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
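
    A minimal sketch of a crossover operator for candidate genetic codes under the most restricted model, where a code is a permutation assigning the 20 amino acids plus stop to the 21 codon groups of the canonical code: position-based crossover copies some group assignments from one parent and fills the rest in the order they appear in the other. This is a generic illustration, not the authors' exact operator or fitness model.

        # Sketch: position-based crossover over permutation-coded genetic codes.
        import random

        AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY") + ["*"]  # 20 aa + stop

        def position_based_crossover(parent_a, parent_b, rng):
            n = len(parent_a)
            keep = set(rng.sample(range(n), n // 2))  # positions inherited from A
            child = [parent_a[i] if i in keep else None for i in range(n)]
            # Fill remaining positions in the order they appear in parent B.
            fill = iter([aa for aa in parent_b if aa not in set(child) - {None}])
            return [aa if aa is not None else next(fill) for aa in child]

        rng = random.Random(42)
        a = rng.sample(AMINO_ACIDS, len(AMINO_ACIDS))
        b = rng.sample(AMINO_ACIDS, len(AMINO_ACIDS))
        child = position_based_crossover(a, b, rng)
        assert sorted(child) == sorted(AMINO_ACIDS)  # still a valid permutation
        print(child)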

  11. Utilizing Microelectromechanical Systems (MEMS) Micro-Shutter Designs for Adaptive Coded Aperture Imaging (ACAI) Technologies

    DTIC Science & Technology

    2009-03-01

    [List-of-figures fragment; recoverable captions: applied voltage versus deflection curves for Poly1/Poly2 stacked 300-μm single and double hot-arm actuators (Figures 4-1 and 4-2), and deflection versus power curves for an individual wedge (Figure 4-5).]

  12. Context-sensitive trace inlining for Java.

    PubMed

    Häubl, Christian; Wimmer, Christian; Mössenböck, Hanspeter

    2013-12-01

    Method inlining is one of the most important optimizations in method-based just-in-time (JIT) compilers. It widens the compilation scope and therefore allows optimizing multiple methods as a whole, which increases performance. However, if method inlining is used too frequently, compilation time increases and too much machine code is generated, which has negative effects on performance. Trace-based JIT compilers only compile frequently executed paths, so-called traces, instead of whole methods. This may result in faster compilation, less generated machine code, and better-optimized machine code. In previous work, we implemented a trace-recording infrastructure and a trace-based compiler for Java by modifying the Java HotSpot VM. Based on this work, we evaluate the effect of trace inlining on performance and on the amount of generated machine code. Trace inlining has several major advantages over method inlining. First, trace inlining is more selective, because only frequently executed paths are inlined. Second, the recorded traces may capture information about virtual calls, which simplifies inlining. A third advantage is that trace information is context-sensitive, so that different method parts can be inlined depending on the specific call site. These advantages allow more aggressive inlining while keeping the amount of generated machine code reasonable. We evaluate several inlining heuristics on the benchmark suites DaCapo 9.12 Bach, SPECjbb2005, and SPECjvm2008 and show that our trace-based compiler achieves up to 51% higher peak performance than the method-based Java HotSpot client compiler. Furthermore, we show that the large compilation scope of our trace-based compiler has a positive effect on other compiler optimizations such as constant folding and null-check elimination.

  13. Optimization of laminated stacking sequence for buckling load maximization by genetic algorithm

    NASA Technical Reports Server (NTRS)

    Le Riche, Rodolphe; Haftka, Raphael T.

    1992-01-01

    The use of a genetic algorithm to optimize the stacking sequence of a composite laminate for buckling load maximization is studied. Various genetic parameters including the population size, the probability of mutation, and the probability of crossover are optimized by numerical experiments. A new genetic operator - permutation - is proposed and shown to be effective in reducing the cost of the genetic search. Results are obtained for a graphite-epoxy plate, first when only the buckling load is considered, and then when constraints on ply contiguity and strain failure are added. The influence on the genetic search of the penalty parameter enforcing the contiguity constraint is studied. The advantage of the genetic algorithm in producing several near-optimal designs is discussed.
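
    A minimal sketch of such a permutation operator: it swaps two ply positions, preserving the laminate's ply counts (and hence its in-plane composition) while changing the stacking order that governs bending stiffness and buckling. The ply alphabet and laminate are illustrative.

        # Sketch: permutation operator for stacking-sequence GAs.
        import random

        PLY_ANGLES = [0, 45, -45, 90]

        def permutation_operator(stacking, rng):
            # Swap two randomly chosen ply positions; composition is unchanged.
            i, j = rng.sample(range(len(stacking)), 2)
            child = list(stacking)
            child[i], child[j] = child[j], child[i]
            return child

        rng = random.Random(7)
        laminate = [rng.choice(PLY_ANGLES) for _ in range(16)]  # half-laminate
        print(laminate)
        print(permutation_operator(laminate, rng))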

  14. Source term estimates of radioxenon released from the BaTek medical isotope production facility using external measured air concentrations.

    PubMed

    Eslinger, Paul W; Cameron, Ian M; Dumais, Johannes Robert; Imardjoko, Yudi; Marsoem, Pujadi; McIntyre, Justin I; Miley, Harry S; Stoehlker, Ulrich; Widodo, Susilo; Woods, Vincent T

    2015-10-01

    BATAN Teknologi (BaTek) operates an isotope production facility in Serpong, Indonesia that supplies (99m)Tc for use in medical procedures. Atmospheric releases of (133)Xe in the production process at BaTek are known to influence the measurements taken at the closest stations of the radionuclide network of the International Monitoring System (IMS). The purpose of the IMS is to detect evidence of nuclear explosions, including atmospheric releases of radionuclides. The major xenon isotopes released from BaTek are also produced in a nuclear explosion, but the isotopic ratios are different. Knowledge of the magnitude of releases from the isotope production facility helps inform analysts trying to decide if a specific measurement result could have originated from a nuclear explosion. A stack monitor deployed at BaTek in 2013 measured releases to the atmosphere for several isotopes. The facility operates on a weekly cycle, and the stack data for June 15-21, 2013 show a release of 1.84 × 10(13) Bq of (133)Xe. Concentrations of (133)Xe in the air are available at the same time from a xenon sampler located 14 km from BaTek. An optimization process using atmospheric transport modeling and the sampler air concentrations produced a release estimate of 1.88 × 10(13) Bq. The same optimization process yielded a release estimate of 1.70 × 10(13) Bq for a different week in 2012. The stack release value and the two optimized estimates are all within 10% of each other. Unpublished production data and the release estimate from June 2013 yield a rough annual release estimate of 8 × 10(14) Bq of (133)Xe in 2014. These multiple lines of evidence cross-validate the stack release estimates and the release estimates based on atmospheric samplers. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Source Term Estimates of Radioxenon Released from the BaTek Medical Isotope Production Facility Using External Measured Air Concentrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Cameron, Ian M.; Dumais, Johannes R.

    2015-10-01

    Batan Teknologi (BaTek) operates an isotope production facility in Serpong, Indonesia that supplies 99mTc for use in medical procedures. Atmospheric releases of Xe-133 in the production process at BaTek are known to influence the measurements taken at the closest stations of the International Monitoring System (IMS). The purpose of the IMS is to detect evidence of nuclear explosions, including atmospheric releases of radionuclides. The xenon isotopes released from BaTek are the same as those produced in a nuclear explosion, but the isotopic ratios are different. Knowledge of the magnitude of releases from the isotope production facility helps inform analysts trying to decide whether a specific measurement result came from a nuclear explosion. A stack monitor deployed at BaTek in 2013 measured releases to the atmosphere for several isotopes. The facility operates on a weekly cycle, and the stack data for June 15-21, 2013 show a release of 1.84E13 Bq of Xe-133. Concentrations of Xe-133 in the air are available at the same time from a xenon sampler located 14 km from BaTek. An optimization process using atmospheric transport modeling and the sampler air concentrations produced a release estimate of 1.88E13 Bq. The same optimization process yielded a release estimate of 1.70E13 Bq for a different week in 2012. The stack release value and the two optimized estimates are all within 10 percent of each other. Weekly release estimates of 1.8E13 Bq and a 40 percent facility operation rate yield a rough annual release estimate of 3.7E14 Bq of Xe-133. This value is consistent with previously published estimates of annual releases for this facility, which are based on measurements at three IMS stations. These multiple lines of evidence cross-validate the stack release estimates and the release estimates from atmospheric samplers.

  16. Nuclear fuel management optimization using genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1995-07-01

    The code-independent genetic algorithm reactor optimization (CIGARO) system has been developed to optimize nuclear reactor loading patterns. It uses genetic algorithms (GAs) and a code-independent interface, so any reactor physics code (e.g., CASMO-3/SIMULATE-3) can be used to evaluate the loading patterns. The system is compared to other GA-based loading pattern optimizers. Tests were carried out to maximize the beginning-of-cycle k_eff for a pressurized water reactor core loading, with a penalty function to limit power peaking. The CIGARO system performed well, increasing k_eff after lowering the peak power. Tests of a prototype parallel evaluation method showed the potential for a significant speedup.

  17. Optimal Near-Hitless Network Failure Recovery Using Diversity Coding

    ERIC Educational Resources Information Center

    Avci, Serhat Nazim

    2013-01-01

    Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency, but they are slow, require complex signaling, and are unstable. Diversity coding is a proactive coding-based recovery technique which offers near-hitless (sub-ms) restoration with a competitive spare capacity…
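
    The diversity-coding principle behind such near-hitless recovery can be sketched directly: N data streams travel on disjoint links while a protection link carries their XOR, so the receiver reconstructs any single failed stream immediately, without rerouting or signaling. Link count and payloads below are illustrative.

        # Sketch: XOR-based diversity coding for single-link failure recovery.
        import numpy as np

        rng = np.random.default_rng(3)
        streams = rng.integers(0, 256, size=(3, 8), dtype=np.uint8)  # 3 data links
        parity = streams[0] ^ streams[1] ^ streams[2]                # protection link

        failed = 1                                   # suppose link 1 is cut
        recovered = parity.copy()
        for i, s in enumerate(streams):
            if i != failed:
                recovered ^= s                       # XOR out surviving streams

        assert np.array_equal(recovered, streams[failed])
        print("recovered stream:", recovered)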

  18. HBC-Evo: predicting human breast cancer by exploiting amino acid sequence-based feature spaces and evolutionary ensemble system.

    PubMed

    Majid, Abdul; Ali, Safdar

    2015-01-01

    We developed a genetic programming (GP)-based evolutionary ensemble system for the early diagnosis, prognosis, and prediction of human breast cancer. This system effectively exploits diversity in both feature and decision spaces. First, individual learners are trained in different feature spaces derived from the physicochemical properties of protein amino acids. Their predictions are then stacked to develop the best solution during the GP evolution process. Finally, results for the HBC-Evo system are obtained with an optimal threshold, computed using particle swarm optimization. Our novel approach has demonstrated promising results compared to state-of-the-art approaches.

  19. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, at a 50% compression rate: optimal wavelet, mean ± SD, 5.46 ± 1.01%; worst wavelet, 12.76 ± 2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
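
    A minimal sketch of per-signal wavelet selection for compression, assuming PyWavelets: for each candidate mother wavelet, keep the largest coefficients at a fixed compression rate and measure the reconstruction distortion, then pick the wavelet with the lowest distortion. The paper instead optimizes a continuously parameterized wavelet and codes coefficients with the embedded zerotree algorithm; this sketch illustrates per-signal selection only.

        # Sketch: choose the mother wavelet that minimizes distortion at a
        # fixed compression rate.
        import numpy as np
        import pywt

        rng = np.random.default_rng(4)
        t = np.linspace(0, 1, 1024)
        signal = np.sin(40 * t**2) + 0.05 * rng.normal(size=t.size)  # test signal

        def distortion_at_rate(sig, wavelet, keep_fraction=0.5):
            coeffs = pywt.wavedec(sig, wavelet, level=5)
            flat, slices = pywt.coeffs_to_array(coeffs)
            thresh = np.quantile(np.abs(flat), 1 - keep_fraction)
            flat_c = np.where(np.abs(flat) >= thresh, flat, 0.0)  # drop small coeffs
            rec = pywt.waverec(
                pywt.array_to_coeffs(flat_c, slices, output_format="wavedec"),
                wavelet)[: sig.size]
            return np.linalg.norm(sig - rec) / np.linalg.norm(sig) * 100  # PRD %

        for w in ["db2", "db4", "sym5", "coif3"]:
            print(w, f"{distortion_at_rate(signal, w):.2f}% distortion")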

  20. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  1. A Homogenization Approach for Design and Simulation of Blast Resistant Composites

    NASA Astrophysics Data System (ADS)

    Sheyka, Michael

    Structural composites have been used in aerospace and structural engineering due to their high strength-to-weight ratio, and composite laminates have been used successfully and extensively in blast mitigation. This dissertation examines the use of the homogenization approach to design and simulate blast-resistant composites. Three case studies are performed to examine the usefulness of different methods that may be used in designing and optimizing composite plates for blast resistance. The first case study utilizes a single-degree-of-freedom system to simulate the blast and a reliability-based approach; it examines homogeneous plates, and the optimal stacking sequence and plate thicknesses are determined. The second and third case studies use the homogenization method to calculate the properties of a composite unit cell made of two different materials, with the methods integrated into dynamic simulation environments and advanced optimization algorithms. The second case study is 2-D and uses an implicit blast simulation, while the third is 3-D and simulates blast using the explicit method. Both case studies 2 and 3 rely on multi-objective genetic algorithms for the optimization process, and Pareto optimal solutions are determined. Case study 3 is an integrative method for determining optimal stacking sequence, microstructure, and plate thicknesses. The validity of the different methods, such as homogenization, reliability, explicit blast modeling, and multi-objective genetic algorithms, is discussed. Possible extension of the methods to include strain-rate effects and parallel computation is also examined.

  2. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) a mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity; (2) the mathematical derivations show that the effective bandwidth of the wavefront-coded imaging system is also tunable by moving each component of the detachable phase mask asymmetrically, and an improved Fisher-information-based optimization procedure is designed to ascertain the optimal mask parameters corresponding to a specific bandwidth; (3) possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  3. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatele, Abhinav

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  4. Internal filament modulation in low-dielectric gap design for built-in selector-less resistive switching memory application

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Chen; Lin, Chih-Yang; Huang, Hui-Chun; Kim, Sungjun; Fowler, Burt; Chang, Yao-Feng; Wu, Xiaohan; Xu, Gaobo; Chang, Ting-Chang; Lee, Jack C.

    2018-02-01

    Sneak path current is a severe hindrance for the application of high-density resistive random-access memory (RRAM) array designs. In this work, we demonstrate nonlinear (NL) resistive switching characteristics of a HfO x /SiO x -based stacking structure as a realization for selector-less RRAM devices. The NL characteristic was obtained and designed by optimizing the internal filament location with a low effective dielectric constant in the HfO x /SiO x structure. The stacking HfO x /SiO x -based RRAM device as the one-resistor-only memory cell is applicable without needing an additional selector device to solve the sneak path issue with a switching voltage of ~1 V, which is desirable for low-power operating in built-in nonlinearity crossbar array configurations.

  5. Methanol clusters (CH3OH)n: putative global minimum-energy structures from model potentials and dispersion-corrected density functional theory.

    PubMed

    Kazachenko, Sergey; Bulusu, Satya; Thakkar, Ajit J

    2013-06-14

    Putative global minima are reported for methanol clusters (CH3OH)n with n ≤ 15. The predictions are based on global optimization of three intermolecular potential energy models followed by local optimization and single-point energy calculations using two variants of dispersion-corrected density functional theory. Recurring structural motifs include folded and/or twisted rings, folded rings with a short branch, and stacked rings. Many of the larger structures are stabilized by weak C-H···O bonds.
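
    A compact sketch of the two-stage strategy described above: global search on a cheap model potential, followed by local refinement. Here a Lennard-Jones potential stands in for the methanol model potentials (an assumption for illustration), and scipy's basin-hopping plays the role of the global optimizer; a real study would re-rank the resulting minima with DFT-D single-point energies.

    ```python
    import numpy as np
    from scipy.optimize import basinhopping

    def lj_energy(flat_coords):
        """Total Lennard-Jones energy of a cluster (reduced units)."""
        xyz = flat_coords.reshape(-1, 3)
        d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
        r = d[np.triu_indices(len(xyz), k=1)]     # unique pair distances
        return np.sum(4.0 * (r**-12 - r**-6))

    n = 6                                         # stand-in "molecules"
    rng = np.random.default_rng(0)
    x0 = rng.uniform(-1.5, 1.5, size=3 * n)       # random starting geometry

    # Global hops + local L-BFGS-B minimizations.
    result = basinhopping(lj_energy, x0, niter=200, seed=1,
                          minimizer_kwargs={"method": "L-BFGS-B"})
    print("putative global minimum energy:", result.fun)
    ```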

  6. Local adjacency metric dimension of sun graph and stacked book graph

    NASA Astrophysics Data System (ADS)

    Yulisda Badri, Alifiah; Darmaji

    2018-03-01

    A graph is a mathematical system consisting of a non-empty set of vertices and a set of edges. One of the topics studied in graph theory is the metric dimension. An application of the metric dimension is robot navigation: a robot moves from vertex to vertex in a field, minimizing the errors that occur in translating the instructions (codes) obtained from the vertices at its location. For the robot to distinguish locations, each vertex must provide a different code, and for the robot to move efficiently it must translate the codes of the vertices it passes quickly, so the set of landmark vertices should be as small as possible. However, if the robot moves across a very large field, it may be unable to detect landmarks whose distance is too great.[6] In this case, the robot can determine its position using location vertices based on adjacency. The problem is then to find the minimum cardinality of the required location vertices, and where to put them, so that the robot can determine its location. The solutions to this problem are the adjacency metric dimension and adjacency metric bases. Rodríguez-Velázquez and Fernau combined the adjacency metric dimension with the local metric dimension, yielding the local adjacency metric dimension, in which two vertices of the graph may share the same adjacency representation provided they are not adjacent. To obtain the local adjacency metric dimension of the sun graph and the stacked book graph, a construction method is used that considers the representation of each vertex adjacent to the resolving set.
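
    For small graphs the definition can be checked directly. The sketch below brute-forces the local adjacency metric dimension — the smallest vertex set W giving distinct adjacency representations to every pair of adjacent vertices — using the adjacency distance min(d(u,w), 2). The networkx usage and the pendant-vertex construction of the sun graph are implementation assumptions, not taken from the paper.

    ```python
    from itertools import combinations
    import networkx as nx

    def adj_dist(G, u, w):
        """Adjacency distance: 0 if equal, 1 if adjacent, 2 otherwise."""
        if u == w:
            return 0
        return 1 if G.has_edge(u, w) else 2

    def local_adjacency_metric_dimension(G):
        nodes = list(G.nodes)
        for k in range(1, len(nodes) + 1):        # exponential: small graphs only
            for W in combinations(nodes, k):
                rep = {v: tuple(adj_dist(G, v, w) for w in W) for v in nodes}
                # Only *adjacent* pairs need distinct representations.
                if all(rep[u] != rep[v] for u, v in G.edges):
                    return k, W
        return None

    # Example: a sun-type graph built as a 4-cycle with a pendant vertex per node.
    G = nx.cycle_graph(4)
    G.add_edges_from((i, i + 4) for i in range(4))
    print(local_adjacency_metric_dimension(G))
    ```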

  7. Performance improvement of optical CDMA networks with stochastic artificial bee colony optimization technique

    NASA Astrophysics Data System (ADS)

    Panda, Satyasen

    2018-05-01

    This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on levy flight swarm intelligence, referred to as artificial bee colony levy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal to noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed ABC-LFSW optimization provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design to improve the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and the power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
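
    The distinguishing ingredient of ABC-LFSW over plain ABC is the heavy-tailed Levy-flight step. A common way to draw such steps is Mantegna's algorithm, sketched below; the stability index beta = 1.5 and the 0.01 step scale are conventional choices assumed here for illustration, not values from the paper.

    ```python
    import numpy as np
    from math import gamma, sin, pi

    def levy_step(dim, beta=1.5, rng=None):
        """Heavy-tailed step vector via Mantegna's algorithm."""
        rng = rng or np.random.default_rng()
        sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)))**(1 / beta)
        u = rng.normal(0.0, sigma_u, dim)
        v = rng.normal(0.0, 1.0, dim)
        return u / np.abs(v)**(1 / beta)

    def levy_update(x, best, rng=None):
        """Perturb a food source x relative to the current best solution."""
        return x + 0.01 * levy_step(len(x), rng=rng) * (x - best)

    x = np.array([1.0, 2.0]); best = np.array([0.5, 1.5])
    print(levy_update(x, best))   # occasional long jumps escape local optima
    ```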

  8. Computerized Dental Comparison: A Critical Review of Dental Coding and Ranking Algorithms Used in Victim Identification.

    PubMed

    Adams, Bradley J; Aschheim, Kenneth W

    2016-01-01

    Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study provides a comparison of the most commonly used conventional coding and sorting algorithms in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3, which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
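
    The core of the simplified approach — score each candidate by the percentage of matching per-tooth codes, then sort — can be sketched in a few lines. The code letters and the rule of skipping unknown codes below are illustrative assumptions; the paper's actual seven-code system is not reproduced here.

    ```python
    def match_percentage(postmortem, antemortem):
        """Compare per-tooth code strings; unknown codes ('X') are skipped."""
        comparable = [(p, a) for p, a in zip(postmortem, antemortem)
                      if p != 'X' and a != 'X']
        if not comparable:
            return 0.0
        hits = sum(p == a for p, a in comparable)
        return 100.0 * hits / len(comparable)

    def rank_candidates(pm_record, am_database):
        """Return antemortem records ranked by descending match percentage."""
        scored = [(match_percentage(pm_record, am), name)
                  for name, am in am_database.items()]
        return sorted(scored, reverse=True)

    am_db = {"case_001": "VMCVF", "case_002": "VMCVM", "case_003": "XXCVF"}
    print(rank_candidates("VMCVF", am_db))   # case_001 ranks first
    ```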

  9. The use of Graphic User Interface for development of a user-friendly CRS-Stack software

    NASA Astrophysics Data System (ADS)

    Sule, Rachmat; Prayudhatama, Dythia; Perkasa, Muhammad D.; Hendriyana, Andri; Fatkhan; Sardjito; Adriansyah

    2017-04-01

    The development of a user-friendly Common Reflection Surface (CRS) Stack software built with a Graphical User Interface (GUI) is described in this paper. The original CRS-Stack software developed by the WIT Consortium is compiled in the unix/linux environment and is not user-friendly: the user must write the commands and parameters manually in a script file. Because of this limitation the CRS-Stack method has not become popular, although applying it is a promising way to obtain better seismic sections, with better reflector continuity and S/N ratio. After obtaining successful results, tested on several seismic data sets belonging to oil companies in Indonesia, the idea arose to develop a user-friendly software in our own laboratory. A GUI is a type of user interface that allows people to interact with computer programs in a better way: rather than typing commands and module parameters, users operate the programs much more simply and easily, with the text-based interface transformed into graphical icons and visual indicators, so complicated Seismic Unix shell scripts can be avoided. The Java Swing GUI library is used to develop this CRS-Stack GUI, and every shell script that represents a seismic processing step is invoked from the Java environment. Besides providing an interactive GUI for CRS-Stack processing, the software is designed to help geophysicists manage projects with complex seismic processing procedures. The CRS-Stack GUI software is composed of input directories, operators, and output directories, which together define a seismic data processing workflow. The CRS-Stack processing workflow involves four steps: automatic CMP stack, initial CRS-Stack, optimized CRS-Stack, and CRS-Stack Supergather. These operations are visualized in an informative flowchart with a self-explanatory system that guides the user in entering the parameter values for each operation. The knowledge of the CRS-Stack processing procedure is thus preserved in the software and is easy and efficient to learn. The software will continue to be developed, and any new innovative seismic processing workflow can be added to this GUI software.
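
    The actual tool wraps its shell scripts from a Java Swing GUI; the sketch below shows the same wrapping pattern in Python for brevity: each workflow step is a shell script invoked with parameters collected by the interface, and a non-zero exit status aborts the workflow. The script names and parameter keys are invented placeholders, not the real CRS-Stack scripts.

    ```python
    import subprocess
    from pathlib import Path

    # Hypothetical four-step CRS-Stack workflow, one script per step.
    WORKFLOW = ["cmp_stack.sh", "initial_crs.sh",
                "optimized_crs.sh", "crs_supergather.sh"]

    def run_step(script, workdir, params):
        """Invoke one processing script with key=value parameters."""
        args = [f"{k}={v}" for k, v in params.items()]
        result = subprocess.run(["sh", script, *args], cwd=workdir,
                                capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"{script} failed:\n{result.stderr}")
        return result.stdout

    for step in WORKFLOW:
        print(run_step(step, Path("project"), {"vnmo": 2000, "aper": 500}))
    ```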

  10. Ferroelectric HfZrOx-based MoS2 negative capacitance transistor with ITO capping layers for steep-slope device application

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Jiang, Shu-Ye; Zhang, Min; Zhu, Hao; Chen, Lin; Sun, Qing-Qing; Zhang, David Wei

    2018-03-01

    A negative capacitance field-effect transistor (NCFET) built with hafnium-based oxide is one of the most promising candidates for low power-density devices due to the extremely steep subthreshold swing (SS) and high on-state current induced by incorporating the ferroelectric material in the gate stack. Here, we demonstrated a two-dimensional (2D) back-gate NCFET with the integration of ferroelectric HfZrOx in the gate stack and few-layer MoS2 as the channel. Instead of using the conventional TiN capping metal to form ferroelectricity in HfZrOx, the NCFET was fabricated on a thickness-optimized Al2O3/indium tin oxide (ITO)/HfZrOx/ITO/SiO2/Si stack, in which the two ITO layers sandwiching the HfZrOx film acted as the control back gate and ferroelectric gate, respectively. The thickness of each layer in the stack was engineered for distinguishable optical identification of the exfoliated 2D flakes on the surface. The NCFET exhibited small off-state current and steep switching behavior with minimum SS as low as 47 mV/dec. Such a steep-slope transistor is compatible with the standard CMOS fabrication process and is very attractive for 2D logic and sensor applications and future energy-efficient nanoelectronic devices with scaling power supply.

  11. Control Code for Bearingless Switched-Reluctance Motor

    NASA Technical Reports Server (NTRS)

    Morrison, Carlos R.

    2007-01-01

    A computer program has been devised for controlling a machine that is an integral combination of magnetic bearings and a switched-reluctance motor. The motor contains an eight-pole stator and a hybrid rotor, which has both (1) a circular lamination stack for levitation and (2) a six-pole lamination stack for rotation. The program computes drive and levitation currents for the stator windings with real-time feedback control. During normal operation, two of the four pairs of opposing stator poles (each pair at right angles to the other pair) levitate the rotor. The remaining two pairs of stator poles exert torque on the six-pole rotor lamination stack to produce rotation. This version executes with a control-loop time of 40 μs on a Pentium (or equivalent) processor that operates at a clock speed of 400 MHz. The program can be expanded, by addition of logic blocks, to enable control of position along additional axes. The code enables adjustment of operational parameters (e.g., motor speed and stiffness, and damping parameters of magnetic bearings) through computer keyboard key presses.

  12. Study on the dielectric properties of Al2O3/TiO2 sub-nanometric laminates: effect of the bottom electrode and the total thickness

    NASA Astrophysics Data System (ADS)

    Ben Elbahri, M.; Kahouli, A.; Mercey, B.; Lebedev, O.; Donner, W.; Lüders, U.

    2018-02-01

    Dielectrics based on amorphous sub-nanometric laminates of TiO2 and Al2O3 are subject to elevated dielectric losses and leakage currents, in large part due to the extremely thin individual layer thickness chosen for the creation of the Maxwell-Wagner relaxation and therefore the high apparent dielectric constants. Since the optimization of the laminate itself is strongly limited by this contradiction concerning its internal structure, we show in this study that modifications of the dielectric stack of capacitors based on these sub-nanometric laminates, such as the nature of the electrodes, the introduction of thick insulating layers at the laminate/electrode interfaces, and the modification of the total laminate thickness, can positively influence the dielectric losses and the leakage. The optimization of the dielectric stack leads to the demonstration of a capacitor with an apparent dielectric constant of 90, combined with a low dielectric loss (tan δ) of 7 × 10⁻² and leakage currents smaller than 1 × 10⁻⁶ A cm⁻² at 10 MV m⁻¹.

  13. Tow-Steered Panels With Holes Subjected to Compression or Shear Loads

    NASA Technical Reports Server (NTRS)

    Jegley, Dawn C.; Tatting, Brian F.; Guerdal, Zafer

    2005-01-01

    Tailoring composite laminates by varying the fiber orientations within a layer to address non-uniform stress states and provide structural advantages, such as the alteration of principal load paths, has potential application to future low-cost, light-weight structures for commercial transport aircraft. Evaluating this approach requires determining the effectiveness of stiffness tailoring through the use of curvilinear fiber paths in flat panels, including the reduction of stress concentrations around holes and the increase in load-carrying capability. Panels were designed with an optimization code using a genetic algorithm and fabricated using a tow-steering approach. Manufacturing limitations, such as the radius of curvature of tows that the machine could support, avoidance of fiber wrinkling, and minimization of gaps between fibers, were considered in the design process. Variable-stiffness tow-steered panels constructed with curvilinear fiber paths were fabricated so that the design methodology could be verified through experimentation. Finite element analysis, in which each element's stacking sequence was accurately defined, was used to verify the behavior predicted by the design code. Experiments on variable-stiffness flat panels with central circular holes were conducted with the panels loaded in axial compression or shear. Tape and tow-steered panels are used to demonstrate the buckling, post-buckling and failure behavior of elastically tailored panels. The experimental results presented establish the buckling performance improvements attainable by elastic tailoring of composite laminates.

  14. The Chandra Source Catalog 2.0: Interfaces

    NASA Astrophysics Data System (ADS)

    D'Abrusco, Raffaele; Zografou, Panagoula; Tibbetts, Michael; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Van Stone, David W.

    2018-01-01

    Easy-to-use, powerful public interfaces to access the wealth of information contained in any modern, complex astronomical catalog are fundamental to encourage its usage. In this poster, I present the public interfaces of the second Chandra Source Catalog (CSC2). CSC2 is the most comprehensive catalog of X-ray sources detected by Chandra, thanks to the inclusion of Chandra observations public through the end of 2014 and to methodological advancements. CSC2 provides measured properties for a large number of sources that sample the X-ray sky at fainter levels than the previous versions of the CSC, thanks to the stacking of single overlapping observations within 1′ before source detection. Sources from stacks are then crossmatched, if multiple stacks cover the same area of the sky, to create a list of unique, optimal CSC2 sources. The properties of sources detected in each single stack and each single observation are also measured. The layered structure of the CSC2 catalog is mirrored in the organization of the CSC2 database, consisting of three tables containing all properties for the unique stacked sources (“Master Source”), single stack sources (“Stack Source”) and sources in any single observation (“Observation Source”). These tables contain estimates of the position, flags, extent, significances, fluxes, spectral properties and variability (and associated errors) for all classes of sources. The CSC2 also includes source region and full-field data products for all master sources, stack sources and observation sources: images, photon event lists, light curves and spectra. CSCview, the main interface to the CSC2 source properties and data products, is a GUI tool that allows users to build queries based on the values of all properties contained in CSC2 tables, query the catalog, inspect the returned table of source properties, and browse and download the associated data products. I will also introduce the suite of command-line interfaces to CSC2 that can be used as an alternative to CSCview, and will present the concept for an additional planned cone-search web-based interface. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  15. Accumulative Difference Image Protocol for Particle Tracking in Fluorescence Microscopy Tested in Mouse Lymphonodes

    PubMed Central

    Villa, Carlo E.; Caccia, Michele; Sironi, Laura; D'Alfonso, Laura; Collini, Maddalena; Rivolta, Ilaria; Miserocchi, Giuseppe; Gorletta, Tatiana; Zanoni, Ivan; Granucci, Francesca; Chirico, Giuseppe

    2010-01-01

    Basic research in cell biology and in the medical sciences makes large use of imaging tools based mainly on confocal fluorescence and, more recently, on non-linear excitation microscopy. Essentially, the aim is the recognition of selected targets in the image and their tracking in time. We have developed a particle tracking algorithm optimized for low signal/noise images with a minimum set of requirements on the target size and with no a priori knowledge of the type of motion. The image segmentation, based on a combination of size-sensitive filters, does not rely on edge detection and is tailored for targets acquired at low resolution as in most in-vivo studies. The particle tracking is performed by building, from a stack of Accumulative Difference Images, a single 2D image in which the motion of the whole set of particles is coded in time by a color level. This algorithm, tested here on solid-lipid nanoparticles diffusing within cells and on lymphocytes diffusing in lymphonodes, appears to be particularly useful for cellular and in-vivo microscopy image processing, in which few a priori assumptions can be made on the type, the extent and the variability of particle motions. PMID:20808918
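
    A minimal sketch of the time-coding idea behind the Accumulative Difference Image: difference successive frames, threshold, and record at each pixel the latest frame index at which motion was detected, which can then be rendered as a color level. The threshold value and the synthetic data are assumptions for illustration, not the paper's parameters.

    ```python
    import numpy as np

    def adi_time_coded(stack, thresh):
        """stack: (T, H, W) frames -> (H, W) image of last-motion frame index."""
        t_coded = np.zeros(stack.shape[1:], dtype=int)
        for t in range(1, stack.shape[0]):
            moved = np.abs(stack[t].astype(float) - stack[t - 1]) > thresh
            t_coded[moved] = t          # later motion overwrites earlier levels
        return t_coded

    rng = np.random.default_rng(0)
    frames = rng.normal(100, 2, (5, 64, 64))   # noisy background
    frames[3, 10:14, 20:24] += 50              # a particle "appears" at t=3
    print(np.unique(adi_time_coded(frames, thresh=20)))   # [0 3]
    ```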

  16. Accumulative difference image protocol for particle tracking in fluorescence microscopy tested in mouse lymphonodes.

    PubMed

    Villa, Carlo E; Caccia, Michele; Sironi, Laura; D'Alfonso, Laura; Collini, Maddalena; Rivolta, Ilaria; Miserocchi, Giuseppe; Gorletta, Tatiana; Zanoni, Ivan; Granucci, Francesca; Chirico, Giuseppe

    2010-08-17

    Basic research in cell biology and in the medical sciences makes large use of imaging tools based mainly on confocal fluorescence and, more recently, on non-linear excitation microscopy. Essentially, the aim is the recognition of selected targets in the image and their tracking in time. We have developed a particle tracking algorithm optimized for low signal/noise images with a minimum set of requirements on the target size and with no a priori knowledge of the type of motion. The image segmentation, based on a combination of size-sensitive filters, does not rely on edge detection and is tailored for targets acquired at low resolution as in most in-vivo studies. The particle tracking is performed by building, from a stack of Accumulative Difference Images, a single 2D image in which the motion of the whole set of particles is coded in time by a color level. This algorithm, tested here on solid-lipid nanoparticles diffusing within cells and on lymphocytes diffusing in lymphonodes, appears to be particularly useful for cellular and in-vivo microscopy image processing, in which few a priori assumptions can be made on the type, the extent and the variability of particle motions.

  17. Phenomenology tools on cloud infrastructures using OpenStack

    NASA Astrophysics Data System (ADS)

    Campos, I.; Fernández-del-Castillo, E.; Heinemeyer, S.; Lopez-Garcia, A.; Pahlen, F.; Borges, G.

    2013-04-01

    We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. In this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.

  18. WebLogo: A Sequence Logo Generator

    PubMed Central

    Crooks, Gavin E.; Hon, Gary; Chandonia, John-Marc; Brenner, Steven E.

    2004-01-01

    WebLogo generates sequence logos, graphical representations of the patterns within a multiple sequence alignment. Sequence logos provide a richer and more precise description of sequence similarity than consensus sequences and can rapidly reveal significant features of the alignment otherwise difficult to perceive. Each logo consists of stacks of letters, one stack for each position in the sequence. The overall height of each stack indicates the sequence conservation at that position (measured in bits), whereas the height of symbols within the stack reflects the relative frequency of the corresponding amino or nucleic acid at that position. WebLogo has been enhanced recently with additional features and options, to provide a convenient and highly configurable sequence logo generator. A command line interface and the complete, open WebLogo source code are available for local installation and customization. PMID:15173120
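
    The stack heights can be reproduced from the standard information-theoretic scheme the logo is based on: column height = log2(alphabet size) minus the column's Shannon entropy, with each letter receiving a share proportional to its frequency. The sketch below omits the small-sample correction that WebLogo can apply.

    ```python
    import math
    from collections import Counter

    def column_stack(column, alphabet_size=4):
        """Letter heights (in bits) for one alignment column."""
        counts = Counter(column)
        n = len(column)
        freqs = {s: c / n for s, c in counts.items()}
        entropy = -sum(p * math.log2(p) for p in freqs.values())
        height = math.log2(alphabet_size) - entropy   # conservation in bits
        return {s: p * height for s, p in freqs.items()}

    # One column of a toy DNA alignment: mostly 'A', so 'A' dominates the stack.
    print(column_stack("AAAAAAAT"))
    ```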

  19. Optimizing legacy molecular dynamics software with directive-based offload

    NASA Astrophysics Data System (ADS)

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.

    2015-10-01

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.

  20. Virtual machine provisioning, code management, and data movement design for the Fermilab HEPCloud Facility

    NASA Astrophysics Data System (ADS)

    Timm, S.; Cooper, G.; Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Grassano, D.; Tiradani, A.; Krishnamurthy, R.; Vinayagam, S.; Raicu, I.; Wu, H.; Ren, S.; Noh, S.-Y.

    2017-10-01

    The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontiersquid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEP Cloud Facility.

  1. Virtual Machine Provisioning, Code Management, and Data Movement Design for the Fermilab HEPCloud Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timm, S.; Cooper, G.; Fuess, S.

    The Fermilab HEPCloud Facility Project has as its goal to extend the current Fermilab facility interface to provide transparent access to disparate resources including commercial and community clouds, grid federations, and HPC centers. This facility enables experiments to perform the full spectrum of computing tasks, including data-intensive simulation and reconstruction. We have evaluated the use of the commercial cloud to provide elasticity to respond to peaks of demand without overprovisioning local resources. Full scale data-intensive workflows have been successfully completed on Amazon Web Services for two High Energy Physics Experiments, CMS and NOνA, at the scale of 58000 simultaneous cores. This paper describes the significant improvements that were made to the virtual machine provisioning system, code caching system, and data movement system to accomplish this work. The virtual image provisioning and contextualization service was extended to multiple AWS regions, and to support experiment-specific data configurations. A prototype Decision Engine was written to determine the optimal availability zone and instance type to run on, minimizing cost and job interruptions. We have deployed a scalable on-demand caching service to deliver code and database information to jobs running on the commercial cloud. It uses the frontiersquid server and CERN VM File System (CVMFS) clients on EC2 instances and utilizes various services provided by AWS to build the infrastructure (stack). We discuss the architecture and load testing benchmarks on the squid servers. We also describe various approaches that were evaluated to transport experimental data to and from the cloud, and the optimal solutions that were used for the bulk of the data transport. Finally, we summarize lessons learned from this scale test, and our future plans to expand and improve the Fermilab HEP Cloud Facility.

  2. Locality-preserving logical operators in topological stabilizer codes

    NASA Astrophysics Data System (ADS)

    Webster, Paul; Bartlett, Stephen D.

    2018-01-01

    Locality-preserving logical operators in topological codes are naturally fault tolerant, since they preserve the correctability of local errors. Using a correspondence between such operators and gapped domain walls, we describe a procedure for finding all locality-preserving logical operators admitted by a large and important class of topological stabilizer codes. In particular, we focus on those equivalent to a stack of a finite number of surface codes of any spatial dimension, where our procedure fully specifies the group of locality-preserving logical operators. We also present examples of how our procedure applies to codes with different boundary conditions, including color codes and toric codes, as well as more general codes such as Abelian quantum double models and codes with fermionic excitations in more than two dimensions.

  3. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
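
    In miniature, the mechanism is a guarded dispatch between two code versions. The sketch below imitates it with plain Python functions standing in for the aggressively and conservatively compiled versions: the wrapper runs the aggressive variant and rolls back to the conservative one when a new exception appears. This is an illustrative analogy, not the patented compiler mechanism itself.

    ```python
    def make_safe(aggressive, conservative):
        """Run the aggressive variant; on failure, roll back to the safe one."""
        stats = {"rollbacks": 0}
        def wrapper(*args, **kwargs):
            try:
                return aggressive(*args, **kwargs)
            except Exception:
                stats["rollbacks"] += 1       # rollback: rerun conservative code
                return conservative(*args, **kwargs)
        wrapper.stats = stats
        return wrapper

    # Aggressive variant assumes x != 0 (e.g., a division hoisted past a guard).
    fast = lambda x: 100 // x
    slow = lambda x: 100 // x if x else 0
    f = make_safe(fast, slow)
    print(f(4), f(0), f.stats)                # 25 0 {'rollbacks': 1}
    ```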

  4. Aeroelastic Tailoring Study of N+2 Low Boom Supersonic Commercial Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2015-01-01

    The Lockheed Martin N+2 Low-Boom Supersonic Commercial Transport (LSCT) aircraft was optimized in this study through the use of a multidisciplinary design optimization tool developed at the National Aeronautics and Space Administration Armstrong Flight Research Center. A total of 111 design variables were used in the first optimization run. Total structural weight was the objective function in this optimization run. Design requirements for strength, buckling, and flutter were selected as constraint functions during the first optimization run. The MSC Nastran code was used to obtain the modal, strength, and buckling characteristics. Flutter and trim analyses were based on the ZAERO code, and landing and ground control loads were computed using an in-house code. The weight penalty to satisfy all the design requirements during the first optimization run was 31,367 lb, a 9.4% increase from the baseline configuration. The second optimization run was based on the big-bang big-crunch algorithm, with six composite ply angles for the second and fourth composite layers selected as discrete design variables. The composite ply angle changes could not improve the weight of the N+2 LSCT aircraft configuration; however, this second optimization run created more tolerance in the active and near-active strength constraint values for future weight optimization runs.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of the state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance-computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.
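
    A sketch of what rank-parallel, chunked HDF5 output can look like with h5py and mpi4py (not the USQCD implementation; it requires h5py built against a parallel HDF5 library). Each MPI rank writes its slab of a lattice field into one chunked dataset; the field layout and sizes are assumptions for illustration.

    ```python
    from mpi4py import MPI
    import h5py
    import numpy as np

    comm = MPI.COMM_WORLD
    nt_local, nx = 4, 16                       # assumed local time slices per rank
    field = np.full((nt_local, nx, nx, nx), comm.rank, dtype=np.float64)

    with h5py.File("lattice.h5", "w", driver="mpio", comm=comm) as f:
        dset = f.create_dataset("field",
                                shape=(comm.size * nt_local, nx, nx, nx),
                                dtype="f8",
                                chunks=(nt_local, nx, nx, nx))  # I/O tuning knob
        lo = comm.rank * nt_local
        dset[lo:lo + nt_local] = field         # each rank writes its own slab
    ```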

  6. Toward Wireless Health Monitoring via an Analog Signal Compression-Based Biosensing Platform.

    PubMed

    Zhao, Xueyuan; Sadhu, Vidyasagar; Le, Tuan; Pompili, Dario; Javanmard, Mehdi

    2018-06-01

    Wireless all-analog biosensor design for concurrent microfluidic and physiological signal monitoring is presented in this paper. The key component is an all-analog circuit capable of compressing two analog sources into one analog signal by analog joint source-channel coding (AJSCC). Two circuit designs are discussed: a stacked voltage-controlled voltage source (VCVS) design with a fixed number of levels, and an improved design that supports a flexible number of AJSCC levels. Experimental results are presented for the wireless biosensor prototype, composed of printed-circuit-board realizations of the stacked-VCVS design, and circuit simulation and wireless link simulation results are presented for the improved design. The results indicate that the proposed wireless biosensor is well suited to sensing two biological signals simultaneously with high accuracy, and can be applied to a wide variety of low-power and low-cost wireless continuous health monitoring applications.

  7. Quantitative analysis of serotonin secreted by human embryonic stem cells-derived serotonergic neurons via pH-mediated online stacking-CE-ESI-MRM.

    PubMed

    Zhong, Xuefei; Hao, Ling; Lu, Jianfeng; Ye, Hui; Zhang, Su-Chun; Li, Lingjun

    2016-04-01

    A CE-ESI-MRM-based assay was developed for targeted analysis of serotonin released by human embryonic stem cell-derived serotonergic neurons in a chemically defined environment. A discontinuous electrolyte system was optimized for pH-mediated online stacking of serotonin. Combined with a liquid-liquid extraction procedure, the LOD of serotonin in Krebs'-Ringer's solution by CE-ESI-MS/MS on a 3D ion trap MS was 0.15 ng/mL. The quantitative results confirmed the serotonergic identity of the in vitro developed neurons and the capacity of these neurons to release serotonin in response to stimulus. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Design of Reflective, Photonic Shields for Atmospheric Reentry

    NASA Technical Reports Server (NTRS)

    Komarevskiy, Nikolay; Shklover, Valery; Braginsky, Leonid; Hafner, Christian; Fabrichnaya, Olga; White, Susan; Lawson, John

    2010-01-01

    We present the design of one-dimensional photonic crystal structures, which can be used as omnidirectional reflecting shields against radiative heating of space vehicles entering the Earth's atmosphere. This radiation is approximated by two broad bands centered at visible and near-infrared energies. We applied two approaches to find structures with the best omnidirectional reflecting performance. The first approach is based on a band gap analysis and leads to structures composed of stacked Bragg mirrors. In the second approach, we optimize the structure using an evolutionary strategy. The suggested structures are compared with a simple design of two stacked Bragg mirrors. Choice of the constituent materials for the layers as well as the influence of interlayer diffusion at high temperatures are discussed.

  9. Development of an active isolation mat based on dielectric elastomer stack actuators for mechanical vibration cancellation

    NASA Astrophysics Data System (ADS)

    Karsten, Roman; Flittner, Klaus; Haus, Henry; Schlaak, Helmut F.

    2013-04-01

    This paper describes the development of an active isolation mat for the cancellation of vibrations on sensitive devices with a mass of up to 500 grams. Vertical disturbing vibrations are attenuated actively, while horizontal vibrations are damped passively. The dimensions of the investigated mat are 140 × 140 × 20 mm, and the mat contains 5 dielectric elastomer stack actuators (DESA). The design and optimization of the active isolation mat were carried out with ANSYS FEM software. The best performance is shown by a DESA with an air cushion mounted on its circumference: within the mounting, the encased air increases the static stiffness and reduces the dynamic stiffness. Experimental results show that vibrations with amplitudes up to 200 μm can be actively eliminated.

  10. Microstrip Antenna for Remote Sensing of Soil Moisture and Sea Surface Salinity

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Yahya; Kona, Keerti; Manteghi, Majid; Dinardo, Steven; Hunter, Don; Njoku, Eni; Wilson, Wiliam; Yueh, Simon

    2009-01-01

    This compact, lightweight, dual-frequency antenna feed developed for future soil moisture and sea surface salinity (SSS) missions can benefit future soil and ocean studies by lowering the mass, volume, and cost of the antenna system. It also allows for airborne soil moisture and salinity remote sensors operating on small aircraft. While microstrip antenna technology has been developed for radio communications, it has yet to be applied to combined radar and radiometer instruments for Earth remote sensing. The antenna feed provides a key instrument element enabling high-resolution radiometric observations with large, deployable antennas. The design is based on the microstrip stacked-patch array (MSPA) used to feed a large, lightweight, deployable, rotating mesh antenna for spaceborne L-band (approximately 1 GHz) passive and active sensing systems. The array consists of stacked patches to provide dual-frequency capability and suitable radiation patterns. The stacked-patch microstrip element was designed to cover the required L-band center frequencies at 1.26 GHz (lower patch) and 1.413 GHz (upper patch), with dual-linear polarization capabilities; the dimensions of the patches set these frequencies. To achieve excellent polarization isolation and control of antenna sidelobes for the MSPA, the orientation of each stacked-patch element within the array is optimized to reduce cross-polarization. A specialized feed-distribution network was designed to achieve the required excitation amplitude and phase for each stacked-patch element.

  11. Optimal sensor placement for spatial lattice structure based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian

    2008-10-01

    Optimal sensor placement technique plays a key role in structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information so that structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal tests, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) were taken as the fitness functions, respectively, so that three placement designs were produced. A decimal two-dimensional array coding method, instead of binary coding, is proposed to code the solutions, and a forced mutation operator is introduced when identical genes appear during the crossover procedure. A computational simulation of a 12-bay plain truss model has been implemented to demonstrate the feasibility of the three optimal algorithms above. The optimal sensor placements obtained using the improved genetic algorithm are compared with those gained by the existing genetic algorithm using the binary coding method. Furthermore, a comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan expansion mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the advantage of the different fitness functions. The results show that the innovations in the genetic algorithm proposed in this paper can enlarge the gene storage and improve the convergence of the algorithm. More importantly, all three optimal sensor placement methods provide reliable results and identify the vibration characteristics of the 12-bay plain truss model accurately.
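
    A toy version of GA-based sensor placement: individuals are coded as arrays of DOF indices (decimal rather than binary coding), one gene is force-mutated per offspring, and a MAC-based fitness penalizes the largest off-diagonal MAC value over the selected DOFs. The random stand-in mode shapes, population sizes, and the omission of crossover are simplifications for illustration, not the paper's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_dof, n_modes, k = 36, 5, 8
    phi = rng.normal(size=(n_dof, n_modes))        # stand-in mode shape matrix

    def fitness(sensors):
        """Higher is better: small off-diagonal MAC means distinguishable modes."""
        p = phi[list(sensors)]                     # restrict modes to sensor DOFs
        g = p.T @ p
        mac = g**2 / np.outer(np.diag(g), np.diag(g))
        return -(mac - np.eye(n_modes)).max()

    def mutate(ind):
        """Forced mutation: overwrite one gene with a random DOF index."""
        out = list(ind)
        out[rng.integers(k)] = rng.integers(n_dof)
        return tuple(sorted(set(out)))[:k] if len(set(out)) >= k else ind

    pop = [tuple(rng.choice(n_dof, k, replace=False)) for _ in range(40)]
    for gen in range(100):                         # elitist selection + mutation
        pop.sort(key=fitness, reverse=True)
        pop = pop[:20] + [mutate(p) for p in pop[:20]]
    print("best sensors:", sorted(pop[0]), "fitness:", fitness(pop[0]))
    ```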

  12. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    PubMed

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM, combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
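
    The Huffman step can be made concrete: build a prefix code over 9 constellation symbols so that more probable symbols receive shorter codewords, which is what makes the transmitted symbol distribution non-uniform. The dyadic probabilities below are illustrative, not the paper's optimized values.

    ```python
    import heapq

    def huffman(probs):
        """Prefix code from symbol probabilities (classic Huffman merging)."""
        heap = [(p, i, (sym,)) for i, (sym, p) in enumerate(probs.items())]
        heapq.heapify(heap)
        code = {s: "" for s in probs}
        counter = len(heap)                       # tie-breaker for the heap
        while len(heap) > 1:
            p1, _, syms1 = heapq.heappop(heap)
            p2, _, syms2 = heapq.heappop(heap)
            for s in syms1: code[s] = "0" + code[s]
            for s in syms2: code[s] = "1" + code[s]
            heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))
            counter += 1
        return code

    probs = {f"s{i}": p for i, p in enumerate(
        [0.25, 0.125, 0.125, 0.125, 0.125, 0.0625, 0.0625, 0.0625, 0.0625])}
    code = huffman(probs)
    print(code)                      # frequent symbols get shorter codewords
    print("avg length:", sum(probs[s] * len(c) for s, c in code.items()))
    ```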

  13. Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding

    NASA Astrophysics Data System (ADS)

    Susemihl, Alex; Meir, Ron; Opper, Manfred

    2013-03-01

    Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.

  14. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  15. Wing Weight Optimization Under Aeroelastic Loads Subject to Stress Constraints

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Issac, J.; Macmurdy, D.; Guruswamy, Guru P.

    1997-01-01

    A minimum-weight optimization of the wing under aeroelastic loads subject to stress constraints is carried out. The loads for the optimization are based on aeroelastic trim. The design variables are the thicknesses of the wing skins and planform variables. The composite plate structural model incorporates first-order shear deformation theory, the wing deflections are expressed using Chebyshev polynomials, and a Rayleigh-Ritz procedure is adopted for the structural formulation. The aerodynamic pressures provided by the aerodynamic code at a discrete number of grid points are represented as a bilinear distribution on the composite plate code to solve for the deflections and stresses in the wing. The lifting-surface aerodynamic code FAST is presently being used to generate the pressure distribution over the wing. The envisioned ENSAERO/Plate is an aeroelastic analysis code which combines ENSAERO version 3.0 (for analysis of wing-body configurations) with the composite plate code.

  16. Model based LV-reconstruction in bi-plane x-ray angiography

    NASA Astrophysics Data System (ADS)

    Backfrieder, Werner; Carpella, Martin; Swoboda, Roland; Steinwender, Clemens; Gabriel, Christian; Leisch, Franz

    2005-04-01

    Interventional x-ray angiography is state of the art in the diagnosis and therapy of severe diseases of the cardiovascular system. Diagnosis is based on contrast-enhanced dynamic projection images of the left ventricle. A new model-based algorithm for three-dimensional reconstruction of the left ventricle from bi-planar angiograms was developed. Parametric superellipses are deformed until their projection profiles optimally fit the measured ventricular projections; the deformation is controlled by a simplex optimization procedure, and the resulting optimized parameter set builds the initial guess for neighboring slices. A three-dimensional surface model of the ventricle is built from the stacked contours. The accuracy of the algorithm has been tested with mathematical phantom data and clinical data. Results show conformance with the provided projection data, and the high convergence speed makes the algorithm useful for clinical application.

  17. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Qing; Whaley, Richard Clint; Qasem, Apan

    This report summarizes our effort and results of building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.

  18. Weighted stacking of seismic AVO data using hybrid AB semblance and local similarity

    NASA Astrophysics Data System (ADS)

    Deng, Pan; Chen, Yangkang; Zhang, Yu; Zhou, Hua-Wei

    2016-04-01

    The common-midpoint (CMP) stacking technique plays an important role in enhancing the signal-to-noise ratio (SNR) in seismic data processing and imaging. Weighted stacking is often used to improve the performance of conventional equal-weight stacking in further attenuating random noise and handling the amplitude variations in real seismic data. In this study, we propose to use a hybrid framework of combining AB semblance and a local-similarity-weighted stacking scheme. The objective is to achieve an optimal stacking of the CMP gathers with class II amplitude-variation-with-offset (AVO) polarity-reversal anomaly. The selection of high-quality near-offset reference trace is another innovation of this work because of its better preservation of useful energy. Applications to synthetic and field seismic data demonstrate a great improvement using our method to capture the true locations of weak reflections, distinguish thin-bed tuning artifacts, and effectively attenuate random noise.
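
    A stripped-down sketch of similarity-weighted stacking: each trace in the gather is weighted by its correlation with a near-offset reference trace before summation. For brevity a single global correlation per trace replaces the paper's local similarity computed in sliding windows, and the AB semblance component is not modeled.

    ```python
    import numpy as np

    def similarity_weighted_stack(gather, ref):
        """gather: (n_traces, n_samples); ref: (n_samples,) reference trace."""
        weights = []
        for trace in gather:
            c = np.corrcoef(trace, ref)[0, 1]
            weights.append(max(c, 0.0))        # ignore anti-correlated traces
        w = np.asarray(weights)
        if w.sum() == 0:
            return gather.mean(axis=0)         # fall back to equal weights
        return (w[:, None] * gather).sum(axis=0) / w.sum()

    rng = np.random.default_rng(1)
    signal = np.sin(np.linspace(0, 6 * np.pi, 200))
    gather = signal + rng.normal(0, 0.5, (12, 200))
    stacked = similarity_weighted_stack(gather, gather[0])
    print("correlation with clean signal:", np.corrcoef(stacked, signal)[0, 1])
    ```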

  19. Optimal lightpath placement on a metropolitan-area network linked with optical CDMA local nets

    NASA Astrophysics Data System (ADS)

    Wang, Yih-Fuh; Huang, Jen-Fa

    2008-01-01

    A flexible optical metropolitan-area network (OMAN) [J.F. Huang, Y.F. Wang, C.Y. Yeh, Optimal configuration of OCDMA-based MAN with multimedia services, in: 23rd Biennial Symposium on Communications, Queen's University, Kingston, Canada, May 29-June 2, 2006, pp. 144-148] structured with OCDMA linkage is proposed to support multimedia services with multi-rate or various qualities of service. To prioritize transmissions in OCDMA, the orthogonal variable spreading factor (OVSF) codes widely used in wireless CDMA are adopted. In addition, for feasible multiplexing, unipolar OCDMA modulation [L. Nguyen, B. Aazhang, J.F. Young, All-optical CDMA with bipolar codes, IEEE Electron. Lett. 31 (6) (1995) 469-470] is used to generate the code selector of multi-rate OMAN, and a flexible fiber-grating-based system is used for the equipment on OCDMA-OVSF code. These enable an OMAN to assign suitable OVSF codes when creating different-rate lightpaths. How to optimally configure a multi-rate OMAN is a challenge because of displaced lightpaths. In this paper, a genetically modified genetic algorithm (GMGA) [L.R. Chen, Flexible fiber Bragg grating encoder/decoder for hybrid wavelength-time optical CDMA, IEEE Photon. Technol. Lett. 13 (11) (2001) 1233-1235] is used to preplan lightpaths in order to optimally configure an OMAN. To evaluate the performance of the GMGA, we compared it with different preplanning optimization algorithms. Simulation results revealed that the GMGA very efficiently solved the problem.
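
    The OVSF codes themselves are easy to generate: starting from [1], each code c spawns children (c, c) and (c, -c), doubling the spreading factor at every tree layer, and codes taken from different branches are mutually orthogonal, which is what allows different rates to coexist. A minimal sketch (ordering within a layer is an implementation choice):

    ```python
    def ovsf_layer(codes):
        """One tree layer: each code c yields (c, c) and (c, -c)."""
        return [c + c for c in codes] + [c + [-x for x in c] for c in codes]

    def ovsf(sf):
        """All OVSF codes of spreading factor sf (sf must be a power of 2)."""
        codes = [[1]]
        while len(codes[0]) < sf:
            codes = ovsf_layer(codes)
        return codes

    for c in ovsf(4):
        print(c)
    a, b = ovsf(4)[0], ovsf(4)[1]
    print("dot =", sum(x * y for x, y in zip(a, b)))   # 0 -> orthogonal
    ```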

  20. Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.

    PubMed

    Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R

    2006-02-28

    The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
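
    The selection criterion is directly computable: enumerate the codewords of a constant-weight code and form the ratio of the maximum to the minimum pairwise Hamming distance (smaller is better). The sketch below evaluates the full weight-2, length-4 code as a small worked example; real demultiplexer designs would compare this ratio across larger candidate codes.

    ```python
    from itertools import combinations

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def distance_ratio(code):
        """max/min Hamming distance over distinct codewords (smaller is better)."""
        dists = [hamming(a, b) for a, b in combinations(code, 2)]
        return max(dists) / min(dists)

    n, w = 4, 2   # all length-4 binary words of constant weight 2
    code = [tuple(1 if i in s else 0 for i in range(n))
            for s in combinations(range(n), w)]
    print(code)
    print("max/min Hamming distance ratio:", distance_ratio(code))   # 2.0
    ```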

  1. Nature and magnitude of aromatic base stacking in DNA and RNA: Quantum chemistry, molecular mechanics, and experiment.

    PubMed

    Sponer, Jiří; Sponer, Judit E; Mládek, Arnošt; Jurečka, Petr; Banáš, Pavel; Otyepka, Michal

    2013-12-01

    Base stacking is a major interaction shaping and stabilizing nucleic acids. During the last decades, base stacking has been extensively studied by experimental and theoretical methods. Advanced quantum-chemical calculations clarified that base stacking is a common interaction, which in the first approximation can be described as a combination of the three most basic contributions to molecular interactions, namely, electrostatic interaction, London dispersion attraction and short-range repulsion. There is no specific π-π energy term, associated with the delocalized π electrons of the aromatic rings, that cannot be described by the mentioned contributions. Base stacking can be rather reasonably approximated by simple molecular simulation methods based on well-calibrated common force fields, although the force fields do not include the nonadditivity of stacking, the anisotropy of dispersion interactions, and some other effects. However, the description of stacking association in the condensed phase and the understanding of the stacking role in biomolecules remain a difficult problem, as the net base stacking forces always act in a complex and context-specific environment. Moreover, the stacking forces are balanced with many other energy contributions. Differences in the definition of stacking in experimental and theoretical studies are explained. Copyright © 2013 Wiley Periodicals, Inc.

  2. Fresnel zone plate stacking in the intermediate field for high efficiency focusing in the hard X-ray regime

    DOE PAGES

    Gleber, Sophie -Charlotte; Wojcik, Michael; Liu, Jie; ...

    2014-11-05

    The focusing efficiency of Fresnel zone plates (FZPs) for X-rays depends on zone height, while the achievable spatial resolution depends on the width of the finest zones. FZPs with optimal efficiency and sub-100-nm spatial resolution require high-aspect-ratio structures which are difficult to fabricate with current technology, especially for the hard X-ray regime. A possible solution is to stack several zone plates. To increase the number of FZPs within one stack, we first demonstrate intermediate-field stacking and apply this method to stacks of up to five FZPs with adjusted diameters. Approaching the respective optimum zone height, we maximized efficiencies for high-resolution focusing at three different energies: 10, 11.8, and 25 keV.

  3. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    PubMed

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model; the legacy "chicken-and-egg" dilemma in video coding is overcome by this learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.

  4. Soldier System Power Sources

    DTIC Science & Technology

    2006-12-31

    dependence, and estimated mass of the stack. The model equations were derived from peer-reviewed academic journals, internal studies, and texts on the subject... Liu, R. Dougal, E. Solodovnik, "VTB-Based Design of a Standalone Photovoltaic Power System", International Journal of Green Energy, Vol. 1, No. 3... Powered Battery Chargers; Exergy minimization; Use of secondary cells as temporary energy repositories; Design an automatic energy optimization

  5. Multi-board kernel communication using socket programming for embedded applications

    NASA Astrophysics Data System (ADS)

    Mishra, Ashish; Girdhar, Neha; Krishnia, Nikita

    2016-03-01

    In large application projects there is often a need to communicate between two different processors or two different kernels. The aim of this paper is to communicate between two different kernels using an efficient method. The TCP/IP protocol is implemented to communicate between two boards via the Ethernet port using the lwIP (lightweight IP) stack, a small independent implementation of the TCP/IP stack suitable for embedded systems. While retaining TCP/IP functionality, the lwIP stack reduces both memory use and code size. In this communication setup, a Raspberry Pi acts as the active client and a field-programmable gate array (FPGA) board as the passive server, and the two communicate via Ethernet. Three applications based on TCP/IP client-server network communication have been implemented. The echo server application is used to communicate between the two kernels on the two boards; socket programming is used because it is independent of platform and programming language. TCP transmit and receive throughput test applications measure the maximum throughput of data transmission. These applications are modeled on communication with the open-source tool iperf, which measures the throughput rate by sending or receiving a constant piece of data to or from the client or server according to the test application.
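
    A minimal sketch of the echo exchange described above, written with Python sockets on one host; threads stand in for the two boards, and the loopback address and port are assumptions. A real deployment would run the server on the lwIP side.

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 5001          # stand-ins for the board addresses

    def echo_server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                while data := conn.recv(1024):   # echo until the client closes
                    conn.sendall(data)

    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)                              # let the server start listening

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello from the client board")
        print(cli.recv(1024))                    # b'hello from the client board'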

  6. Guanine base stacking in G-quadruplex nucleic acids

    PubMed Central

    Lech, Christopher Jacques; Heddi, Brahim; Phan, Anh Tuân

    2013-01-01

    G-quadruplexes constitute a class of nucleic acid structures defined by stacked guanine tetrads (or G-tetrads), with guanine bases from neighboring tetrads stacking with one another within the G-tetrad core. Individual G-quadruplexes can also stack with one another at their G-tetrad interfaces, leading to higher-order structures as observed in telomeric repeat-containing DNA and RNA. In this study, we investigate how guanine base stacking influences the stability of G-quadruplexes and their stacked higher-order structures. A structural survey of the Protein Data Bank is conducted to characterize experimentally observed guanine base stacking geometries within the core of G-quadruplexes and at the interface between stacked G-quadruplex structures. We couple this survey with a systematic computational examination of stacked G-tetrad energy landscapes using quantum mechanical computations. Energy calculations of stacked G-tetrads reveal large energy differences of up to 12 kcal/mol between experimentally observed geometries at the interface of stacked G-quadruplexes. Energy landscapes are also computed using an AMBER molecular mechanics description of stacking energy and are shown to agree quite well with the quantum mechanically calculated landscapes. Molecular dynamics simulations provide a structural explanation for the experimentally observed preference of parallel G-quadruplexes to stack in a 5′–5′ manner, based on the different accessible tetrad stacking modes at the stacking interfaces of 5′–5′ and 3′–3′ stacked G-quadruplexes. PMID:23268444

  7. Supra-Nanoparticle Functional Assemblies through Programmable Stacking

    DOE PAGES

    Tian, Cheng; Cordeiro, Marco Aurelio L.; Lhermitte, Julien; ...

    2017-05-25

    The quest for the by-design assembly of materials and devices from nanoscale inorganic components is well recognized. Conventional self-assembly is often limited in its ability to control material morphology and structure simultaneously. We report a general method of assembling nanoparticles in a linear “pillar” morphology with regulated internal configurations. Our approach is inspired by supramolecular systems, where intermolecular stacking guides the assembly process to form diverse linear morphologies. Programmable stacking interactions were realized through incorporation of DNA-coded recognition between the designed planar nanoparticle clusters. This resulted in the formation of multilayered pillar architectures with a well-defined internal nanoparticle organization. Furthermore, by controlling the number, position, size, and composition of the nanoparticles in each layer, a broad range of nanoparticle pillars were assembled and characterized in detail. In addition, we demonstrated the utility of this stacking assembly strategy for investigating plasmonic and electrical transport properties.

  8. Supra-Nanoparticle Functional Assemblies through Programmable Stacking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Cheng; Cordeiro, Marco Aurelio L.; Lhermitte, Julien

    The quest for the by-design assembly of materials and devices from nanoscale inorganic components is well recognized. Conventional self-assembly is often limited in its ability to control material morphology and structure simultaneously. We report a general method of assembling nanoparticles in a linear “pillar” morphology with regulated internal configurations. Our approach is inspired by supramolecular systems, where intermolecular stacking guides the assembly process to form diverse linear morphologies. Programmable stacking interactions were realized through incorporation of DNA-coded recognition between the designed planar nanoparticle clusters. This resulted in the formation of multilayered pillar architectures with a well-defined internal nanoparticle organization. Furthermore, by controlling the number, position, size, and composition of the nanoparticles in each layer, a broad range of nanoparticle pillars were assembled and characterized in detail. In addition, we demonstrated the utility of this stacking assembly strategy for investigating plasmonic and electrical transport properties.

  9. Supra-Nanoparticle Functional Assemblies through Programmable Stacking.

    PubMed

    Tian, Cheng; Cordeiro, Marco Aurelio L; Lhermitte, Julien; Xin, Huolin L; Shani, Lior; Liu, Mingzhao; Ma, Chunli; Yeshurun, Yosef; DiMarzio, Donald; Gang, Oleg

    2017-07-25

    The quest for the by-design assembly of materials and devices from nanoscale inorganic components is well recognized. Conventional self-assembly is often limited in its ability to control material morphology and structure simultaneously. Here, we report a general method of assembling nanoparticles in a linear "pillar" morphology with regulated internal configurations. Our approach is inspired by supramolecular systems, where intermolecular stacking guides the assembly process to form diverse linear morphologies. Programmable stacking interactions were realized through incorporation of DNA-coded recognition between the designed planar nanoparticle clusters. This resulted in the formation of multilayered pillar architectures with a well-defined internal nanoparticle organization. By controlling the number, position, size, and composition of the nanoparticles in each layer, a broad range of nanoparticle pillars were assembled and characterized in detail. In addition, we demonstrated the utility of this stacking assembly strategy for investigating plasmonic and electrical transport properties.

  10. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two Bonner spheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.

    In this work, a neutron spectrum unfolding code based on artificial intelligence technology is presented. The code, called ''Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres'' (NSDann2BS), was designed with a graphical user interface under the LabVIEW programming environment. The main features of this code are an embedded artificial neural network architecture optimized with the ''robust design of artificial neural networks'' methodology and the use of two Bonner spheres as the only source of information. To build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted, and the NSDann2BS code was designed using a graphical framework built on the LabVIEW programming environment. The code is friendly, intuitive, and easy to use for the end user, and is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the count rates of 252Cf, 241AmBe, and 239PuBe neutron sources, measured with a Bonner sphere system, were used as input.

  11. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two Bonner spheres

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work, a neutron spectrum unfolding code based on artificial intelligence technology is presented. The code, called "Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres" (NSDann2BS), was designed with a graphical user interface under the LabVIEW programming environment. The main features of this code are an embedded artificial neural network architecture optimized with the "robust design of artificial neural networks" methodology and the use of two Bonner spheres as the only source of information. To build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted, and the NSDann2BS code was designed using a graphical framework built on the LabVIEW programming environment. The code is friendly, intuitive, and easy to use for the end user, and is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the count rates of 252Cf, 241AmBe, and 239PuBe neutron sources, measured with a Bonner sphere system, were used as input.
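
    The sketch below shows the shape of the computation such an embedded net performs: two Bonner-sphere count rates in, a coarse spectrum out. The 2-8-6 topology and the random weights are stand-ins; the real code uses a topology optimized by the robust-design methodology and trained weights.

    # Toy forward pass of a small unfolding net (topology and weights assumed).
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)   # hidden layer (assumed size)
    W2, b2 = rng.normal(size=(6, 8)), np.zeros(6)   # output layer: 6 energy bins

    def unfold(count_rates):
        h = np.tanh(W1 @ count_rates + b1)          # hidden activations
        out = W2 @ h + b2
        return np.maximum(out, 0.0)                 # fluence cannot be negative

    print(unfold(np.array([120.0, 85.0])))          # two sphere count rates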

  12. Optimal design and operation of solid oxide fuel cell systems for small-scale stationary applications

    NASA Astrophysics Data System (ADS)

    Braun, Robert Joseph

    The advent of maturing fuel cell technologies presents an opportunity to achieve significant improvements in energy conversion efficiencies at many scales, thereby simultaneously extending our finite resources and reducing "harmful" energy-related emissions to levels well below near-future regulatory standards. However, before the advantages of fuel cells can be realized, systems-level design issues regarding their application must be addressed. Using modeling and simulation, the present work offers optimal system design and operation strategies for stationary solid oxide fuel cell systems applied to single-family detached dwellings. A one-dimensional, steady-state finite-difference model of a solid oxide fuel cell (SOFC) is generated and verified against other mathematical SOFC models in the literature. Fuel cell system balance-of-plant components and costs are also modeled and used to provide an estimate of system capital and life cycle costs. The models are used to evaluate optimal cell-stack power output and the impact of cell operating and design parameters, fuel type, thermal energy recovery, system process design, and operating strategy on overall system energetic and economic performance. Optimal cell design voltage, fuel utilization, and operating temperature parameters are found by minimizing life cycle costs. System design evaluations reveal that hydrogen-fueled SOFC systems demonstrate lower system efficiencies than methane-fueled systems. The use of recycled cell exhaust gases in process design in the stack periphery is found to produce the highest system electric and cogeneration efficiencies while achieving the lowest capital costs. Annual simulations reveal that efficiencies of 45% electric (LHV basis), 85% cogenerative, and simple economic paybacks of 5-8 years are feasible for 1-2 kW SOFC systems in residential-scale applications. Design guidelines that offer additional suggestions related to fuel cell-stack sizing and operating strategy (base-load or load-following and cogeneration or electric-only) are also presented.

  13. Dynamic modeling, experimental evaluation, optimal design and control of integrated fuel cell system and hybrid energy systems for building demands

    NASA Astrophysics Data System (ADS)

    Nguyen, Gia Luong Huu

    Fuel cells can produce electricity with high efficiency, low pollutant emissions, and low noise. With the advent of fuel cell technologies, fuel cell systems have been demonstrated as reliable power generators with power outputs from a few watts to a few megawatts. With proper equipment, fuel cell systems can also produce heating and cooling, thus increasing their overall efficiency. To increase acceptance from electrical utilities and building owners, fuel cell systems must operate more dynamically and integrate well with renewable energy resources. This research studies the dynamic performance of fuel cells and the integration of fuel cells with other equipment at three levels: (i) the fuel cell stack operating on hydrogen and reformate gases, (ii) the fuel cell system consisting of a fuel reformer, a fuel cell stack, and a heat recovery unit, and (iii) the hybrid energy system consisting of photovoltaic panels, a fuel cell system, and energy storage. In the first part, this research studied the steady-state and dynamic performance of a high temperature PEM fuel cell stack. Collaborators at Aalborg University (Aalborg, Denmark) conducted experiments on a high temperature PEM fuel cell short stack at steady state and during transients. Along with the experimental activities, this research developed a first-principles dynamic model of a fuel cell stack, which was compared to the experimental results when operating on different reformate concentrations. Finally, the dynamic performance of the fuel cell stack for a rapid increase and rapid decrease in power was evaluated; the dynamic model accurately predicted the performance of the well-performing cells in the experimental fuel cell stack. The second part of the research studied the dynamic response of a high temperature PEM fuel cell system consisting of a fuel reformer, a fuel cell stack, and a heat recovery unit with high thermal integration. After verifying the model against the experimental data, the research studied the control of airflow to regulate the temperature of reactors within the fuel processor. The dynamic model provided a platform to test the dynamic response for different control gains. With sufficient sensing and appropriate control, a rapid response that maintains the reactor temperature despite an increase in power was possible. The third part of the research studied the use of a fuel cell in conjunction with photovoltaic panels and energy storage to provide electricity for buildings. This research developed an optimization framework to determine the size of each device in the hybrid energy system that satisfies the electrical demands of buildings at the lowest cost. The advantage of combining the fuel cell with photovoltaics and energy storage was the ability to operate the fuel cell at baseload at night, reducing the need for large battery systems to shift solar power produced in the day to the night. In addition, the dispatchability of the fuel cell provided an extra degree of freedom against unforeseen disturbances. An operation framework based on model predictive control showed that the method is suitable for optimizing the dispatch of the hybrid energy system.
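
    A toy version of the dispatch problem in the third part can be written as a small linear program: choose fuel-cell output and battery charging to meet the building load at minimum fuel cost. All profiles, costs, and limits below are assumptions, and the real framework is a receding-horizon MPC rather than a single solve.

    # Minimal dispatch LP: fuel cell + battery + fixed PV serving a load.
    import numpy as np
    from scipy.optimize import linprog

    load = np.array([3.0, 5.0, 6.0, 4.0])   # kW demand over 4 periods (assumed)
    pv   = np.array([0.0, 2.0, 4.0, 1.0])   # kW solar over 4 periods (assumed)
    T = len(load)
    fuel_cost, fc_max, batt_max, soc_max = 0.12, 5.0, 2.0, 4.0

    # Decision vector x = [fc_0..fc_3, ch_0..ch_3, dis_0..dis_3]
    c = np.r_[np.full(T, fuel_cost), np.zeros(2 * T)]

    # Power balance each period: fc - ch + dis = load - pv
    A_eq = np.hstack([np.eye(T), -np.eye(T), np.eye(T)])
    b_eq = load - pv

    # State of charge stays within [0, soc_max]: 0 <= cumsum(ch - dis) <= soc_max
    L = np.tril(np.ones((T, T)))
    A_ub = np.vstack([np.hstack([np.zeros((T, T)),  L, -L]),
                      np.hstack([np.zeros((T, T)), -L,  L])])
    b_ub = np.r_[np.full(T, soc_max), np.zeros(T)]

    bounds = [(0, fc_max)] * T + [(0, batt_max)] * (2 * T)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print("fuel cell schedule:", res.x[:T].round(2))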

  14. Time domain topology optimization of 3D nanophotonic devices

    NASA Astrophysics Data System (ADS)

    Elesin, Y.; Lazarov, B. S.; Jensen, J. S.; Sigmund, O.

    2014-02-01

    We present an efficient parallel topology optimization framework for the design of large-scale 3D nanophotonic devices. The code shows excellent scalability and is demonstrated for the optimization of a broadband frequency splitter, a waveguide intersection, a photonic crystal-based waveguide, and a nanowire-based waveguide. The obtained results are compared to simplified 2D studies, and we demonstrate that 3D topology optimization may lead to significant performance improvements.

  15. Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU

    NASA Astrophysics Data System (ADS)

    Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.

    1982-06-01

    In the framework of pipelined vector architecture, the efficiency of vector processing is assessed for plasma MHD codes in nuclear fusion research. Using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism on current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 relative to the highly optimized scalar versions are obtained for ERATO (a linear stability code), AEOLUS-R1 (a nonlinear stability code), and APOLLO (a 1-1/2D transport code), respectively. Problems of pipelined vector processors are discussed from the viewpoints of restructuring, optimization, and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.
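
    The restructuring issue the paper discusses survives in today's array languages: the same stencil update can be written element-by-element or as a whole-array operation that a vector unit (or NumPy) executes efficiently. The 1-D diffusion kernel below is a stand-in, not code from ERATO or APOLLO.

    # Scalar loop vs. vectorized form of the same stencil update.
    import time
    import numpy as np

    u = np.random.rand(1_000_000)
    nu = 0.1

    def scalar_step(u):
        out = u.copy()
        for i in range(1, len(u) - 1):       # one element per trip, scalar style
            out[i] = u[i] + nu * (u[i - 1] - 2 * u[i] + u[i + 1])
        return out

    def vector_step(u):
        out = u.copy()
        out[1:-1] = u[1:-1] + nu * (u[:-2] - 2 * u[1:-1] + u[2:])  # whole array
        return out

    t0 = time.perf_counter(); scalar_step(u); t1 = time.perf_counter()
    vector_step(u); t2 = time.perf_counter()
    print(f"vectorized speed-up ~ {(t1 - t0) / (t2 - t1):.0f}x")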

  16. Solid State Energy Conversion Energy Alliance (SECA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hennessy, Daniel; Sibisan, Rodica; Rasmussen, Mike

    2011-09-12

    The overall objective is to develop a Solid Oxide Fuel Cell (SOFC) stack that can be economically produced in high volumes and mass customized for different applications in transportation, stationary power generation, and military market sectors. In Phase I, work will be conducted on system design and integration, stack development, and development of reformers for natural gas and gasoline. Specifically, Delphi-Battelle will fabricate and test a 5 kW stationary power generation system consisting of a SOFC stack, a steam reformer for natural gas, and balance-of-plant (BOP) components, having an expected efficiency of ≥ 35 percent (AC/LHV). In Phase II and Phase III, the emphasis will be to improve the SOFC stack, reduce start-up time, improve thermal cyclability, demonstrate operation on diesel fuel, and substantially reduce materials and manufacturing cost by integrating several functions into one component and thus reducing the number of components in the system. In Phase II, Delphi-Battelle will fabricate and demonstrate two SOFC systems: an improved stationary power generation system consisting of an improved SOFC stack with integrated reformation of natural gas, and the BOP components, with an expected efficiency of ≥ 40 percent (AC/LHV), and a mobile 5 kW system for heavy-duty trucks and military power applications consisting of an SOFC stack, reformer utilizing anode tailgate recycle for diesel fuel, and BOP components, with an expected efficiency of ≥ 30 percent (DC/LHV). Finally, in Phase III, Delphi-Battelle will fabricate and test a 5 kW Auxiliary Power Unit (APU) for mass-market automotive application consisting of an optimized SOFC stack, an optimized catalytic partial oxidation (CPO) reformer for gasoline, and BOP components, having an expected efficiency of ≥ 30 percent (DC/LHV) and a factory cost of ≤ $400/kW.

  17. Solid State Energy Conversion Energy Alliance (SECA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hennessy, Daniel; Sibisan, Rodica; Rasmussen, Mike

    2011-09-12

    The overall objective is to develop a solid oxide fuel cell (SOFC) stack that can be economically produced in high volumes and mass customized for different applications in transportation, stationary power generation, and military market sectors. In Phase I, work will be conducted on system design and integration, stack development, and development of reformers for natural gas and gasoline. Specifically, Delphi-Battelle will fabricate and test a 5 kW stationary power generation system consisting of a SOFC stack, a steam reformer for natural gas, and balance-of-plant (BOP) components, having an expected efficiency of ≥ 35 percent (AC/LHV). In Phase II and Phase III, the emphasis will be to improve the SOFC stack, reduce start-up time, improve thermal cyclability, demonstrate operation on diesel fuel, and substantially reduce materials and manufacturing cost by integrating several functions into one component and thus reducing the number of components in the system. In Phase II, Delphi-Battelle will fabricate and demonstrate two SOFC systems: an improved stationary power generation system consisting of an improved SOFC stack with integrated reformation of natural gas, and the BOP components, with an expected efficiency of ≥ 40 percent (AC/LHV), and a mobile 5 kW system for heavy-duty trucks and military power applications consisting of an SOFC stack, reformer utilizing anode tailgate recycle for diesel fuel, and BOP components, with an expected efficiency of ≥ 30 percent (DC/LHV). Finally, in Phase III, Delphi-Battelle will fabricate and test a 5 kW Auxiliary Power Unit (APU) for mass-market automotive application consisting of an optimized SOFC stack, an optimized catalytic partial oxidation (CPO) reformer for gasoline, and BOP components, having an expected efficiency of ≥ 30 percent (DC/LHV) and a factory cost of ≤ $400/kW.

  18. Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality.

    PubMed

    Li, Zhongyu; Butler, Erik; Li, Kang; Lu, Aidong; Ji, Shuiwang; Zhang, Shaoting

    2018-02-12

    Recently released large-scale neuron morphological data have greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for neuron morphological data: the 3D neurons are first projected into binary images, and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with hand-crafted features for more accurate representation. Because exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on augmented reality (AR) techniques, which helps users explore neuron morphologies in an interactive and immersive manner.

  19. Analysis of pesticides in soy milk combining solid-phase extraction and capillary electrophoresis-mass spectrometry.

    PubMed

    Hernández-Borges, Javier; Rodriguez-Delgado, Miguel Angel; García-Montelongo, Francisco J; Cifuentes, Alejandro

    2005-06-01

    In this work, the determination of a group of triazolopyrimidine sulfonanilide herbicides (cloransulam-methyl, metosulam, flumetsulam, florasulam, and diclosulam) in soy milk by capillary electrophoresis-mass spectrometry (CE-MS) is presented. The main electrospray ionization (ESI) interface parameters (nebulizer pressure, dry gas flow rate, dry gas temperature, and composition of the sheath liquid) are optimized using a central composite design. To increase the sensitivity of the CE-MS method, an off-line sample preconcentration procedure based on solid-phase extraction (SPE) is combined with an on-line stacking procedure (normal stacking mode, NSM). Samples could be injected for up to 100 s, providing limits of detection (LODs) down to 74 µg/L, i.e., at the low-ppb level, with relative standard deviation (RSD) values between 3.8% and 6.4% for peak areas on the same day, and between 6.5% and 8.1% on three different days. The usefulness of the optimized SPE-NSM-CE-MS procedure is demonstrated through the sensitive quantification of the selected pesticides in soy milk samples.

  20. Multimode resistive switching in nanoscale hafnium oxide stack as studied by atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Y. (E-mail: houyi@pku.edu.cn; lfliu@pku.edu.cn); IMEC, Kapeldreef 75, B-3001 Heverlee; Department of Physics and Astronomy, KU Leuven, Celestijnenlaan 200D, B-3001 Heverlee

    2016-07-11

    The nanoscale resistive switching in hafnium oxide stack is investigated by the conductive atomic force microscopy (C-AFM). The initial oxide stack is insulating and electrical stress from the C-AFM tip induces nanometric conductive filaments. Multimode resistive switching can be observed in consecutive operation cycles at one spot. The different modes are interpreted in the framework of a low defect quantum point contact theory. The model implies that the optimization of the conductive filament active region is crucial for the future application of nanoscale resistive switching devices.

  1. Optimization technique of wavefront coding system based on ZEMAX externally compiled programs

    NASA Astrophysics Data System (ADS)

    Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2016-10-01

    Wavefront coding is a means of athermalization for infrared imaging systems, and the design of the phase plate is key to system performance. This paper applies ZEMAX's externally compiled programs to the optimization of the phase mask within the normal optical design process: the evaluation function of the wavefront coding system is defined based on the consistency of the modulation transfer function (MTF), and the optimization is accelerated by introducing mathematical software. The user writes an external program that computes the evaluation function, exploiting the computing power of the mathematical software to find the optimal parameters of the phase mask; convergence is accelerated through a genetic algorithm (GA), and a dynamic data exchange (DDE) interface between ZEMAX and the mathematical software provides high-speed data exchange. The optimization of a rotationally symmetric phase mask and a cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times with the rotationally symmetric phase mask and up to 10 times with the cubic phase mask, the MTF consistency improves markedly, and the optimized systems operate over a temperature range of -40 °C to 60 °C. The results show that, thanks to the externally compiled functions and DDE, this optimization method makes it convenient to define unconventional optimization goals and to quickly optimize optical systems with special properties; it should be of considerable significance for the optimization of unconventional optical systems.
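
    The skeleton below shows the optimization loop this setup implies: a GA searching phase-mask parameters against an externally computed merit. The evaluate() function here is a made-up surrogate for the MTF-consistency merit that ZEMAX would return over DDE; only the GA loop itself is the point.

    # GA skeleton with an external (here: surrogate) evaluation function.
    import numpy as np

    rng = np.random.default_rng(1)

    def evaluate(alpha):
        # Made-up surrogate merit: pretend MTF consistency is best near alpha = 12.
        # In the real setup this value would come back from ZEMAX over DDE.
        return (alpha - 12.0) ** 2

    pop = rng.uniform(0.0, 30.0, size=20)                 # candidate mask strengths
    for generation in range(50):
        fitness = np.array([evaluate(a) for a in pop])
        parents = pop[np.argsort(fitness)[:10]]           # keep the best half
        children = parents + rng.normal(0.0, 0.5, size=10)  # mutated offspring
        pop = np.r_[parents, children]

    print("best mask strength:", pop[np.argmin([evaluate(a) for a in pop])])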

  2. Optimizing legacy molecular dynamics software with directive-based offload

    DOE PAGES

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; ...

    2015-05-14

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and a coprocessor. We also demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, code optimizations for the coprocessor also result in speedups on the CPU, in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel(R) Xeon Phi(TM) coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS. (C) 2015 Elsevier B.V. All rights reserved.

  3. In-capillary DPPH-capillary electrophoresis-diode array detection combined with reversed-electrode polarity stacking mode for screening and quantifying major antioxidants in Cuscuta chinensis Lam.

    PubMed

    Liu, Jiao; Tian, Ji; Li, Jin; Azietaku, John Teye; Zhang, Bo-Li; Gao, Xiu-Mei; Chang, Yan-Xu

    2016-07-01

    An in-capillary 2,2-diphenyl-1-picrylhydrazyl (DPPH)-CE-DAD method combined with reversed-electrode polarity stacking mode has been developed to screen and quantify the active antioxidant components of Cuscuta chinensis Lam. The operating parameters were optimized with regard to the pH and concentration of the buffer solution, SDS, β-CDs, the organic modifier, and the separation voltage and temperature. Six antioxidants, including chlorogenic acid, p-coumaric acid, rutin, hyperin, isoquercitrin, and astragalin, were screened, and the total antioxidant activity of the complex matrix was successfully evaluated from the decreased peak area of DPPH by the established DPPH-CE-DAD method. Sensitivity was enhanced under reversed-electrode polarity stacking mode, with a 10- to 31-fold improvement in detection sensitivity for each analyte. The results demonstrated that the newly established in-capillary DPPH-CE-DAD method combined with reversed-electrode polarity stacking mode integrates sample concentration, the oxidizing reaction, separation, and detection in one capillary to fully automate the system. It is considered a suitable technique for the separation, screening, and determination of trace antioxidants in natural products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Fuel-cell based power generating system having power conditioning apparatus

    DOEpatents

    Mazumder, Sudip K.; Pradhan, Sanjaya K.

    2010-10-05

    A power conditioner includes power converters for supplying power to a load, a set of selection switches corresponding to the power converters for selectively connecting the fuel-cell stack to the power converters, and another set of selection switches corresponding to the power converters for selectively connecting the battery to the power converters. The power converters output combined power that substantially optimally meets the present demand of the load.

  5. Electrode channel selection based on backtracking search optimization in motor imagery brain-computer interfaces.

    PubMed

    Dai, Shengfa; Wei, Qingguo

    2017-01-01

    The common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, using a large number of channels makes common spatial pattern prone to over-fitting and makes the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of all channels to save computational time and improve classification accuracy. In this paper, a novel method named the backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional vector, with each component representing one channel. A population of binary codes is generated randomly at the beginning, and channels are then selected according to the evolution of these codes: the number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with far fewer channels compared to standard common spatial pattern with all channels.
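
    A sketch of the encoding and objective is given below. The error_rate() stub is a random stand-in for the CSP-plus-classifier evaluation, and the search loop is a simplified evolutionary update rather than the full backtracking search algorithm with its historical population.

    # Binary channel masks scored by error rate + relative channel count.
    import numpy as np

    rng = np.random.default_rng(7)
    N_CHANNELS, LAM = 32, 0.2

    def error_rate(mask):
        # Stand-in: a real implementation would run CSP + classification here.
        return rng.uniform(0.1, 0.4) - 0.002 * mask.sum()

    def objective(mask):
        if mask.sum() == 0:
            return 1.0                                 # empty subsets are useless
        return error_rate(mask) + LAM * mask.sum() / N_CHANNELS

    pop = rng.integers(0, 2, size=(20, N_CHANNELS))    # random binary codes
    for _ in range(30):
        scores = np.array([objective(m) for m in pop])
        elite = pop[scores.argmin()].copy()
        flips = rng.random(pop.shape) < 0.05           # mutate a few channels
        pop = np.where(flips, 1 - pop, pop)
        pop[0] = elite                                 # keep the best individual

    best = pop[np.argmin([objective(m) for m in pop])]
    print("channels kept:", np.flatnonzero(best))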

  6. Optimization of lattice surgery is NP-hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon J.

    2017-09-01

    The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.

  7. High-Performance I/O: HDF5 for Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav

    2015-01-01

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of the state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance-computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.

  8. High-Performance I/O: HDF5 for Lattice QCD

    DOE PAGES

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav; ...

    2017-05-09

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of the state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance-computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.
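
    A minimal h5py sketch of the chunking point made above: a lattice-sized dataset written with an explicit chunk shape so that a single time slice can be read back cheaply. The file name, dataset layout, and chunk shape are assumptions, not the USQCD framework's actual schema.

    # Chunked HDF5 dataset: one time-slice per chunk (layout assumed).
    import numpy as np
    import h5py

    lattice = np.random.rand(8, 8, 8, 16)            # toy 8^3 x 16 field
    with h5py.File("lattice.h5", "w") as f:
        f.create_dataset(
            "field",
            data=lattice,
            chunks=(8, 8, 8, 1),                     # one time slice per chunk
            compression="gzip",
        )
    with h5py.File("lattice.h5", "r") as f:
        slice_t0 = f["field"][..., 0]                # reads a single chunk
        print(slice_t0.shape)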

  9. New t-gap insertion-deletion-like metrics for DNA hybridization thermodynamic modeling.

    PubMed

    D'yachkov, Arkadii G; Macula, Anthony J; Pogozelski, Wendy K; Renz, Thomas E; Rykov, Vyacheslav V; Torney, David C

    2006-05-01

    We discuss the concept of t-gap block isomorphic subsequences and use it to describe new abstract string metrics that are similar to the Levenshtein insertion-deletion metric. Some of the metrics we define can be used to model a thermodynamic distance function on single-stranded DNA sequences. Our model captures a key aspect of the nearest-neighbor thermodynamic model for hybridized DNA duplexes. One version of our metric gives the maximum number of stacked pairs of hydrogen-bonded nucleotide base pairs that can be present in any secondary structure of a hybridized DNA duplex without pseudoknots. Thermodynamic distance functions are important components in the construction of DNA codes, and DNA codes are important components in biomolecular computing, nanotechnology, and other biotechnical applications that employ DNA hybridization assays. We show how our new distances can be calculated using a dynamic programming method, and we derive a Varshamov-Gilbert-like lower bound on the size of some codes that use these distance functions as constraints. We also discuss software implementation of our DNA code design methods.
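
    The toy dynamic program below is in the spirit of the stacked-pair metric: it counts the maximum number of adjacent Watson-Crick pairs over monotone, pseudoknot-free alignments of two strands. The recurrence is an illustrative simplification, not the paper's t-gap block-isomorphic construction.

    # DP over two strands: e tracks alignments whose last position is bonded.
    NEG = float("-inf")
    COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def max_stacked_pairs(s, t):
        n, m = len(s), len(t)
        dp = [[0] * (m + 1) for _ in range(n + 1)]    # best count so far
        e = [[NEG] * (m + 1) for _ in range(n + 1)]   # best count, (i, j) bonded
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                if COMP[s[i - 1]] == t[j - 1]:
                    # extend a bonded run (one more stacked pair) or start a
                    # new run after a gap (no stacked pair gained yet)
                    e[i][j] = max(e[i - 1][j - 1] + 1, dp[i - 1][j - 1])
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1],
                               e[i][j] if e[i][j] > NEG else 0)
        return dp[n][m]

    print(max_stacked_pairs("ACGTT", "TGCAA"))   # a 5-pair run -> 4 stacked pairs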

  10. Revealing the preferred interlayer orientations and stackings of two-dimensional bilayer gallium selenide crystals.

    PubMed

    Li, Xufan; Basile, Leonardo; Yoon, Mina; Ma, Cheng; Puretzky, Alexander A; Lee, Jaekwang; Idrobo, Juan C; Chi, Miaofang; Rouleau, Christopher M; Geohegan, David B; Xiao, Kai

    2015-02-23

    Characterizing and controlling the interlayer orientations and stacking orders of two-dimensional (2D) bilayer crystals and van der Waals (vdW) heterostructures is crucial to optimize their electrical and optoelectronic properties. The four polymorphs of layered gallium selenide (GaSe) crystals that result from different layer stackings provide an ideal platform to study the stacking configurations in 2D bilayer crystals. Through a controllable vapor-phase deposition method, bilayer GaSe crystals were selectively grown and their two preferred 0° or 60° interlayer rotations were investigated. The commensurate stacking configurations (AA' and AB stacking) in as-grown bilayer GaSe crystals are clearly observed at the atomic scale, and the Ga-terminated edge structure was identified using scanning transmission electron microscopy. Theoretical analysis reveals that the energies of the interlayer coupling are responsible for the preferred orientations among the bilayer GaSe crystals. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. A Review of RedOx Cycling of Solid Oxide Fuel Cells Anode

    PubMed Central

    Faes, Antonin; Hessler-Wyser, Aïcha; Zryd, Amédée; Van Herle, Jan

    2012-01-01

    Solid oxide fuel cells are able to convert fuels, including hydrocarbons, to electricity with an unbeatable efficiency even for small systems. One of the main limitations for long-term utilization is the reduction-oxidation cycling (RedOx cycling) of the nickel-based anodes. This paper first reviews the effects and parameters influencing RedOx cycles of the Ni-ceramic anode. Second, solutions for RedOx instability in the patent and open scientific literature are reviewed; the solutions are described from the points of view of the system, stack design, cell design, new materials, and microstructure optimization. Finally, a brief synthesis on RedOx cycling of Ni-based anode supports with standard and optimized microstructures is presented. PMID:24958298

  12. Design optimization of beta- and photovoltaic conversion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wichner, R.; Blum, A.; Fischer-Colbrie, E.

    1976-01-08

    This report presents the theoretical and experimental results of an LLL Electronics Engineering research program aimed at optimizing the design and electronic-material parameters of beta- and photovoltaic p-n junction conversion devices. To meet this objective, a comprehensive computer code has been developed that can handle a broad range of practical conditions. The physical model upon which the code is based is described first. Then, an example is given of a set of optimization calculations along with the resulting optimized efficiencies for silicon (Si) and gallium-arsenide (GaAs) devices. The model we have developed, however, is not limited to these materials. It can handle any appropriate material, single-crystal or polycrystalline, provided energy absorption and electron-transport data are available. To check code validity, the performance of experimental silicon p-n junction devices (produced in-house) was measured under various light intensities and spectra as well as under tritium beta irradiation. The results of these tests were then compared with predicted results based on the known or best-estimated device parameters. The comparison showed very good agreement between the calculated and the measured results.

  13. Iterative channel decoding of FEC-based multiple-description codes.

    PubMed

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes: information from correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next decoding iteration. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder; the interleaver does not affect the performance of noniterative decoding but greatly enhances performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
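
    For the interleaving idea, a minimal block interleaver is sketched below: symbols are written row-by-row and read column-by-column, so a burst of channel errors is dispersed across codewords after deinterleaving. The dimensions are assumptions; the paper's intradescription interleaver is specified differently.

    # Row-in, column-out block interleaver and its inverse.
    import numpy as np

    def interleave(symbols, rows, cols):
        # write row-by-row, read column-by-column
        return np.asarray(symbols).reshape(rows, cols).T.ravel()

    def deinterleave(symbols, rows, cols):
        return np.asarray(symbols).reshape(cols, rows).T.ravel()

    data = np.arange(12)
    tx = interleave(data, 3, 4)
    assert (deinterleave(tx, 3, 4) == data).all()
    print(tx)   # a burst in tx is spread across rows after deinterleaving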

  14. Next-generation acceleration and code optimization for light transport in turbid media using GPUs

    PubMed Central

    Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar

    2010-01-01

    A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA: the Fermi GPU. In biomedical optics, the MC method is the gold standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for photodynamic therapy (PDT), is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498

  15. Mapping the Stacks: Sustainability and User Experience of Animated Maps in Library Discovery Interfaces

    ERIC Educational Resources Information Center

    McMillin, Bill; Gibson, Sally; MacDonald, Jean

    2016-01-01

    Animated maps of the library stacks were integrated into the catalog interface at Pratt Institute and into the EBSCO Discovery Service interface at Illinois State University. The mapping feature was developed for optimal automation of the update process to enable a range of library personnel to update maps and call-number ranges. The development…

  16. Base Flipping in V(D)J Recombination: Insights into the Mechanism of Hairpin Formation, the 12/23 Rule, and the Coordination of Double-Strand Breaks

    PubMed Central

    Bischerour, Julien; Lu, Catherine; Roth, David B.; Chalmers, Ronald

    2009-01-01

    Tn5 transposase cleaves the transposon end using a hairpin intermediate on the transposon end. This involves a flipped base that is stacked against a tryptophan residue in the protein. However, many other members of the cut-and-paste transposase family, including the RAG1 protein, produce a hairpin on the flanking DNA. We have investigated the reversed polarity of the reaction for RAG recombination. Although the RAG proteins appear to employ a base-flipping mechanism using aromatic residues, the putatively flipped base is not at the expected location and does not appear to stack against any of the said aromatic residues. We propose an alternative model in which a flipped base is accommodated in a nonspecific pocket or cleft within the recombinase. This is consistent with the location of the flipped base at position −1 in the coding flank, which can be occupied by purine or pyrimidine bases that would be difficult to stabilize using a single, highly specific, interaction. Finally, during this work we noticed that the putative base-flipping events on either side of the 12/23 recombination signal sequence paired complex are coupled to the nicking steps and serve to coordinate the double-strand breaks on either side of the complex. PMID:19720743

  17. Effects of fuel processing methods on industrial scale biogas-fuelled solid oxide fuel cell system for operating in wastewater treatment plants

    NASA Astrophysics Data System (ADS)

    Farhad, Siamak; Yoo, Yeong; Hamdullahpur, Feridun

    The performance of three solid oxide fuel cell (SOFC) systems, fuelled by biogas produced through the anaerobic digestion (AD) process, for heat and electricity generation in wastewater treatment plants (WWTPs) is studied. Each system uses a different fuel processing method to prevent carbon deposition on the anode catalyst under biogas fuelling: anode gas recirculation (AGR), steam reforming (SR), and partial oxidation (POX) are employed in systems I-III, respectively. The planar SOFC stack used in these systems is based on anode-supported cells with a Ni-YSZ anode, YSZ electrolyte, and YSZ-LSM cathode, operated at 800 °C. A computer code has been developed to simulate the planar SOFC at the cell, stack, and system levels and is applied to predict the performance of the SOFC systems. The key operational parameters affecting the performance of the SOFC systems are identified, and their effects on the electrical and CHP efficiencies, the generated electricity and heat, the total exergy destruction, and the number of cells in the SOFC stack are studied. The results show that, among the SOFC systems investigated in this study, the AGR and SR fuel processor-based systems, with electrical efficiencies of 45.1% and 43%, respectively, are suitable for application in WWTPs. If the entire biogas produced in a WWTP is used in the AGR or SR fuel processor-based SOFC system, the electricity and heat required to operate the WWTP can be completely self-supplied, and the extra electricity generated can be sold to the electrical grid.

  18. Entropy-Based Bounds On Redundancies Of Huffman Codes

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J.

    1992-01-01

    Report presents extension of theory of redundancy of binary prefix codes of Huffman type, including derivation of a variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes are often closer to 0 than to 1.
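
    The redundancy being bounded is easy to compute directly: build a Huffman code and compare its average length L with the source entropy H. The sketch below does this for an assumed four-symbol source.

    # Huffman code lengths via a heap; redundancy = L - H.
    import heapq
    import itertools
    from math import log2

    def huffman_lengths(probs):
        counter = itertools.count()                      # tie-breaker for the heap
        heap = [(p, next(counter), sym) for sym, p in enumerate(probs)]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, left = heapq.heappop(heap)
            p2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (p1 + p2, next(counter), (left, right)))
        lengths = {}
        def walk(node, depth):
            if isinstance(node, tuple):                  # internal node
                walk(node[0], depth + 1)
                walk(node[1], depth + 1)
            else:                                        # leaf: a source symbol
                lengths[node] = max(depth, 1)
        walk(heap[0][2], 0)
        return lengths

    probs = [0.5, 0.25, 0.15, 0.1]                       # assumed source
    lengths = huffman_lengths(probs)
    L = sum(p * lengths[s] for s, p in enumerate(probs))
    H = -sum(p * log2(p) for p in probs)
    print(f"L = {L:.3f}, H = {H:.3f}, redundancy = {L - H:.3f}")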

  19. Investigation of sulfonated polysulfone membranes as electrolyte in a passive-mode direct methanol fuel cell mini-stack

    NASA Astrophysics Data System (ADS)

    Lufrano, F.; Baglio, V.; Staiti, P.; Stassi, A.; Aricò, A. S.; Antonucci, V.

    This paper reports on the development of polymer electrolyte membranes (PEMs) based on sulfonated polysulfone for application in a DMFC mini-stack operating at room temperature in passive mode. The sulfonated polysulfone (SPSf) with two degrees of sulfonation (57 and 66%) was synthesized by a well-known sulfonation process. SPSf membranes with different thicknesses were prepared and investigated. These membranes were characterized in terms of methanol/water uptake, proton conductivity, and fuel cell performance in a DMFC single cell and mini-stack operating at room temperature. The study addressed (a) control of the synthesis of sulfonated polysulfone, (b) optimization of the assembling procedure, (c) a short lifetime investigation and (d) a comparison of DMFC performance in active-mode operation vs. passive-mode operation. The best passive DMFC performance was 220 mW (average cell power density of about 19 mW cm^-2), obtained with a thin SPSf membrane (70 μm) at room temperature, whereas the performance of the same membrane-based DMFC in active mode was 38 mW cm^-2. The conductivity of this membrane, SPSf (IEC = 1.34 mequiv g^-1), was 2.8 × 10^-2 S cm^-1. A preliminary short-term test (200 min) showed good stability during chrono-amperometry measurements.

  20. Displacement current phenomena in the magnetically insulated transmission lines of the refurbished Z accelerator

    NASA Astrophysics Data System (ADS)

    McBride, R. D.; Jennings, C. A.; Vesey, R. A.; Rochau, G. A.; Savage, M. E.; Stygar, W. A.; Cuneo, M. E.; Sinars, D. B.; Jones, M.; Lechien, K. R.; Lopez, M. R.; Moore, J. K.; Struve, K. W.; Wagoner, T. C.; Waisman, E. M.

    2010-12-01

    Experimental data is presented that illustrates important displacement current phenomena in the magnetically insulated transmission lines (MITLs) of the refurbished Z accelerator [D. V. Rose et al., Phys. Rev. ST Accel. Beams 13, 010402 (2010)]. Specifically, we show how displacement current in the MITLs causes significant differences between the accelerator current measured at the vacuum-insulator stack (at a radial position of about 1.6 m from the Z axis of symmetry) and the accelerator current measured at the load (at a radial position of about 6 cm from the Z axis of symmetry). The importance of accounting for these differences was first emphasized by Jennings et al. [C. A. Jennings et al., IEEE Trans. Plasma Sci. 38, 529 (2010)], who calculated them using a full transmission-line-equivalent model of the four-level MITL system. However, in the data presented by Jennings et al., many of the interesting displacement current phenomena were obscured by parasitic current losses that occurred between the vacuum-insulator stack and the load (e.g., electron flow across the anode-cathode gap). By contrast, the data presented herein contain very little parasitic current loss, and thus for these low-loss experiments we are able to demonstrate that the differences between the current measured at the stack and the current measured at the load are due primarily to the displacement current that results from the shunt capacitance of the MITLs (about 8.41 nF total). Demonstrating this is important because displacement current is an energy storage mechanism, where energy is stored in the MITL electric fields and can later be used by the system. Thus, even for higher-loss experiments, the differences between the current measured at the stack and the current measured at the load are often largely due to energy storage and subsequent release, as opposed to being due solely to some combination of measurement error and current loss in the MITLs and/or double post-hole convolute. Displacement current also explains why the current measured downstream of the MITLs (i.e., the load current) often exceeds the current measured upstream of the MITLs (i.e., the stack current) at various times in the power pulse (this particular phenomenon was initially thought to be due to timing and/or calibration errors). To facilitate a better understanding of these phenomena, we also introduce and analyze a simple LC circuit model of the MITLs. This model is easily implemented as a simple drive circuit in simulation codes, which has now been done for the LASNEX code [G. B. Zimmerman and W. L. Kruer, Comments Plasma Phys. Controlled Fusion 2, 51 (1975)] at Sandia, as well as for simpler MATLAB®-based codes at Sandia. An example of this LC model used as a drive circuit will also be presented.
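
    The lumped-C bookkeeping is simple to reproduce: with the quoted shunt capacitance of about 8.41 nF, the load current differs from the stack current by the displacement current C dV/dt, and on the falling edge of the voltage pulse the stored energy is returned, letting the load current exceed the stack current. The waveforms below are synthetic stand-ins, not Z data.

    # Displacement-current correction with a lumped shunt capacitance.
    import numpy as np

    t = np.linspace(0.0, 200e-9, 2001)                     # s
    V = 2.5e6 * np.exp(-((t - 100e-9) / 40e-9) ** 2)       # toy MITL voltage pulse
    I_stack = 20e6 * np.exp(-((t - 110e-9) / 45e-9) ** 2)  # toy stack current

    C = 8.41e-9                                            # quoted shunt capacitance
    I_disp = C * np.gradient(V, t)                         # displacement current
    I_load = I_stack - I_disp                              # lumped-C estimate

    # While dV/dt < 0 (falling edge) the capacitance returns stored energy,
    # so I_load can exceed I_stack, as the measurements show.
    print(f"max(I_load - I_stack) = {np.max(I_load - I_stack) / 1e6:.2f} MA")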

  1. Preliminary Development of an Object-Oriented Optimization Tool

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2011-01-01

    The National Aeronautics and Space Administration Dryden Flight Research Center has developed a FORTRAN-based object-oriented optimization (O3) tool that leverages existing tools and practices and allows easy integration and adoption of new state-of-the-art software. The object-oriented framework can integrate the analysis codes for multiple disciplines, as opposed to relying on one code to perform analysis for all disciplines. Optimization can thus take place within each discipline module, or in a loop between the central executive module and the discipline modules, or both. Six sample optimization problems are presented. The first four sample problems are based on simple mathematical equations; the fifth and sixth problems consider a three-bar truss, which is a classical example in structural synthesis. Instructions for preparing input data for the O3 tool are presented.

  2. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali

    1997-01-01

    Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that the bit error rate (BER) performance of BHD is close to that of Turbo-codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes an iterative version of BHD possible.
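
    The flavor of stack-based sequential decoding is captured by the sketch below: a best-first search over code-tree paths for the rate-1/2, constraint-length-3 code with generators (7,5) octal. It uses a plain Hamming path metric and an unbounded heap, whereas the MSA bounds the stack and spawns secondary stacks; both simplifications are deliberate.

    # Best-first "stack" decoding of a toy convolutional code.
    import heapq

    G = [(1, 1, 1), (1, 0, 1)]          # taps for the two output bits

    def encode(bits):
        state, out = (0, 0), []
        for b in bits:
            reg = (b,) + state
            out += [sum(r * g for r, g in zip(reg, gen)) % 2 for gen in G]
            state = reg[:2]
        return out

    def stack_decode(received, n_bits):
        # entries: (metric, input bits so far, state); smaller metric = better
        heap = [(0, (), (0, 0))]
        while heap:
            metric, bits, state = heapq.heappop(heap)
            if len(bits) == n_bits:
                return bits
            for b in (0, 1):
                reg = (b,) + state
                sym = [sum(r * g for r, g in zip(reg, gen)) % 2 for gen in G]
                rx = received[2 * len(bits): 2 * len(bits) + 2]
                dist = sum(s != r for s, r in zip(sym, rx))
                heapq.heappush(heap, (metric + dist, bits + (b,), reg[:2]))

    msg = [1, 0, 1, 1, 0]
    rx = encode(msg)
    rx[3] ^= 1                           # flip one channel bit
    print(stack_decode(rx, len(msg)))    # recovers (1, 0, 1, 1, 0)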

  3. Water cycle algorithm: A detailed standard code

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Eskandar, Hadi; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    Inspired by observation of the water cycle and of the movement of rivers and streams toward the sea, a population-based metaheuristic algorithm, the water cycle algorithm (WCA), has recently been proposed. An increasing number of WCA applications have since appeared, and the WCA has been utilized in different optimization fields. This paper provides detailed open-source code for the WCA, whose performance and efficiency have been demonstrated on optimization problems. The WCA has an interesting and simple concept, and this paper uses its source code to provide a step-by-step explanation of the process it follows.
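
    For readers without access to the published source, a compact WCA-style loop is sketched below for minimizing the sphere function. The population split, the constant C = 2, and the shrinking evaporation radius follow common defaults in the WCA literature, but the details here are assumptions rather than the paper's standard code.

    # WCA-style loop: sea/rivers/streams, flow updates, evaporation and raining.
    import numpy as np

    rng = np.random.default_rng(3)
    f = lambda x: np.sum(x ** 2, axis=-1)      # objective (sphere function)

    n_pop, n_sr, dim, d_max, C = 30, 4, 5, 1e-3, 2.0
    pop = rng.uniform(-10, 10, size=(n_pop, dim))

    for _ in range(300):
        pop = pop[np.argsort(f(pop))]          # sea = pop[0], rivers = pop[1:n_sr]
        for i in range(n_sr, n_pop):           # streams flow to their river/sea
            guide = pop[(i - n_sr) % n_sr]
            pop[i] += rng.random() * C * (guide - pop[i])
        for i in range(1, n_sr):               # rivers flow to the sea
            pop[i] += rng.random() * C * (pop[0] - pop[i])
        for i in range(n_sr, n_pop):           # evaporation + raining: reseed
            if np.linalg.norm(pop[0] - pop[i]) < d_max:
                pop[i] = rng.uniform(-10, 10, size=dim)
        d_max *= 0.99                          # shrink the evaporation radius

    print("best found:", f(pop[np.argmin(f(pop))]))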

  4. Revisiting the Ceara Rise, equatorial Atlantic Ocean: isotope stratigraphy of ODP Leg 154

    NASA Astrophysics Data System (ADS)

    Wilkens, Roy; Drury, Anna Joy; Westerhold, Thomas; Lyle, Mitchell; Gorgas, Thomas; Tian, Jun

    2017-04-01

    Isotope stratigraphy has become the method of choice for investigating both past ocean temperatures and global ice volume. Lisiecki and Raymo (2005) published a stacked record of 57 globally distributed benthic δ18O records versus age (the LR04 stack). In this study LR04 is compared to high-resolution records collected at all of the sites drilled during Ocean Drilling Program (ODP) Leg 154 on the Ceara Rise, in the western equatorial Atlantic Ocean. Newly developed software - the Code for Ocean Drilling Data (CODD) - is used to check the data splices of the Ceara Rise sites and better align out-of-splice data with in-splice data. CODD allows core images recovered from core-table photos to be depth- and age-scaled, greatly facilitating data analysis. The entire splices of ODP Sites 925, 926, 927, 928 and 929 were reviewed. Most changes were minor, although several were large enough to affect age models based on orbital tuning. We revised the astronomically tuned age model for the Ceara Rise by tuning darker, more clay-rich layers to Northern Hemisphere insolation minima. We then assembled a regional composite benthic stable isotope record from published data. This new Ceara Rise stack provides a regional reference section for the equatorial Atlantic covering the last 5 million years, with an age model independent of the nonlinear ice-volume models underlying the LR04 stack. Comparison shows that the benthic δ18O composite is consistent with the LR04 stack from 0-4 Ma, except for a short interval between 1.80 and 1.90 Ma where LR04 exhibits two maxima but the Ceara Rise record contains only one. The interval between 4.0 and 4.5 Ma in the Ceara Rise compilation is decidedly different from LR04, reflecting both the low amplitude of the signal over this interval and the limited amount of data available for the LR04 stack. Our results also indicate that precession cycles have been misinterpreted as obliquity in the LR04 stack, as suggested by the Ceara Rise composite at 4.2 Ma.

  5. Optimal Sizing Tool for Battery Storage in Grid Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-24

    The battery storage sizing tool developed at Pacific Northwest National Laboratory can be used to evaluate economic performance and determine the optimal size of battery storage in different use cases, considering multiple power system applications. The considered use cases include i) utility-owned battery storage and ii) battery storage behind the customer meter. The power system applications of energy storage include energy arbitrage, balancing services, T&D deferral, outage mitigation, demand charge reduction, etc. Most existing solutions consider only one or two grid services simultaneously, such as balancing service and energy arbitrage. ES-Select, developed by Sandia and KEMA, is able to consider multiple grid services, but it stacks the grid services based on priorities instead of co-optimizing them. This tool is the first to provide a co-optimization for system-level and local grid services.
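    To make the stacking-versus-co-optimization distinction concrete, here is a toy linear program (the prices and ratings are illustrative, not the PNNL tool's data) in which arbitrage and a balancing-capacity service jointly compete for the battery's power rating; solving them together finds the revenue-maximizing split per hour instead of giving one service fixed priority.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    T = 4                                         # toy horizon, hours
    price = np.array([20.0, 50.0, 30.0, 60.0])    # energy price [$/MWh] (assumed)
    res_price = 10.0                              # balancing capacity price [$/MW-h] (assumed)
    P, E, soc0 = 1.0, 2.0, 1.0                    # power rating, energy capacity, initial SoC

    # decision vector x = [charge(T), discharge(T), reserve(T)]
    c = np.concatenate([price, -price, -res_price * np.ones(T)])   # minimize -> negate revenue
    L = np.tril(np.ones((T, T)))                  # cumulative-sum operator for SoC
    Z, I = np.zeros((T, T)), np.eye(T)
    A_ub = np.block([[-L,  L, Z],                 # SoC >= 0 at every hour
                     [ L, -L, Z],                 # SoC <= E at every hour
                     [ Z,  I, I]])                # discharge + reserve <= power rating
    b_ub = np.concatenate([soc0 * np.ones(T), (E - soc0) * np.ones(T), P * np.ones(T)])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, P)] * (3 * T))
    print(f"co-optimized revenue: ${-res.fun:.0f}")
    ```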

  6. Optimal Terminal Descent Guidance Logic to Achieve a Soft Lunar Touchdown

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.

    2011-01-01

    Altair Lunar Lander is the linchpin in the Constellation Program for human return to the Moon. In the 2010 design reference mission, Altair is to be delivered to low Earth orbit (LEO) by the Ares V heavy-lift launch vehicle; after subsequent docking with Orion in LEO, the Altair/Orion stack is sent through trans-lunar injection (TLI). The Altair/Orion stack separates from the Ares V Earth departure stage shortly after TLI and continues the flight to the Moon as a single stack. Fig. 1 depicts one version of the Altair lunar lander.

  7. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    NASA Astrophysics Data System (ADS)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties, between intended and interfering subscribers, significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). The performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques, through analytical and simulation analysis, by referring to bit error rate (BER), signal-to-noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code, when used with the SDD technique, provides high transmission capacity, reduces receiver complexity, and performs better than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both up- and downlink transmission.

  8. Geometric Patterns for Neighboring Bases Near the Stacked State in Nucleic Acid Strands.

    PubMed

    Sedova, Ada; Banavali, Nilesh K

    2017-03-14

    Structural variation in base stacking has been analyzed frequently in isolated double helical contexts for nucleic acids, but not as often in nonhelical geometries or in complex biomolecular environments. In this study, conformations of two neighboring bases near their stacked state in any environment are comprehensively characterized for single-strand dinucleotide (SSD) nucleic acid crystal structure conformations. An ensemble clustering method is used to identify a reduced set of representative stacking geometries based on pairwise distances between select atoms in consecutive bases, with multiple separable conformational clusters obtained for categories divided by nucleic acid type (DNA/RNA), SSD sequence, stacking face orientation, and the presence or absence of a protein environment. For both DNA and RNA, SSD conformations are observed that are either close to the A-form, or close to the B-form, or intermediate between the two forms, or further away from either form, illustrating the local structural heterogeneity near the stacked state. Among this large variety of distinct conformations, several common stacking patterns are observed between DNA and RNA, and between nucleic acids in isolation or in complex with proteins, suggesting that these might be stable stacking orientations. Noncanonical face/face orientations of the two bases are also observed for neighboring bases in the same strand, but their frequency is much lower, with multiple SSD sequences across categories showing no occurrences of such unusual stacked conformations. The resulting reduced set of stacking geometries is directly useful for stacking-energy comparisons between empirical force fields, prediction of plausible localized variations in single-strand structures near their canonical states, and identification of analogous stacking patterns in newly solved nucleic acid containing structures.

  9. Optimizing Distribution of Pandemic Influenza Antiviral Drugs

    PubMed Central

    Huang, Hsin-Chan; Morton, David P.; Johnson, Gregory P.; Gutfraind, Alexander; Galvani, Alison P.; Clements, Bruce; Meyers, Lauren A.

    2015-01-01

    We provide a data-driven method for optimizing pharmacy-based distribution of antiviral drugs during an influenza pandemic in terms of overall access for a target population and apply it to the state of Texas, USA. We found that during the 2009 influenza pandemic, the Texas Department of State Health Services achieved an estimated statewide access of 88% (proportion of population willing to travel to the nearest dispensing point). However, access reached only 34.5% of US postal code (ZIP code) areas containing <1,000 underinsured persons. Optimized distribution networks increased expected access to 91% overall and 60% in hard-to-reach regions, and 2 or 3 major pharmacy chains achieved near maximal coverage in well-populated areas. Independent pharmacies were essential for reaching ZIP code areas containing <1,000 underinsured persons. This model was developed during a collaboration between academic researchers and public health officials and is available as a decision support tool for Texas Department of State Health Services at a Web-based interface. PMID:25625858
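    The published model is a data-driven facility-location optimization solved at state scale; as a flavor of the underlying problem, here is a hypothetical greedy maximum-coverage sketch (random coordinates and populations, a fixed willingness-to-travel radius) for picking dispensing sites.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_zip, n_sites, k, radius = 200, 40, 8, 0.15   # toy problem sizes (assumed)
    zips = rng.random((n_zip, 2))                  # ZIP-area centroids
    pop = rng.integers(100, 5000, n_zip)           # population per ZIP area
    sites = rng.random((n_sites, 2))               # candidate pharmacy locations

    dist = np.linalg.norm(zips[:, None] - sites[None], axis=2)
    covers = dist <= radius                        # who is willing to travel to which site

    chosen, covered = [], np.zeros(n_zip, dtype=bool)
    for _ in range(k):                             # greedily add the site with most new coverage
        gain, j = max((pop[~covered & covers[:, s]].sum(), s)
                      for s in range(n_sites) if s not in chosen)
        chosen.append(j)
        covered |= covers[:, j]

    print(f"estimated access: {pop[covered].sum() / pop.sum():.1%} with {k} sites")
    ```

    The paper's actual formulation additionally distinguishes underinsured populations and chain versus independent pharmacies, which is what drives the hard-to-reach-region results quoted above.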

  10. Development of an MRI-compatible digital SiPM detector stack for simultaneous PET/MRI.

    PubMed

    Düppenbecker, Peter M; Weissler, Bjoern; Gebhardt, Pierre; Schug, David; Wehner, Jakob; Marsden, Paul K; Schulz, Volkmar

    2016-02-01

    Advances in solid-state photon detectors paved the way to combining positron emission tomography (PET) and magnetic resonance imaging (MRI) into highly integrated, truly simultaneous, hybrid imaging systems. Based on the most recent digital SiPM technology, we developed an MRI-compatible PET detector stack, intended as a building block for next-generation simultaneous PET/MRI systems. Our detector stack comprises an array of 8 × 8 digital SiPM channels with 4 mm pitch using Philips Digital Photon Counting DPC 3200-22 devices, an FPGA for data acquisition, a supply voltage control system and a cooling infrastructure. This is the first detector design that allows the operation of digital SiPMs simultaneously inside an MRI system. We tested and optimized the MRI compatibility of our detector stack on a laboratory test bench as well as in combination with a Philips Achieva 3 T MRI system. Our design clearly reduces distortions of the static magnetic field compared to a conventional design. The MRI static magnetic field causes weak and directional drift effects on voltage regulators, but has no direct impact on detector performance. MRI gradient switching initially degraded energy and timing resolution. Both distortions could be ascribed to voltage variations induced on the bias and the FPGA core voltage supply, respectively. Based on these findings, we improved our detector design, and our final design shows virtually no energy or timing degradation, even during heavy and continuous MRI gradient switching. In particular, we found no evidence that the performance of the DPC 3200-22 digital SiPM itself is degraded by the MRI system.

  11. A New Wavelength Optimization and Energy-Saving Scheme Based on Network Coding in Software-Defined WDM-PON Networks

    NASA Astrophysics Data System (ADS)

    Ren, Danping; Wu, Shanshan; Zhang, Lijing

    2016-09-01

    In view of the global control and flexible monitoring capabilities of software-defined networks (SDN), we propose a new optical access network architecture dedicated to Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) systems based on SDN. Network coding (NC) technology is also applied in this architecture to enhance the utilization of wavelength resources and reduce light source costs. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce the system time delay and energy consumption.

  12. Development and testing of a contamination potential mapping system for a portion of the General Separations Area, Savannah River Site, South Carolina

    USGS Publications Warehouse

    Rine, J.M.; Berg, R.C.; Shafer, J.M.; Covington, E.R.; Reed, J.K.; Bennett, C.B.; Trudnak, J.E.

    1998-01-01

    A methodology was developed to evaluate and map the contamination potential or aquifer sensitivity of the upper groundwater flow system of a portion of the General Separations Area (GSA) at the Department of Energy's Savannah River Site (SRS) in South Carolina. A Geographic Information System (GIS) was used to integrate diverse subsurface geologic data, soils data, and hydrology utilizing a stack-unit mapping approach to construct mapping layers. This is the first time that such an approach has been used to delineate the hydrogeology of a coastal plain environment. Unit surface elevation maps were constructed for the tops of six Tertiary units derived from over 200 boring logs. Thickness or isopach maps were created for five hydrogeologic units by differencing top and basal surface elevations. The geologic stack-unit map was created by stacking the five isopach maps and adding codes for each stack-unit polygon. Stacked-units were rated according to their hydrogeologic properties and ranked using a logarithmic approach (utility theory) to establish a contamination potential index. Colors were assigned to help display relative importance of stacked-units in preventing or promoting transport of contaminants. The sensitivity assessment included the effects of surface soils on contaminants which are particularly important for evaluating potential effects from surface spills. Hydrogeologic/hydrologic factors did not exhibit sufficient spatial variation to warrant incorporation into contamination potential assessment. Development of this contamination potential mapping system provides a useful tool for site planners, environmental scientists, and regulatory agencies.

  13. Development of a lithium fluoride zinc sulfide based neutron multiplicity counter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cowles, Christian; Behling, Spencer; Baldez, Phoenix

    Here, the feasibility of a full-scale lithium fluoride zinc sulfide (LiF/ZnS) based neutron multiplicity counter has been demonstrated. The counter was constructed of modular neutron detecting stacks that each contain five sheets of LiF/ZnS interleaved between six sheets of wavelength shifting plastic with a photomultiplier tube on each end. Twelve such detector stacks were placed around a sample chamber in a square arrangement with lithiated high-density polyethylene blocks in the corners to reflect high-energy neutrons and capture low-energy neutrons. The final system design was optimized via modeling and small-scale test. Measuring neutrons from a 252Cf source, the counter achieved a 36% neutron detection efficiency (ε) and an 11.7 μs neutron die-away time (τ) for a doubles figure-of-merit (ε²/τ) of 109. This is the highest doubles figure-of-merit measured to date for a 3He-free neutron multiplicity counter.

  14. Layout designs of surface barrier coatings for boosting the capability of oxygen/vapor obstruction utilized in flexible electronics

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Chun; Huang, Pei-Chen; He, Jing-Yan

    2018-04-01

    Organic light-emitting diode-based flexible and rollable displays have become a promising candidate for next-generation flexible electronics. For this reason, the design of surface multi-layered barriers should be optimized to enhance the long-term mechanical reliability of a flexible encapsulation that prevents the penetration of oxygen and vapor. In this study, finite element-based stress simulation was proposed to estimate the mechanical reliability of a gas/vapor barrier design with a low-k/silicon nitride (low-k/SiNx) stacking architecture. Consequently, stress-induced failure of critical thin films within the flexible display under various bending conditions must be considered. The feasibility of a one-pair SiO2/SiNx barrier design, which avoids the complex lamination process, and the critical bending radius, which decreased to 1.22 mm, were also examined. In addition, the distance between the neutral axis and the layer of concern dominated the induced stress magnitude, rather than the stress-compliance mechanism provided by the stacked low-k films.

  15. Development of a lithium fluoride zinc sulfide based neutron multiplicity counter

    NASA Astrophysics Data System (ADS)

    Cowles, Christian; Behling, Spencer; Baldez, Phoenix; Folsom, Micah; Kouzes, Richard; Kukharev, Vladislav; Lintereur, Azaree; Robinson, Sean; Siciliano, Edward; Stave, Sean; Valdez, Patrick

    2018-04-01

    The feasibility of a full-scale lithium fluoride zinc sulfide (LiF/ZnS) based neutron multiplicity counter has been demonstrated. The counter was constructed of modular neutron detecting stacks that each contain five sheets of LiF/ZnS interleaved between six sheets of wavelength shifting plastic with a photomultiplier tube on each end. Twelve such detector stacks were placed around a sample chamber in a square arrangement with lithiated high-density polyethylene blocks in the corners to reflect high-energy neutrons and capture low-energy neutrons. The final system design was optimized via modeling and small-scale test. Measuring neutrons from a 252Cf source, the counter achieved a 36% neutron detection efficiency (ε) and an 11.7 μs neutron die-away time (τ) for a doubles figure-of-merit (ε²/τ) of 109. This is the highest doubles figure-of-merit measured to date for a 3He-free neutron multiplicity counter.
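    As a quick consistency check on the quoted figure of merit (assuming, as is conventional for this metric, ε expressed in percent and τ in microseconds): ε²/τ = 36²/11.7 ≈ 111, in agreement with the reported 109 once unrounded values of ε and τ are used.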

  16. Development of a lithium fluoride zinc sulfide based neutron multiplicity counter

    DOE PAGES

    Cowles, Christian; Behling, Spencer; Baldez, Phoenix; ...

    2018-01-12

    Here, the feasibility of a full-scale lithium fluoride zinc sulfide (LiF/ZnS) based neutron multiplicity counter has been demonstrated. The counter was constructed of modular neutron detecting stacks that each contain five sheets of LiF/ZnS interleaved between six sheets of wavelength shifting plastic with a photomultiplier tube on each end. Twelve such detector stacks were placed around a sample chamber in a square arrangement with lithiated high-density polyethylene blocks in the corners to reflect high-energy neutrons and capture low-energy neutrons. The final system design was optimized via modeling and small-scale test. Measuring neutrons from a 252Cf source, the counter achieved a 36% neutron detection efficiency (ε) and an 11.7 μs neutron die-away time (τ) for a doubles figure-of-merit (ε²/τ) of 109. This is the highest doubles figure-of-merit measured to date for a 3He-free neutron multiplicity counter.

  17. Design of a Minimum Surface-Effect Tendon-Based Microactuator for Micromanipulation

    NASA Technical Reports Server (NTRS)

    Goldfarb, Michael; Lipsey, James H.

    1997-01-01

    A piezoelectric (PZT) stack-based actuator was developed to provide a means of actuation with dynamic characteristics appropriate for small-scale manipulation. In particular, the design incorporates a highly nonlinear, large-ratio transmission that provides approximately two orders of magnitude motion amplification from the PZT stack. In addition to motion amplification, the nonlinear transmission was designed via optimization methods to distort the highly non-uniform properties of a piezoelectric actuator so that the achievable actuation force is nearly constant throughout the actuator workspace. The package also includes sensors that independently measure actuator output force and displacement, so that a manipulator structure need not incorporate sensors nor the associated wires. Specifically, the actuator was designed to output a maximum force of at least one Newton through a stroke of at least one millimeter. For purposes of small-scale precision position and/or force control, the actuator/sensor package was designed to eliminate stick-slip friction and backlash. The overall dimensions of the actuator/sensor package are approximately 40 x 65 x 25 mm.

  18. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses, and thus in high table power consumption. To reduce the memory accesses of current methods, and thereby the power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper lies in introducing index-search technology to reduce the memory accesses of table look-up, and thereby the table power consumption. Specifically, our scheme uses index search to reduce memory accesses by cutting down the searching and matching operations for code_word, exploiting the internal relationship among the length of the zero run in code_prefix, the value of code_suffix and code_length, thus saving table look-up power. The experimental results show that our index-search-based table look-up algorithm reduces memory access consumption by about 60% compared with a sequential-search scheme, saving considerable power for CAVLD in H.264/AVC.
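    The tables and bit syntax above are H.264-specific, but the index-search idea generalizes: instead of scanning a table row by row until a codeword matches, the decoder computes an index (here, the leading-zero count of the code prefix) and jumps straight to the entry. A hypothetical toy VLC illustrates the difference:

    ```python
    # Toy VLC: each codeword is z zeros, a terminating '1', then a 1-bit suffix.
    # Real CAVLC tables are richer, but the leading-zero count plays the same
    # indexing role that the paper exploits.
    TABLE = {0: ['A', 'B'], 1: ['C', 'D'], 2: ['E', 'F']}

    def decode_indexed(bits):
        i, out = 0, []
        while i < len(bits):
            z = 0
            while bits[i] == '0':                  # count leading zeros...
                z += 1; i += 1
            i += 1                                 # ...skip the terminating '1'
            out.append(TABLE[z][int(bits[i])])     # ...and index directly: one access
            i += 1
        return out

    print(decode_indexed('0011' + '10' + '01011'))  # -> ['F', 'A', 'C', 'B']
    ```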

  19. Towards fully spray coated organic light emitting devices

    NASA Astrophysics Data System (ADS)

    Gilissen, Koen; Stryckers, Jeroen; Manca, Jean; Deferme, Wim

    2014-10-01

    Pi-conjugated polymer light emitting devices have the potential to be the next generation of solid-state lighting. In order to achieve this goal, a low-cost, efficient and large-area production process is essential. Polymer-based light emitting devices are generally deposited using solution-processing techniques, e.g., spin coating and ink-jet printing. These techniques are not well suited for cost-effective, high-throughput, large-area mass production of these organic devices. Ultrasonic spray deposition, however, is a deposition technique that is fast, efficient and roll-to-roll compatible, and can easily be scaled up for the production of large-area polymer light emitting devices (PLEDs). This deposition technique has already been employed successfully to produce organic photovoltaic (OPV) devices [1]. Recently, the electron blocking layer PEDOT:PSS [2] and the metal top contact [3] have been successfully spray coated as part of the organic photovoltaic device stack. In this study, the effects of ultrasonic spray deposition on polymer light emitting devices are investigated. For the first time, to our knowledge, spray coating of the active layer in a PLED is demonstrated. Different solvents are tested to achieve the best possible sprayable dispersion. The active layer morphology is characterized and optimized to produce uniform films with optimal thickness. Furthermore, these ultrasonically spray coated films are incorporated in the polymer light emitting device stack to investigate the device characteristics and efficiency. Our results show that, after careful optimization of the active layer, ultrasonic spray coating is a prime candidate as a deposition technique for mass production of PLEDs.

  20. 5-D interpolation with wave-front attributes

    NASA Astrophysics Data System (ADS)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

    Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes include structural information on subsurface features like the dip and strike of a reflector. The wave-front attributes work on a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved next to the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and we call the result wave-front-attribute-based 5-D interpolation (5-D WABI), as the two problems mentioned above are addressed. Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals significant advantages for steeply dipping events using the 5-D WABI method when compared to the rank-reduction-based 5-D interpolation technique. Diffraction tails substantially benefit from this improved performance of the partial CRS stacking approach, while the CPU time is comparable to that consumed by the rank-reduction-based method.

  1. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

    A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization-step sizes, a feature not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by parsing only the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.

  2. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Parts of the received codeword and the relevant columns in the parity-check matrix can be punctured to reduce the calculation complexity by adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.

  3. Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2014-05-01

    The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow, while subgrid-scale parameterizations estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation); these have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on Total Energy - Mass Flux (TEMF), which unifies turbulence and moist convection components, produces better results than the other PBL schemes. For that reason, the TEMF scheme was chosen as the PBL scheme to optimize for the Intel Many Integrated Core (MIC) architecture, which ushers in a new era of supercomputing speed, performance, and compatibility, allowing developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations performed were quite generic in nature: they included vectorization of the code to utilize the vector units inside each CPU, and improved memory access achieved by scalarizing some of the intermediate arrays. The results show that the optimizations improved MIC performance by 14.8x. Furthermore, they increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.

  4. Optimum stacking sequence design of laminated composite circular plates with curvilinear fibres by a layer-wise optimization method

    NASA Astrophysics Data System (ADS)

    Guenanou, A.; Houmat, A.

    2018-05-01

    The optimum stacking sequence design for the maximum fundamental frequency of symmetrically laminated composite circular plates with curvilinear fibres is investigated for the first time using a layer-wise optimization method. The design variables are two fibre orientation angles per layer. The fibre paths are constructed using the method of shifted paths. The first-order shear deformation plate theory and a curved square p-element are used to calculate the objective function. The blending function method is used to model accurately the geometry of the circular plate. The equations of motion are derived using Lagrange's method. The numerical results are validated by means of a convergence test and comparison with published values for symmetrically laminated composite circular plates with rectilinear fibres. The material parameters, boundary conditions, number of layers and thickness are shown to influence the optimum solutions to different extents. The results should serve as a benchmark for optimum stacking sequences of symmetrically laminated composite circular plates with curvilinear fibres.
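    The layer-wise strategy itself is simple to state: sweep over the layers, and for each one exhaustively optimize its two fibre-orientation angles while all other layers are held fixed. The sketch below shows only that loop structure; the stand-in objective is illustrative, whereas the paper's objective is the fundamental frequency from the p-element finite element model.

    ```python
    import itertools

    def layerwise_optimize(freq, n_layers, angles=range(-90, 91, 15), sweeps=3):
        theta = [(0, 0)] * n_layers                 # (T0, T1) fibre angles per layer
        for _ in range(sweeps):
            for k in range(n_layers):               # optimize one layer at a time
                theta[k] = max(itertools.product(angles, angles),
                               key=lambda t: freq(theta[:k] + [t] + theta[k + 1:]))
        return theta

    def toy_freq(theta):                            # stand-in for the FE eigen-solver
        return -sum((a - 30) ** 2 + (b + 45) ** 2 for a, b in theta)

    print(layerwise_optimize(toy_freq, n_layers=4))  # -> [(30, -45)] * 4
    ```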

  5. Legacy Code Modernization

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization,3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.

  6. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    PubMed

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but how it evolved towards its current form has not been clearly determined. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal that code is. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum to be easily determined, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the nature of the fitness landscape considered in the error minimization theory does not explain why the canonical code ended its evolution in a location that is not a localized deep minimum of the huge fitness landscape.
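    Fitness sharing itself is a small modification to any GA: each individual's raw fitness is divided by a niche count that grows when many similar individuals crowd the same region, which keeps the population spread across multiple basins and is what lets the authors probe the landscape's structure. A generic sketch follows, with Euclidean distance as a placeholder; for hypothetical genetic codes, the distance would be defined over codon assignments.

    ```python
    import numpy as np

    def shared_fitness(pop, fitness, sigma=0.1, alpha=1.0):
        """Divide raw fitness by a niche count (triangular sharing kernel)."""
        d = np.linalg.norm(pop[:, None] - pop[None], axis=2)       # pairwise distances
        sh = np.where(d < sigma, 1.0 - (d / sigma) ** alpha, 0.0)  # sh(0) = 1: self counts
        return fitness / sh.sum(axis=1)

    rng = np.random.default_rng(0)
    pop = rng.random((20, 3))                  # 20 genotypes in a toy 3-D space
    raw = rng.random(20)
    print(shared_fitness(pop, raw))            # crowded individuals are penalized
    ```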

  7. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  8. Evaluation of Savannah River Plant emergency response models using standard and nonstandard meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoel, D.D.

    1984-01-01

    Two computer codes have been developed for operational use in performing real-time evaluations of atmospheric releases from the Savannah River Plant (SRP) in South Carolina. These codes, based on mathematical models, are part of the SRP WIND (Weather Information and Display) automated emergency response system. The accuracy of ground-level concentrations from a Gaussian puff-plume model and a two-dimensional sequential puff model is being evaluated with data from a series of short-range diffusion experiments using sulfur hexafluoride as a tracer. The models use meteorological data collected from 7 towers on SRP and at the 300 m WJBF-TV tower about 15 km northwest of SRP. The winds and the stability, which is based on turbulence measurements, are measured at the 60 m stack heights. These results are compared to downwind concentrations computed using only standard meteorological data, i.e., adjusted 10 m winds and stability determined by the Pasquill-Turner stability classification method. Scattergrams and simple statistics were used for the model evaluations. Results indicate predictions within accepted limits for the puff-plume code and a bias in the sequential puff model predictions using the meteorologist-adjusted nonstandard data.

  9. Computational models of location-invariant orthographic processing

    NASA Astrophysics Data System (ADS)

    Dandurand, Frédéric; Hannagan, Thomas; Grainger, Jonathan

    2013-03-01

    We trained three topologies of backpropagation neural networks to discriminate 2000 words (lexical representations) presented at different positions of a horizontal letter array. The first topology (zero-deck) contains no hidden layer, the second (one-deck) has a single hidden layer, and for the last topology (two-deck), the task is divided into two subtasks implemented as two stacked neural networks, with explicit word-centred letters as intermediate representations. All topologies successfully simulated two key benchmark phenomena observed in skilled human reading: transposed-letter priming and relative-position priming. However, the two-deck topology most accurately simulated the ability to discriminate words from nonwords, while containing the fewest connection weights. We analysed the internal representations after training. Zero-deck networks implement a letter-based scheme with a position bias to differentiate anagrams. One-deck networks implement a holographic overlap coding in which representations are essentially letter-based and words are linear combinations of letters. Two-deck networks also implement holographic coding.

  10. New horizon for high performance Mg-based biomaterial with uniform degradation behavior: Formation of stacking faults

    PubMed Central

    Zhang, Jinghuai; Xu, Chi; Jing, Yongbin; Lv, Shuhui; Liu, Shujuan; Fang, Daqing; Zhuang, Jinpeng; Zhang, Milin; Wu, Ruizhi

    2015-01-01

    Designing new microstructures is an effective way to accelerate the biomedical application of magnesium (Mg) alloys. In this study, a novel Mg–8Er–1Zn alloy with profuse nano-spaced basal-plane stacking faults (SFs) was prepared by combined processes of direct-chill semi-continuous casting, heat treatment and hot extrusion. The formation of SFs gave the alloy outstanding comprehensive performance as a biodegradable implant material. The ultimate tensile strength (UTS: 318 MPa), tensile yield strength (TYS: 207 MPa) and elongation (21%) of the alloy with SFs were superior to those of most reported degradable Mg-based alloys. This new alloy showed acceptable biotoxicity and degradation rate (0.34 mm/year), and the latter could be further slowed by optimizing the microstructure. Most notably, uniquely uniform in vitro/in vivo corrosion behavior was obtained due to the formation of SFs. Accordingly, we propose an original corrosion mechanism for the novel Mg alloy with SFs. The present study opens a new horizon for developing new Mg-based biomaterials with highly desirable performance. PMID:26349676

  11. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

    Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC) (3969, 3720) code suitable for optical transmission systems is constructed. A novel SCG-LDPC (6561, 6240) code with a code rate of 95.1% is then constructed by increasing the length of the SCG-LDPC (3969, 3720) code, so that the code rate of the LDPC code can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC (6561, 6240) code with a BCH (127, 120) code, for an overall code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is 2.28 dB and 0.48 dB higher than that of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code, respectively, at a bit error rate (BER) of 10^-7.

  12. Effect of Al2O3 insulator thickness on the structural integrity of amorphous indium-gallium-zinc-oxide based thin film transistors.

    PubMed

    Kim, Hak-Jun; Hwang, In-Ju; Kim, Youn-Jea

    2014-12-01

    Current transparent oxide semiconductor (TOS) technology provides flexibility and high performance. In this study, multi-stack nano-layers of TOSs were designed for three-dimensional analysis of amorphous indium-gallium-zinc-oxide (a-IGZO) based thin film transistors (TFTs). In particular, the effects of torsional and compressive stresses on the nano-sized active layers, such as the a-IGZO layer, were investigated. Numerical simulations were carried out to investigate the structural integrity of a-IGZO based TFTs with three different thicknesses of the aluminum oxide (Al2O3) insulator (δ = 10, 20, and 30 nm), using the commercial code COMSOL Multiphysics. The results are graphically depicted for the operating conditions.

  13. Reference View Selection in DIBR-Based Multiview Coding.

    PubMed

    Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice

    2016-04-01

    Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience on resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, two questions become fundamental: 1) how many reference views have to be chosen to keep good reconstruction quality under coding cost constraints? And 2) where should these key views be placed in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
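    The shortest-path formulation can be sketched compactly. With views ordered on a line and the (hypothetical) convention that the first and last views are always references, a DAG shortest path over "next reference" edges returns both how many references to use and where to put them. The toy edge cost below, a fixed reference coding rate plus a synthesis distortion growing with the gap, is illustrative, standing in for the paper's similarity-based metric.

    ```python
    def optimal_refs(n, cost):
        """Shortest path on the DAG of views: edge (i, j) = make j the next reference."""
        best, prev = {0: 0.0}, {0: None}
        for j in range(1, n):
            best[j], prev[j] = min((best[i] + cost(i, j), i) for i in range(j))
        refs, j = [], n - 1                      # backtrack from the last view
        while j is not None:
            refs.append(j); j = prev[j]
        return refs[::-1]

    R = 20.0                                     # per-reference coding rate (assumed)
    def cost(i, j):                              # synthesis distortion grows mid-gap
        return R + sum((k - i) * (j - k) for k in range(i + 1, j))

    print(optimal_refs(16, cost))                # roughly evenly spaced references
    ```

    Raising the per-reference rate R makes references sparser; cheaper references or faster-growing distortion pack them more densely, which is exactly the trade-off the optimization balances.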

  14. Evaluating statistical consistency in the ocean model component of the Community Earth System Model (pyCECT v2.0)

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen

    2016-07-01

    The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of ensemble-based statistical consistency testing is to use a qualitative measurement of the variability of an ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important. Our experiments indicate that the new POP ensemble consistency test (POP-ECT) tool is capable of distinguishing cases that should be statistically consistent with the ensemble from those that should not, as well as providing a simple, objective and systematic way to detect errors in CESM-POP due to the hardware or software stack, positively contributing to quality assurance for the CESM-POP code.
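    The grid-point test at the heart of this approach is easy to express. A minimal sketch follows, with synthetic data and placeholder thresholds; the real tool's threshold choices and spatial weighting differ.

    ```python
    import numpy as np

    def pop_ect(ensemble, new_run, z_thresh=3.0, fail_frac=0.05):
        """Flag a new run that is statistically distinguishable from the ensemble."""
        mu = ensemble.mean(axis=0)
        sd = ensemble.std(axis=0, ddof=1)
        z = np.abs(new_run - mu) / np.where(sd > 0, sd, np.inf)  # standard score per grid point
        frac = (z > z_thresh).mean()           # fraction of grid points with large scores
        return frac <= fail_frac, frac

    rng = np.random.default_rng(0)
    ens = rng.normal(size=(30, 60, 90))        # 30 members on a toy 60x90 ocean grid
    ok, frac = pop_ect(ens, rng.normal(size=(60, 90)))
    print(ok, f"{frac:.3f}")                   # a statistically consistent run passes
    ```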

  15. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package RotorCraft Optimization Tools (RCOTOOLS) is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework, written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify the text strings that mark specific variables as optimization inputs and responses. This paper provides an overview of RCOTOOLS and its use.
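    In its simplest form, the file-wrapping pattern that RCOTOOLS automates reduces to three steps: splice design-variable values into an input deck, run the executable, and pull response values back out of the output. A generic, hypothetical sketch follows; the file names, executable and regex are all placeholders, not the NDARC or CAMRAD II interfaces.

    ```python
    import re
    import subprocess

    def run_case(template, exe, design, response_pattern):
        """Write inputs, execute the external analysis code, extract one response."""
        with open("case.in", "w") as f:
            f.write(template.format(**design))          # update input file with new values
        subprocess.run([exe, "case.in"], check=True)    # execute the external code
        with open("case.out") as f:
            m = re.search(response_pattern, f.read())   # locate the response variable
        return float(m.group(1))

    # Hypothetical usage (names and pattern are placeholders):
    # gw = run_case(open("sizing.tpl").read(), "./sizing_code",
    #               {"disk_loading": 10.5}, r"GROSS WEIGHT\s*=\s*([\d.]+)")
    ```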

  16. Heuristic rules embedded genetic algorithm for in-core fuel management optimization

    NASA Astrophysics Data System (ADS)

    Alim, Fatih

    The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GAs) have already been used to solve the LP optimization problem for both the PWR and the Boiling Water Reactor (BWR). The GA, a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population using genetic operators. To solve this optimization problem, an LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, was developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures, with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA was developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm was changed, so as to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis, and preliminary results are shown for the hexagonal-geometry VVER-1000 reactor core and the TMI-1 PWR. The core physics code used for the VVER in this research is Moby-Dick, which was developed by SKODA Inc. to analyze the VVER. The SIMULATE-3 code, an advanced two-group nodal code, is used to analyze the TMI-1.

  17. First-Principles Quantum Dynamics of Singlet Fission: Coherent versus Thermally Activated Mechanisms Governed by Molecular π Stacking

    NASA Astrophysics Data System (ADS)

    Tamura, Hiroyuki; Huix-Rotllant, Miquel; Burghardt, Irene; Olivier, Yoann; Beljonne, David

    2015-09-01

    Singlet excitons in π-stacked molecular crystals can split into two triplet excitons in a process called singlet fission that opens a route to carrier multiplication in photovoltaics. To resolve controversies about the mechanism of singlet fission, we have developed a first-principles nonadiabatic quantum dynamical model that reveals the critical role of molecular stacking symmetry and provides a unified picture of coherent versus thermally activated singlet fission mechanisms in different acenes. The slip-stacked equilibrium packing structure of pentacene derivatives is found to enhance ultrafast singlet fission mediated by a coherent superexchange mechanism via higher-lying charge transfer states. By contrast, the electronic couplings for singlet fission strictly vanish at the C2h-symmetric equilibrium π stacking of rubrene. In this case, singlet fission is driven by excitations of symmetry-breaking intermolecular vibrations, rationalizing the experimentally observed temperature dependence. Design rules for optimal singlet fission materials therefore need to account for the interplay of molecular π-stacking symmetry and phonon-induced coherent or thermally activated mechanisms.

  18. Stacking with stochastic cooling

    NASA Astrophysics Data System (ADS)

    Caspers, Fritz; Möhl, Dieter

    2004-10-01

    Accumulation of large stacks of antiprotons or ions with the aid of stochastic cooling is more delicate than cooling a constant-intensity beam. Basically, the difficulty stems from the fact that the optimized gain and the cooling rate are inversely proportional to the number of particles 'seen' by the cooling system. Therefore, to maintain fast stacking, the newly injected batch has to be strongly 'protected' from the Schottky noise of the stack. Vice versa, the stack has to be efficiently 'shielded' against the high-gain cooling system for the injected beam. In the antiproton accumulators with stacking ratios up to 10^5, the problem is solved by radial separation of the injection and stack orbits in a region of large dispersion. An array of several tapered cooling systems with a matched gain profile provides a continuous particle flux towards the high-density stack core. Shielding of the different systems from each other is obtained both through the spatial separation and via the revolution frequencies (filters). In the 'old AA', where the antiproton collection and stacking were done in one single ring, the injected beam was further shielded during cooling by means of a movable shutter. The complexity of these systems is very high. For more modest stacking ratios, one might use azimuthal rather than radial separation of stack and injected beam. Schematically, half of the circumference would be used to accept and cool new beam and the remainder to house the stack. Fast gating is then required between the high-gain cooling of the injected beam and the low-gain stack cooling. RF gymnastics are used to merge the pre-cooled batch with the stack, to re-create free space for the next injection, and to capture the new batch. This scheme is less demanding for the storage ring lattice, but at the expense of some reduction in stacking rate. The talk reviews the 'radial' separation schemes and also gives some consideration to the 'azimuthal' schemes.

  19. CUBE: Information-optimized parallel cosmological N-body simulation code

    NASA Astrophysics Data System (ADS)

    Yu, Hao-Ran; Pen, Ue-Li; Wang, Xin

    2018-05-01

    CUBE, written in Coarray Fortran, is a particle-mesh based parallel cosmological N-body simulation code. The memory usage of CUBE can approach as low as 6 bytes per particle. Particle-pairwise (PP) force, cosmological neutrinos, and a spherical overdensity (SO) halofinder are included.

  20. A Deep Ensemble Learning Method for Monaural Speech Separation.

    PubMed

    Zhang, Xiao-Lei; Wang, DeLiang

    2016-03-01

    Monaural speech separation is a fundamental problem in robust speech processing. Recently, deep neural network (DNN)-based speech separation methods, which predict either clean speech or an ideal time-frequency mask, have demonstrated remarkable performance improvement. However, a single DNN with a given window length does not leverage contextual information sufficiently, and the differences between the two optimization objectives are not well understood. In this paper, we propose a deep ensemble method, named multicontext networks, to address monaural speech separation. The first multicontext network averages the outputs of multiple DNNs whose inputs employ different window lengths. The second multicontext network is a stack of multiple DNNs. Each DNN in a module of the stack takes the concatenation of original acoustic features and expansion of the soft output of the lower module as its input, and predicts the ratio mask of the target speaker; the DNNs in the same module employ different contexts. We have conducted extensive experiments with three speech corpora. The results demonstrate the effectiveness of the proposed method. We have also compared the two optimization objectives systematically and found that predicting the ideal time-frequency mask is more efficient in utilizing clean training speech, while predicting clean speech is less sensitive to SNR variations.
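    A compact way to see the first multicontext idea: expand the same frame-level features with several context window lengths, train one predictor per window, and average their mask estimates. In this hypothetical numpy sketch, tiny ridge regressors stand in for the paper's DNNs, and the features and ideal ratio mask are synthetic.

      import numpy as np

      def add_context(feats, w):
          """Expand frame features with w neighbor frames on each side."""
          T, D = feats.shape
          padded = np.pad(feats, ((w, w), (0, 0)), mode="edge")
          return np.hstack([padded[i:i + T] for i in range(2 * w + 1)])

      def fit_ridge(X, y, lam=1e-3):
          """Ridge regression as a stand-in for one network of the ensemble."""
          A = X.T @ X + lam * np.eye(X.shape[1])
          return np.linalg.solve(A, X.T @ y)

      rng = np.random.default_rng(1)
      T, D = 500, 8
      feats = rng.standard_normal((T, D))                  # acoustic features
      w_true = rng.standard_normal(D)
      mask = 1.0 / (1.0 + np.exp(-feats @ w_true))         # toy ideal ratio mask

      windows = [1, 3, 5]                                  # context half-widths
      models = [fit_ridge(add_context(feats, w), mask) for w in windows]

      # Multicontext averaging: mean of the per-window mask estimates.
      pred = np.mean([add_context(feats, w) @ m
                      for w, m in zip(windows, models)], axis=0)
      print("MSE of averaged estimate:", np.mean((pred - mask) ** 2))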

  1. Automatic optimization high-speed high-resolution OCT retinal imaging at 1μm

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Liu, Xiyun; Miao, Dongkai; Lee, Sujin; Lee, Sieun; Bonora, Stefano; Zawadzki, Robert J.; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.

    2015-03-01

    High-resolution OCT retinal imaging is important in providing visualization of various retinal structures to aid researchers in better understanding the pathogenesis of vision-robbing diseases. However, conventional optical coherence tomography (OCT) systems have a trade-off between lateral resolution and depth of focus. In this report, we present the development of a focus-stacking optical coherence tomography (OCT) system with automatic optimization for high-resolution, extended-focal-range clinical retinal imaging. A variable-focus liquid lens was added to correct for defocus in real time. GPU-accelerated segmentation and optimization were used to provide real-time, layer-specific en face visualization as well as depth-specific focus adjustment. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the optic nerve head (ONH), from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.
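    The merging step is, at its core, a focus-stacking selection: at each location, keep the acquisition that is locally sharpest. A generic numpy sketch using Laplacian energy as the sharpness proxy (a textbook formulation, not the authors' GPU-accelerated pipeline):

      import numpy as np

      def laplacian_energy(img):
          """Local sharpness proxy: squared discrete Laplacian."""
          lap = (-4 * img
                 + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                 + np.roll(img, 1, 1) + np.roll(img, -1, 1))
          return lap ** 2

      def focus_stack(images):
          """Per-pixel selection of the sharpest image in a registered stack."""
          sharp = np.stack([laplacian_energy(im) for im in images])
          best = np.argmax(sharp, axis=0)          # index of sharpest slice
          fused = np.take_along_axis(np.stack(images), best[None], axis=0)[0]
          return fused, best

      rng = np.random.default_rng(2)
      imgs = [rng.random((64, 64)) for _ in range(3)]  # stand-ins for registered slices
      fused, choice = focus_stack(imgs)
      print(fused.shape, np.bincount(choice.ravel()))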

  2. Probabilistic source mechanism estimation based on body-wave waveforms through shift and stack algorithm

    NASA Astrophysics Data System (ADS)

    Massin, F.; Malcolm, A. E.

    2017-12-01

    Knowing earthquake source mechanisms gives valuable information for earthquake response planning and hazard mitigation. Earthquake source mechanisms can be analyzed using long-period waveform inversion (for moderate-size sources with sufficient signal-to-noise ratio) and body-wave first-motion polarity or amplitude-ratio inversion (for micro-earthquakes with sufficient data coverage). A robust approach that gives both source mechanisms and their associated probabilities across all source scales would greatly simplify the determination of source mechanisms and allow for more consistent interpretations of the results. Following previous work on shift-and-stack approaches, we develop such a probabilistic source mechanism analysis, using waveforms, which does not require polarity picking. For a given source mechanism, the first period of the observed body waves is selected for all stations, multiplied by the corresponding theoretical polarities, and stacked together. (The first period is found from a manually picked travel time by measuring the central period where the signal power is concentrated, using the second moment of the power spectral density function.) As in other shift-and-stack approaches, our method is not based on the optimization of an objective function through an inversion. Instead, the power of the polarity-corrected stack is a proxy for the likelihood of the trial source mechanism, with the most powerful stack corresponding to the most likely source mechanism. Using synthetic data, we test our method for robustness to the data coverage, coverage gap, signal-to-noise ratio, travel-time picking errors, and non-double-couple component. We then present results for field data in a volcano-tectonic context. Our results are reliable when constrained by 15 body wavelets, with a coverage gap below 150 degrees, a signal-to-noise ratio over 1, and an arrival-time error below a fifth of the period (0.2T) of the body wave. We demonstrate that the source scanning approach for source mechanism analysis has similar advantages to waveform inversion (full waveform data, no manual intervention, probabilistic approach) and similar applicability to polarity inversion (any source size, any instrument type).
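    The scan itself reduces to a few lines: for each trial mechanism, multiply each station's first-period wavelet by that mechanism's predicted polarity, stack, and use the stack power as the likelihood proxy. A toy numpy sketch with synthetic wavelets and random trial polarity patterns (a real implementation would derive the polarity patterns from trial focal mechanisms):

      import numpy as np

      rng = np.random.default_rng(3)
      n_sta, n_samp = 15, 40                   # stations, samples per wavelet

      # Synthetic "true" polarities and noisy first-period body wavelets.
      true_pol = rng.choice([-1.0, 1.0], size=n_sta)
      pulse = np.sin(np.linspace(0, np.pi, n_samp))    # one half-period pulse
      waves = true_pol[:, None] * pulse + 0.3 * rng.standard_normal((n_sta, n_samp))

      # Trial mechanisms, each encoded by its predicted polarity pattern.
      trials = rng.choice([-1.0, 1.0], size=(200, n_sta))
      trials[0] = true_pol                     # include the true pattern

      def stack_power(pred_pol):
          """Power of the polarity-corrected stack: the likelihood proxy."""
          stack = (pred_pol[:, None] * waves).sum(axis=0)
          return np.sum(stack ** 2)

      powers = np.array([stack_power(t) for t in trials])
      print("best trial:", np.argmax(powers), "power:", powers.max())
      # Normalized stack powers can be read as relative mechanism likelihoods.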

  3. Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D

    NASA Technical Reports Server (NTRS)

    Carle, Alan; Fagan, Mike; Green, Lawrence L.

    1998-01-01

    This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.

  4. ALICE HLT Run 2 performance overview.

    NASA Astrophysics Data System (ADS)

    Krzewicki, Mikolaj; Lindenstruth, Volker; ALICE Collaboration

    2017-10-01

    For the LHC Run 2 the ALICE HLT architecture was consolidated to comply with the upgraded ALICE detector readout technology. The software framework was optimized and extended to cope with the increased data load. Online calibration of the TPC using online tracking capabilities of the ALICE HLT was deployed. Offline calibration code was adapted to run both online and offline and the HLT framework was extended to support that. The performance of this schema is important for Run 3 related developments. An additional data transport approach was developed using the ZeroMQ library, forming at the same time a test bed for the new data flow model of the O2 system, where further development of this concept is ongoing. This messaging technology was used to implement the calibration feedback loop augmenting the existing, graph oriented HLT transport framework. Utilising the online reconstruction of many detectors, a new asynchronous monitoring scheme was developed to allow real-time monitoring of the physics performance of the ALICE detector, on top of the new messaging scheme for both internal and external communication. Spare computing resources comprising the production and development clusters are run as a tier-2 GRID site using an OpenStack-based setup. The development cluster is running continuously, the production cluster contributes resources opportunistically during periods of LHC inactivity.
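    For readers unfamiliar with the transport, asynchronous monitoring of this kind typically rests on ZeroMQ-style PUB/SUB messaging, which decouples producers from consumers. A minimal pyzmq sketch of such a channel (endpoint and topic names are invented, not the HLT's actual configuration):

      import threading
      import time
      import zmq

      def monitor_publisher(endpoint):
          """Reconstruction-node side: publish monitoring payloads."""
          ctx = zmq.Context.instance()
          pub = ctx.socket(zmq.PUB)
          pub.bind(endpoint)
          time.sleep(0.2)                      # allow the subscriber to connect
          for i in range(3):
              pub.send_multipart([b"tpc.qa", f"event {i} ok".encode()])
          pub.close()

      def monitor_subscriber(endpoint):
          """Monitoring side: receives asynchronously, never blocks the producer."""
          ctx = zmq.Context.instance()
          sub = ctx.socket(zmq.SUB)
          sub.connect(endpoint)
          sub.setsockopt(zmq.SUBSCRIBE, b"tpc.")   # topic filter
          for _ in range(3):
              topic, payload = sub.recv_multipart()
              print(topic.decode(), payload.decode())
          sub.close()

      endpoint = "tcp://127.0.0.1:5556"
      t = threading.Thread(target=monitor_subscriber, args=(endpoint,))
      t.start()
      monitor_publisher(endpoint)
      t.join()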

  5. MODTOHAFSD — A GUI based JAVA code for gravity analysis of strike limited sedimentary basins by means of growing bodies with exponential density contrast-depth variation: A space domain approach

    NASA Astrophysics Data System (ADS)

    Chakravarthi, V.; Sastry, S. Rajeswara; Ramamma, B.

    2013-07-01

    Based on the principles of modeling and inversion, two interpretation methods are developed in the space domain, along with a GUI-based JAVA code, MODTOHAFSD, to analyze the gravity anomalies of strike-limited sedimentary basins using a prescribed exponential density contrast-depth function. A stack of vertical prisms, all having equal widths but each with its own limited strike length and thickness, describes the structure of a sedimentary basin above the basement complex. The thicknesses of the prisms represent the depths to the basement and are the unknown parameters to be estimated from the observed gravity anomalies. Forward modeling is realized in the space domain using a combination of analytical and numerical approaches. The algorithm estimates the initial depths of a sedimentary basin and improves them iteratively, based on the differences between the observed and modeled gravity anomalies, within the specified convergence criteria. The code, which follows the Model-View-Controller (MVC) pattern, reads the Bouguer gravity anomalies, constructs/modifies the regional gravity background interactively, estimates residual gravity anomalies, and performs automatic modeling or inversion based on user specification for the basement topography. Besides generating output in both ASCII and graphical forms, the code displays, in animated form, (i) the changes in the depth structure, (ii) the fit between the observed and modeled gravity anomalies, (iii) the changes in misfit, and (iv) the variation of density contrast with iteration. The code is used to analyze both synthetic and real field gravity anomalies. The proposed technique yielded information consistent with the assumed parameters in the case of the synthetic structure and with available drilling depths in the case of the field example. The advantage of the code is that it can be used to analyze the gravity anomalies of sedimentary basins even when the profile along which the interpretation is intended fails to bisect the strike length.
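    The iterative depth update is in the spirit of Bott's classical scheme: the residual anomaly at each station is converted into a depth correction for the prism beneath it. A simplified numpy sketch with a constant density contrast and a smoothed infinite-slab forward model standing in for the prism computation (the actual code uses prisms with an exponential density contrast-depth function):

      import numpy as np

      G = 6.674e-11                 # gravitational constant, SI
      RHO = -400.0                  # density contrast, kg/m^3 (constant here)
      SLAB = 2.0 * np.pi * G * RHO  # infinite-slab gravity per meter of depth

      def forward(depths):
          """Toy forward model in mGal: slab response, laterally smoothed
          to mimic the finite width of the prisms."""
          g = SLAB * depths * 1e5
          kernel = np.array([0.05, 0.2, 0.5, 0.2, 0.05])
          return np.convolve(g, kernel, mode="same")

      # Synthetic observed anomaly over a basin.
      x = np.linspace(-10.0, 10.0, 41)
      true_depth = 2000.0 * np.exp(-(x / 4.0) ** 2)
      g_obs = forward(true_depth)

      # Bott-style iteration: correct depths by the residual anomaly.
      depths = np.zeros_like(x)
      for it in range(100):
          resid = g_obs - forward(depths)
          depths = np.maximum(depths + resid / (SLAB * 1e5), 0.0)
          if np.max(np.abs(resid)) < 1e-3:     # convergence criterion (mGal)
              break
      print(f"converged after {it + 1} iterations; "
            f"max depth {depths.max():.0f} m (true 2000 m)")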

  6. HOMAR: A computer code for generating homotopic grids using algebraic relations: User's manual

    NASA Technical Reports Server (NTRS)

    Moitra, Anutosh

    1989-01-01

    A computer code for fast automatic generation of quasi-three-dimensional grid systems for aerospace configurations is described. The code employs a homotopic method to algebraically generate two-dimensional grids in cross-sectional planes, which are stacked to produce a three-dimensional grid system. Implementation of the algebraic equivalents of the homotopic relations for generating body geometries and grids is explained. Procedures for controlling grid orthogonality and distortion are described. Test cases with description and specification of inputs are presented in detail. The FORTRAN computer program and notes on implementation and use are included.
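    The core homotopy is a one-parameter algebraic blend between an inner (body) curve and an outer boundary curve, H(s, t) = (1 - t)·inner(s) + t·outer(s); stacking such 2D grids along the body axis yields the quasi-3D system. A minimal numpy sketch of a linear blend (the actual code uses more elaborate blending functions for orthogonality and distortion control):

      import numpy as np

      def homotopic_grid(inner, outer, n_levels):
          """Blend two closed curves (N, 2) into an (n_levels, N, 2) 2D grid."""
          t = np.linspace(0.0, 1.0, n_levels)[:, None, None]
          return (1.0 - t) * inner + t * outer

      # Inner curve: an ellipse standing in for a fuselage cross-section.
      s = np.linspace(0.0, 2.0 * np.pi, 73)
      inner = np.stack([1.0 * np.cos(s), 0.5 * np.sin(s)], axis=1)
      outer = np.stack([4.0 * np.cos(s), 4.0 * np.sin(s)], axis=1)  # far boundary

      plane = homotopic_grid(inner, outer, 21)       # one cross-sectional grid
      print(plane.shape)                             # (21, 73, 2)

      # Stacking planes along the body axis gives the quasi-3D grid.
      grid3d = np.stack([plane for _ in range(11)])  # (11, 21, 73, 2)
      print(grid3d.shape)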

  7. Sequence-Dependent Elasticity and Electrostatics of Single-Stranded DNA: Signatures of Base-Stacking

    PubMed Central

    McIntosh, Dustin B.; Duggan, Gina; Gouil, Quentin; Saleh, Omar A.

    2014-01-01

    Base-stacking is a key factor in the energetics that determines nucleic acid structure. We measure the tensile response of single-stranded DNA as a function of sequence and monovalent salt concentration to examine the effects of base-stacking on the mechanical and thermodynamic properties of single-stranded DNA. By comparing the elastic response of highly stacked poly(dA) and that of a polypyrimidine sequence with minimal stacking, we find that base-stacking in poly(dA) significantly enhances the polymer's rigidity. The unstacking transition of poly(dA) at high force reveals that the intrinsic electrostatic tension on the molecule depends significantly more weakly on salt concentration than mean-field theory predicts. Further, we provide a model-independent estimate of the free energy difference between stacked poly(dA) and unstacked polypyrimidine, finding it to be ≈ −0.25 kBT/base and nearly constant over three orders of magnitude in salt concentration. PMID:24507606

  8. Charliecloud: Unprivileged containers for user-defined software stacks in HPC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Priedhorsky, Reid; Randles, Timothy C.

    Supercomputing centers are seeing increasing demand for user-defined software stacks (UDSS), instead of or in addition to the stack provided by the center. These UDSS support user needs such as complex dependencies or build requirements, externally required configurations, portability, and consistency. The challenge for centers is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance. We present Charliecloud, which uses the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in less than 500 lines of code. Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources.

  9. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines

    PubMed Central

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-01-01

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. According to the characteristics that the space, time and frequency resources of underground tunnel are open, it is proposed to constitute wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a kind of multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission and D-PSO algorithm with particle swarm optimization. PMID:26343660
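    The D-PSO detector itself is specific to the MC-CDMA signal model, but the particle swarm machinery beneath it is generic. A minimal numpy PSO sketch minimizing a test function (standard inertia/cognitive/social update; the constants are conventional defaults, not the paper's):

      import numpy as np

      def pso(cost, dim, n_particles=30, iters=100, seed=0):
          """Minimal particle swarm optimizer (global-best topology)."""
          rng = np.random.default_rng(seed)
          w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social
          x = rng.uniform(-5, 5, (n_particles, dim)) # positions
          v = np.zeros_like(x)                       # velocities
          pbest, pbest_f = x.copy(), np.apply_along_axis(cost, 1, x)
          g = pbest[np.argmin(pbest_f)]              # global best
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = x + v
              f = np.apply_along_axis(cost, 1, x)
              improved = f < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              g = pbest[np.argmin(pbest_f)]
          return g, pbest_f.min()

      # In the paper's setting, cost would be the multiuser-detection metric;
      # here a simple sphere function stands in.
      best, best_f = pso(lambda z: np.sum(z ** 2), dim=4)
      print(best, best_f)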

  11. The multi-scattering model for calculations of positron spatial distribution in the multilayer stacks, useful for conventional positron measurements

    NASA Astrophysics Data System (ADS)

    Dryzek, Jerzy; Siemek, Krzysztof

    2013-08-01

    The spatial distribution of positrons emitted from radioactive isotopes into stacks or layered samples is the subject of this report. It was found that Monte Carlo (MC) simulations using the GEANT4 code are not able to correctly describe the experimental positron fractions in stacks. A mathematical model is proposed for calculating the implantation profile and the positron fractions in the separate layers or foils of a stack. The model takes into account only two processes: positron absorption and backscattering at interfaces. The mathematical formulas were implemented in a computer program called LYS-1 (layers profile analysis). The theoretical predictions of the model are in good agreement with the results of MC simulations for a semi-infinite sample. Experimental verifications of the model were performed on symmetrical and non-symmetrical stacks of different foils, and good agreement between the experimental and calculated positron fractions in the components of a stack was achieved. The experimental implantation profile obtained by depth scanning of the positron implantation technique is also well described by the theoretical profile obtained within the proposed model. The LYS-1 program also allows calculation of the fraction of positrons that annihilate in the source, which can be useful in positron spectroscopy.
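    For the absorption-only part of such a model, a commonly quoted empirical approximation takes the positron stopping profile in matter to be exponential, with absorption coefficient α ≈ 16ρ/E_max^1.4 (ρ in g/cm^3, E_max in MeV); the fraction stopping in each foil then follows by propagating the transmitted intensity through the stack. A numpy sketch under that textbook approximation (no interface backscattering, which the paper's model adds):

      import numpy as np

      def absorption_coeff(rho, e_max):
          """Empirical exponential absorption coefficient (cm^-1):
          alpha ~ 16 * rho[g/cm^3] / E_max[MeV]^1.4 (common approximation)."""
          return 16.0 * rho / e_max ** 1.4

      def layer_fractions(thicknesses_cm, rhos, e_max):
          """Fraction of positrons stopping in each layer of a stack."""
          fracs, transmitted = [], 1.0
          for d, rho in zip(thicknesses_cm, rhos):
              alpha = absorption_coeff(rho, e_max)
              stopped = transmitted * (1.0 - np.exp(-alpha * d))
              fracs.append(stopped)
              transmitted -= stopped
          return np.array(fracs), transmitted

      # Example: a stack of six 20 um Al foils probed with Na-22 positrons
      # (endpoint energy 0.545 MeV).
      d = [20e-4] * 6                    # layer thicknesses in cm
      rho = [2.70] * 6                   # Al density, g/cm^3
      fracs, rest = layer_fractions(d, rho, e_max=0.545)
      print(np.round(fracs, 3), "transmitted:", round(rest, 3))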

  12. MetaJC++: A flexible and automatic program transformation technique using meta framework

    NASA Astrophysics Data System (ADS)

    Beevi, Nadera S.; Reghu, M.; Chitraprasad, D.; Vinodchandra, S. S.

    2014-09-01

    A compiler is a tool that translates abstract code containing natural-language terms into machine code. Meta compilers are available that compile more than one language. We have developed a meta framework that combines two dissimilar programming languages, C++ and Java, to provide a flexible object-oriented programming platform for the user. Suitable constructs from both languages have been combined, forming a new and stronger meta-language. The framework is developed using the compiler-writing tools Flex and Yacc to design the front end of the compiler. The lexer and parser accommodate the complete keyword and syntax sets of both languages. Two intermediate representations are used between the source program and the machine code. An abstract syntax tree serves as the high-level intermediate representation, preserving the hierarchical properties of the source program. A new machine-independent stack-based byte-code has also been devised to act as the low-level intermediate representation. The byte-code is organised into an output class file that can be used to produce an interpreted output. The results, especially in providing C++ concepts in Java, give an insight into the potential strengths of the resulting meta-language.
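    A stack-based byte-code keeps evaluation state on an operand stack instead of in named registers, which is what makes the low-level IR machine-independent. A toy Python interpreter for such a byte-code (the opcodes are invented for illustration, not MetaJC++'s actual instruction set):

      # Toy stack machine: each instruction pops its operands from, and
      # pushes its result onto, a single operand stack.
      def run(bytecode, consts):
          stack = []
          for op, arg in bytecode:
              if op == "PUSH_CONST":
                  stack.append(consts[arg])
              elif op == "ADD":
                  b, a = stack.pop(), stack.pop()
                  stack.append(a + b)
              elif op == "MUL":
                  b, a = stack.pop(), stack.pop()
                  stack.append(a * b)
              elif op == "PRINT":
                  print(stack.pop())
              else:
                  raise ValueError(f"unknown opcode {op}")
          return stack

      # (2 + 3) * 4, compiled to postfix byte-code:
      program = [("PUSH_CONST", 0), ("PUSH_CONST", 1), ("ADD", None),
                 ("PUSH_CONST", 2), ("MUL", None), ("PRINT", None)]
      run(program, consts=[2, 3, 4])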

  13. Characterization of diode-laser stacks for high-energy-class solid state lasers

    NASA Astrophysics Data System (ADS)

    Pilar, Jan; Sikocinski, Pawel; Pranowicz, Alina; Divoky, Martin; Crump, P.; Staske, R.; Lucianetti, Antonio; Mocek, Tomas

    2014-03-01

    In this work, we present a comparative study of high-power diode stacks produced by the world's leading manufacturers DILAS, Jenoptik, and Quantel. The diode-laser stacks are characterized by a central wavelength around 939 nm, a duty cycle of 1%, and a maximum repetition rate of 10 Hz. The characterization includes peak power, electrical-to-optical efficiency, central wavelength, and full width at half maximum (FWHM) as a function of diode current and cooling temperature. A cross-check of measurements performed at HiLASE-IoP and the Ferdinand-Braun-Institut (FBH) shows very good agreement between the results. Our study also reveals the presence of discontinuities in the spectra of two diode stacks. We consider the results presented here a valuable tool for optimizing pump sources for ultra-high-average-power lasers, including laser fusion facilities.

  14. Overall Traveling-Wave-Tube Efficiency Improved By Optimized Multistage Depressed Collector Design

    NASA Technical Reports Server (NTRS)

    Vaden, Karl R.

    2002-01-01

    The microwave traveling wave tube (TWT) is used widely for space communications and high-power airborne transmitting sources. One of the most important considerations in designing a TWT is overall efficiency, which is strongly dependent on the efficiency of the electron beam collector, particularly for high values of collector efficiency. For these reasons, the NASA Glenn Research Center developed an optimization algorithm based on simulated annealing to quickly design highly efficient multistage depressed collectors (MDCs). Simulated annealing is a strategy for solving highly nonlinear combinatorial optimization problems. Its major advantage over other methods is its ability to avoid becoming trapped in local minima. Simulated annealing is based on an analogy to statistical thermodynamics, specifically the physical process of annealing: heating a material to a temperature that permits many atomic rearrangements and then cooling it carefully and slowly until it freezes into a strong, minimum-energy crystalline structure. This minimum-energy crystal corresponds to the optimal solution of a mathematical optimization problem. The TWT used as a baseline for optimization was the 32-GHz, 10-W, helical TWT developed for the Cassini mission to Saturn. The method of collector analysis and design used was a 2-1/2-dimensional computational procedure that employs two types of codes, a large-signal analysis code and an electron trajectory code. The large-signal analysis code produces the spatial, energetic, and temporal distributions of the spent beam entering the MDC. An electron trajectory code uses the resultant data to perform the actual collector analysis. The MDC was optimized for maximum MDC efficiency and minimum final kinetic energy of all collected electrons (to reduce heat transfer). The optimized collector reaches an efficiency of 93.8 percent. The results show an improvement in collector efficiency from 89.7 to 93.8 percent, corresponding to an increase of three overall efficiency points. In addition, the time to design a highly efficient MDC was reduced from a month to a few days. All work was done in-house at Glenn for the High Rate Data Delivery Program. Future plans include optimizing the MDC and TWT interaction circuit in tandem to further improve overall TWT efficiency.
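    The annealing loop described above is compact enough to sketch directly: accept uphill moves with probability exp(-ΔE/T) while the temperature T decays, so the search can escape local minima. A generic Python sketch (the MDC application would replace the toy energy with the collector-efficiency figure computed by the trajectory code):

      import math
      import random

      def simulated_annealing(energy, neighbor, x0, t0=1.0, cooling=0.995,
                              steps=5000, seed=0):
          """Generic simulated annealing minimizer."""
          rng = random.Random(seed)
          x, e = x0, energy(x0)
          best, best_e = x, e
          t = t0
          for _ in range(steps):
              x_new = neighbor(x, rng)
              e_new = energy(x_new)
              # Always accept improvements; accept worse moves with
              # probability exp(-dE / T).
              if e_new < e or rng.random() < math.exp(-(e_new - e) / t):
                  x, e = x_new, e_new
                  if e < best_e:
                      best, best_e = x, e
              t *= cooling                    # slow geometric cooling schedule
          return best, best_e

      # Toy 1-D energy landscape with many local minima.
      energy = lambda x: x * x + 10 * math.sin(3 * x)
      neighbor = lambda x, rng: x + rng.gauss(0.0, 0.5)
      print(simulated_annealing(energy, neighbor, x0=4.0))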

  15. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    PubMed

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image-fusion-based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN-based image fusion technique is used to combine the relevant information of the multi-focal images within a given stack into a single image, which is more informative and complete than any single image in the stack. In addition, the multi-focal images within a stack are fused along 3 orthogonal directions, and the multiple features extracted from the fused images along the different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class, and different classes of objects - we embed the deep CNN-based image fusion method within a multilinear framework to propose an image-fusion-based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrate that the deep CNN image-fusion-based multilinear classifier reaches a higher classification rate (95.7%) than the previous multilinear-based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed approach shows great potential in building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks.

  16. FBCOT: a fast block coding option for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high-performance scientific, geospatial, and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5 dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression rate-distortion optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).

  17. Optimizing Aspect-Oriented Mechanisms for Embedded Applications

    NASA Astrophysics Data System (ADS)

    Hundt, Christine; Stöhr, Daniel; Glesner, Sabine

    As applications for small embedded mobile devices are getting larger and more complex, it becomes inevitable to adopt more advanced software engineering methods from the field of desktop application development. Aspect-oriented programming (AOP) is a promising approach due to its advanced modularization capabilities. However, existing AOP languages tend to add substantial overhead in both execution time and code size, which restricts their practicality for small devices with limited resources. In this paper, we present optimizations for aspect-oriented mechanisms at the level of the virtual machine. Our experiments show that these optimizations yield a considerable performance gain along with a reduction in code size. Thus, our optimizations establish the basis for using advanced aspect-oriented modularization techniques to develop Java applications on small embedded devices.

  18. A stimulus-dependent spike threshold is an optimal neural coder

    PubMed Central

    Jones, Douglas L.; Johnson, Erik C.; Ratnam, Rama

    2015-01-01

    A neural code based on sequences of spikes can consume a significant portion of the brain's energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike generator, which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike-firing rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed form in the limit of high spike rates, when the signal can be approximated as a piecewise-constant signal. The predicted spike times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code. PMID:26082710
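    The firing rule reduces to a deterministic encode-decode loop: a leaky filter integrating past spikes acts as the internal decoder, and a spike is emitted whenever the signal-minus-reconstruction error crosses the threshold. A toy numpy sketch of that loop (the filter constant and threshold are arbitrary illustration values, not fitted to the paper's data):

      import numpy as np

      def threshold_coder(signal, dt=1e-3, tau=0.02, theta=0.05):
          """Spike when the coding error reaches the threshold; the decoder
          is a leaky (low-pass) filter of the emitted spikes."""
          decay = np.exp(-dt / tau)
          recon, spikes = 0.0, []
          trace = np.empty_like(signal)
          for i, s in enumerate(signal):
              recon *= decay                 # internal decoder: leaky integration
              if s - recon >= theta:         # error hits threshold -> spike
                  spikes.append(i * dt)
                  recon += theta             # each spike adds a fixed kernel
              trace[i] = recon
          return np.array(spikes), trace

      t = np.arange(0.0, 1.0, 1e-3)
      signal = 0.5 * (1.0 + np.sin(2 * np.pi * 3 * t))   # slowly varying input
      spikes, recon = threshold_coder(signal)
      print(f"{len(spikes)} spikes, mean |error| = "
            f"{np.mean(np.abs(signal - recon)):.3f}")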

  19. Geant4 Computing Performance Benchmarking and Monitoring

    DOE PAGES

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...

    2015-12-23

    Performance evaluation and analysis of large-scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute-intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, in both sequential and multi-threaded modes, includes FAST, IgProf and Open|Speedshop. Finally, the scalability of CPU time and memory performance in multi-threaded applications is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  20. Size-tunable band alignment and optoelectronic properties of transition metal dichalcogenide van der Waals heterostructures

    NASA Astrophysics Data System (ADS)

    Zhao, Yipeng; Yu, Wangbing; Ouyang, Gang

    2018-01-01

    2D transition metal dichalcogenide (TMDC)-based heterostructures exhibit several fascinating properties that can address the emerging market of energy conversion and storage devices. Current achievements show that vertically stacked TMDC heterostructures can form type-II band alignment and possess significant optoelectronic properties. However, a detailed analytical understanding of how to quantify the band alignment and band offset, as well as the optimized power conversion efficiency (PCE), is still lacking. Herein, we propose an analytical model for the PCEs of TMDC van der Waals (vdW) heterostructures and explore the intrinsic mechanism of photovoltaic conversion based on the detailed balance principle and the atomic-bond-relaxation correlation mechanism. We find that the PCE of monolayer MoS2/WSe2 can be up to 1.70%, and that the PCE of MoS2/WSe2 vdW heterostructures increases with thickness, owing to increasing optical absorption. Moreover, the results are validated by comparison with the available evidence, providing realistic efficiency targets and design principles.
    Highlights:
    • Both electronic and optoelectronic models are developed for vertically stacked MoS2/WSe2 heterostructures.
    • The underlying mechanism of the size effect on the electronic and optoelectronic properties of vertically stacked MoS2/WSe2 heterostructures is clarified.
    • The macroscopically measurable quantities and the microscopic bond identities are connected.

  1. Progress and process improvements for multiple electron-beam direct write

    NASA Astrophysics Data System (ADS)

    Servin, Isabelle; Pourteau, Marie-Line; Pradelles, Jonathan; Essomba, Philippe; Lattard, Ludovic; Brandt, Pieter; Wieland, Marco

    2017-06-01

    Massively parallel electron beam direct write (MP-EBDW) lithography is a cost-effective patterning solution, complementary to optical lithography, for a variety of applications ranging from 200 to 14 nm. This paper presents the latest process and integration results toward the targets for both the 28 and 45 nm nodes. For the 28 nm node, we mainly focus on line-width roughness (LWR) mitigation through the stack, a new resist platform, and a bias design strategy. Line roughness was reduced by using a thicker spin-on-carbon (SOC) hardmask (-14%) or a non-chemically amplified (non-CAR) resist with a bias writing strategy (-20%). Etch transfer into the trilayer has been demonstrated while preserving pattern fidelity and profiles for both CAR and non-CAR resists. For the 45 nm node, we demonstrate electron-beam process integration within optical CMOS flows. Resists based on the KrF platform show full compatibility with multiple stacks to fit the conventional optical flow used for critical layers. Electron-beam resist performance has been optimized to meet the specifications in terms of resolution, energy latitude, LWR, and stack compatibility. The patterning process overview showing the latest achievements is mature enough to enable starting multi-beam technology pre-production.

  2. STGSTK- PREDICTING MULTISTAGE AXIAL-FLOW COMPRESSOR PERFORMANCE BY A MEANLINE STAGE-STACKING METHOD

    NASA Technical Reports Server (NTRS)

    Steinke, R. J.

    1994-01-01

    The STGSTK computer program was developed for predicting the off-design performance of multistage axial-flow compressors. The axial-flow compressor is widely used in aircraft engines. In addition to its inherent advantage of high mass flow per frontal area, it can exhibit very good aerodynamic performance. However, good aerodynamic performance over an acceptable range of operating conditions is not easily attained. STGSTK provides an analytical tool for the development of new compressor designs. The simplicity of a one-dimensional compressible flow model enables the stage-stacking method used in STGSTK to have excellent convergence properties and short computer run times. Also, the simplicity of the model makes STGSTK a manageable code that eases the incorporation, or modification, of empirical correlations directly linked to test data. Thus, the user can adapt the code to meet varying design needs. STGSTK uses a meanline stage-stacking method to predict off-design performance. Stage and cumulative compressor performance is calculated from representative meanline velocity diagrams located at rotor inlet and outlet meanline radii. STGSTK includes options for the following: 1) non-dimensional stage characteristics may be input directly or calculated from stage design performance input, 2) stage characteristics may be modified for off-design speed and blade reset, and 3) rotor design deviation angle may be modified for off-design flow, speed, and blade setting angle. Many of the code's options use correlations that are normally obtained from experimental data. The STGSTK user may modify these correlations as needed. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 85K of 8 bit bytes. STGSTK was developed in 1982.
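    Stage stacking itself is a simple recurrence: each stage's pressure and temperature ratios, read from its characteristic at the local flow coefficient, are applied to the exit conditions of the previous stage. A toy numpy illustration with made-up quadratic stage characteristics (STGSTK's correlations are far more detailed):

      import numpy as np

      GAMMA = 1.4
      CP = 1004.5                                # J/(kg K), air

      def stage_characteristic(phi):
          """Toy stage characteristic vs. flow coefficient (invented quadratics)."""
          psi = 0.6 - 0.8 * (phi - 0.5) ** 2     # work coefficient
          eff = 0.90 - 1.2 * (phi - 0.5) ** 2    # isentropic efficiency
          return psi, eff

      def stack_stages(phi, u_tip, t_in, n_stages):
          """Stack identical stages: cumulative pressure and temperature ratio."""
          pr_total, t = 1.0, t_in
          for _ in range(n_stages):
              psi, eff = stage_characteristic(phi)
              dT = psi * u_tip ** 2 / CP         # stage temperature rise
              pr = (1.0 + eff * dT / t) ** (GAMMA / (GAMMA - 1.0))
              pr_total *= pr
              t += dT                            # exit temperature feeds next stage
          return pr_total, t

      pr, t_out = stack_stages(phi=0.5, u_tip=250.0, t_in=288.15, n_stages=8)
      print(f"overall pressure ratio {pr:.2f}, exit temperature {t_out:.1f} K")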

  3. Lattice surgery on the Raussendorf lattice

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Paler, Alexandru; Devitt, Simon J.; Nori, Franco

    2018-07-01

    Lattice surgery is a method to perform quantum computation fault-tolerantly by using operations on boundary qubits between different patches of the planar code. This technique allows for universal planar code computation without eliminating the intrinsic two-dimensional nearest-neighbor properties of the surface code that eases physical hardware implementations. Lattice surgery approaches to algorithmic compilation and optimization have been demonstrated to be more resource efficient for resource-intensive components of a fault-tolerant algorithm, and consequently may be preferable over braid-based logic. Lattice surgery can be extended to the Raussendorf lattice, providing a measurement-based approach to the surface code. In this paper we describe how lattice surgery can be performed on the Raussendorf lattice and therefore give a viable alternative to computation using braiding in measurement-based implementations of topological codes.

  4. Optimizing zonal advection of the Advanced Research WRF (ARW) dynamics for Intel MIC

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Weather Research and Forecast (WRF) model is the most widely used community weather forecast and research model in the world. There are two distinct varieties of WRF. The Advanced Research WRF (ARW) is an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we use the Intel Many Integrated Core (MIC) architecture to substantially increase the performance of the zonal advection subroutine, one of the most time-consuming routines in the ARW dynamics core. Advection advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small-timestep pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture. Furthermore, lessons learned from the code optimization process are discussed. The results show that the optimizations improved performance of the original code on the Xeon Phi 5110P by a factor of 2.4x.

  5. Optimizing meridional advection of the Advanced Research WRF (ARW) dynamics for Intel Xeon Phi coprocessor

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    The most widely used community weather forecast and research model in the world is the Weather Research and Forecast (WRF) model. Two distinct varieties of WRF exist. The one we are interested in, the Advanced Research WRF (ARW), is an experimental, advanced research version featuring very high resolution. The WRF Nonhydrostatic Mesoscale Model (WRF-NMM) has been designed for forecasting operations. WRF consists of dynamics code and several physics modules. The WRF-ARW core is based on an Eulerian solver for the fully compressible nonhydrostatic equations. In this paper, we optimize a meridional (north-south direction) advection subroutine for the Intel Xeon Phi coprocessor. Advection is one of the most time-consuming routines in the ARW dynamics core. It advances the explicit perturbation horizontal momentum equations by adding in the large-timestep tendency along with the small-timestep pressure gradient tendency. We describe the challenges we met during the development of a high-speed dynamics code subroutine for the MIC architecture. Furthermore, lessons learned from the code optimization process are discussed. The results show that the optimizations improved performance of the original code on the Xeon Phi 7120P by a factor of 1.2x.

  6. Joint source-channel coding for motion-compensated DCT-based SNR scalable video.

    PubMed

    Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K

    2002-01-01

    In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.

  7. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

    A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initializing this nonlinear algorithm and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for eight speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
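    For context, the conventional open-loop analysis step fits the predictor coefficients from the segment's autocorrelation; the paper's contribution is to jointly optimize those coefficients and the quantization levels, which this baseline numpy sketch deliberately does not attempt:

      import numpy as np

      def lpc_coefficients(segment, order):
          """Open-loop LPC via the autocorrelation method (normal equations)."""
          x = segment - segment.mean()
          r = np.array([x[:len(x) - k] @ x[k:] for k in range(order + 1)])
          R = np.array([[r[abs(i - j)] for j in range(order)]
                        for i in range(order)])     # Toeplitz matrix
          return np.linalg.solve(R, r[1:])

      rng = np.random.default_rng(4)
      # Synthetic "speech" segment: a two-pole resonance driven by noise.
      n = 400
      e = rng.standard_normal(n)
      seg = np.zeros(n)
      for i in range(2, n):
          seg[i] = 1.6 * seg[i - 1] - 0.81 * seg[i - 2] + 0.1 * e[i]

      a = lpc_coefficients(seg, order=2)
      print("estimated predictor coefficients:", np.round(a, 3))  # ~ [1.6, -0.81]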

  8. Development of an MRI-compatible digital SiPM detector stack for simultaneous PET/MRI

    PubMed Central

    Düppenbecker, Peter M; Weissler, Bjoern; Gebhardt, Pierre; Schug, David; Wehner, Jakob; Marsden, Paul K; Schulz, Volkmar

    2016-01-01

    Advances in solid-state photon detectors paved the way to combine positron emission tomography (PET) and magnetic resonance imaging (MRI) into highly integrated, truly simultaneous, hybrid imaging systems. Based on the most recent digital SiPM technology, we developed an MRI-compatible PET detector stack, intended as a building block for next generation simultaneous PET/MRI systems. Our detector stack comprises an array of 8 × 8 digital SiPM channels with 4 mm pitch using Philips Digital Photon Counting DPC 3200-22 devices, an FPGA for data acquisition, a supply voltage control system and a cooling infrastructure. This is the first detector design that allows the operation of digital SiPMs simultaneously inside an MRI system. We tested and optimized the MRI-compatibility of our detector stack on a laboratory test bench as well as in combination with a Philips Achieva 3 T MRI system. Our design clearly reduces distortions of the static magnetic field compared to a conventional design. The MRI static magnetic field causes weak and directional drift effects on voltage regulators, but has no direct impact on detector performance. MRI gradient switching initially degraded energy and timing resolution. Both distortions could be ascribed to voltage variations induced on the bias and the FPGA core voltage supply respectively. Based on these findings, we improved our detector design and our final design shows virtually no energy or timing degradations, even during heavy and continuous MRI gradient switching. In particular, we found no evidence that the performance of the DPC 3200-22 digital SiPM itself is degraded by the MRI system. PMID:28458919

  9. Two field trials for deblending of simultaneous source surveys: Why we failed and why we succeeded?

    NASA Astrophysics Data System (ADS)

    Zu, Shaohuan; Zhou, Hui; Chen, Haolin; Zheng, Hao; Chen, Yangkang

    2017-08-01

    Currently, deblending is the main strategy for dealing with the intense interference problem of simultaneous-source data. Most deblending methods are based on the property that the useful signal is coherent while the interference is incoherent in some domain other than the common shot domain. In this paper, two simultaneous-source field trials are studied in detail. In the first trial, the simultaneous-source survey was not optimal, as the dithering code had strong coherency and the minimum distance between the two vessels was small. The chosen marine shot scheduling and vessel deployment made it difficult to deblend the simultaneous-source data, and the result was an unexpected failure. Next, we tested different parameters of the simultaneous-source survey (the dithering code and the minimum distance between vessels) using simulated blended data and gained some useful insights. We then carried out the second field trial with a carefully designed survey that differed substantially from the first. The deblended results in the common receiver gather, the common shot gather, and the final stacked profile were encouraging. We obtained a complete success in the second field trial, which gives us confidence for further tests (such as a full three-dimensional acquisition test or a high-resolution acquisition test with denser spatial sampling). Since failures with simultaneous sourcing are seldom reported, our contribution in this paper is a detailed discussion of both our failed and our successful field experiments and the lessons we have learned from them, in the hope that the experience gained from this study will be useful to other researchers in the field.

  10. Targeting multiple heterogeneous hardware platforms with OpenCL

    NASA Astrophysics Data System (ADS)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.

  11. Measurements of the LHCb software stack on the ARM architecture

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Couturier, Ben; Clemencic, Marco; Neufeld, Niko

    2014-06-01

    The ARM architecture is a power-efficient design used in most processors in mobile devices around the world today, since it provides reasonable compute performance per watt. The current LHCb software stack is designed (and thus expected) to build and run on machines with the x86/x86_64 architecture. This paper outlines the process of measuring the performance of the LHCb software stack on the ARM architecture - specifically, the ARMv7 architecture on Cortex-A9 processors from NVIDIA and on full-fledged ARM servers with chipsets from Calxeda - and makes comparisons with the performance on x86_64 architectures on the Intel Xeon L5520/X5650 and AMD Opteron 6272. The paper emphasises performance per core with respect to the power drawn by the compute nodes for the given performance - this ensures a fair real-world comparison with much more 'powerful' Intel/AMD processors. The comparisons of these real workloads in the context of LHCb are also complemented with the standard synthetic benchmarks HEPSPEC and Coremark. The pitfalls and solutions for the non-trivial task of porting the source code to build for the ARMv7 instruction set are presented. The specific changes in the build process needed for ARM-specific portions of the software stack are described, to serve as pointers for further attempts taken up by other groups in this direction. Cases where architecture-specific tweaks at the assembler level (both in ROOT and the LHCb software stack) were needed for a successful compile are detailed - these cases are good indicators of where and how the software stack as well as the build system can be made more portable and multi-arch friendly. The experience gained from the tasks described in this paper is intended to i) assist in making an informed choice about ARM-based server solutions as a feasible low-power alternative to the current compute nodes, and ii) help revisit the software design and build system for portability and generic improvements.

  12. Demonstration of a full volume 3D pre-stack depth migration in the Garden Banks area using massively parallel processor (MPP) technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solano, M.; Chang, H.; VanDyke, J.

    1996-12-31

    This paper describes the implementation and results of portable, production-scale 3D pre-stack Kirchhoff depth migration software. Full-volume pre-stack imaging was applied to a six-million-trace (46.9 gigabyte) data set from a subsalt play in the Garden Banks area in the Gulf of Mexico. The velocity model building and updating were driven by image depth gathers and an image-driven strategy. After three velocity iterations, depth-migrated sections revealed drilling targets that were not visible in the conventional 3D post-stack time-migrated data set. As expected from the implementation of the migration algorithm, it was found that amplitudes are well preserved and anomalies associated with known reservoirs conform to petrophysical predictions. Image gathers for velocity analysis and the final depth-migrated volume were generated on an 1824-node Intel Paragon at Sandia National Laboratories. The code has been successfully ported to a CRAY T3D and to Unix-workstation Parallel Virtual Machine (PVM) environments.

  13. Energy-Efficient Next-Generation Passive Optical Networks Based on Sleep Mode and Heuristic Optimization

    NASA Astrophysics Data System (ADS)

    Zulai, Luis G. T.; Durand, Fábio R.; Abrão, Taufik

    2015-05-01

    In this article, an energy-efficiency mechanism for next-generation passive optical networks (PONs) is investigated through heuristic particle swarm optimization. The next-generation PONs considered here combine 10-gigabit Ethernet with wavelength division multiplexing and optical code division multiplexing (OCDM); they build on a legacy 10-gigabit Ethernet PON and have the advantage of using only a single en/decoder pair of OCDM technology, eliminating the en/decoder at each optical network unit. The proposed joint mechanism is based on the sleep-mode power-saving scheme for a 10-gigabit Ethernet PON, combined with a power control procedure that adjusts the transmitted power of the active optical network units while maximizing the overall network energy efficiency. The particle swarm optimization based power control algorithm establishes the optimal transmitted power for each optical network unit according to the network's pre-defined quality of service requirements. The objective is to control the power consumption of the optical network unit according to the traffic demand by adjusting its transmitter power, maximizing the number of transmitted bits with minimum energy consumption and thus achieving maximal system energy efficiency. Numerical results reveal that it is possible to save 75% of energy consumption with the proposed particle swarm optimization based sleep-mode energy-efficiency mechanism, compared to 55% energy savings when just a sleep-mode-based mechanism is deployed.

  14. Modeling and optimization of the air system in polymer exchange membrane fuel cell systems

    NASA Astrophysics Data System (ADS)

    Bao, Cheng; Ouyang, Minggao; Yi, Baolian

    The stack and the air system are the two most important components of a fuel cell system (FCS), and it is meaningful to study their properties and the trade-off between them. In this paper, a modified one-dimensional steady-state analytical fuel cell model is used. The logarithmic mean of the inlet and outlet oxygen partial pressures is adopted to avoid underestimating the effect of air stoichiometry, and a pressure drop model for the grid-distributed flow field is included in the stack analysis. Combined with coordinate-change preprocessing and an analog technique, a neural network is used to fit the performance maps of the compressor and turbine in the air system. Three air system topologies are analyzed in this article: a pure screw compressor, a serial booster, and an exhaust expander. A real-coded genetic algorithm is programmed to obtain the globally optimal air stoichiometric ratio and cathode outlet pressure. It is shown that the serial booster and the exhaust expander, with the help of exhaust energy recycling, can improve FCS efficiency by more than 3% compared to the pure screw compressor. As the net power increases, the optimal cathode outlet pressure keeps rising and the air stoichiometry follows a concave trajectory. The working zone of the proportional valve is also discussed. This work is helpful for the design of the air system in a fuel cell system, and the steady-state optimum can also be used in dynamic control.
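    A minimal real-coded genetic algorithm of the kind described, here maximizing a toy two-variable efficiency surface standing in for the FCS model (blend crossover and Gaussian mutation; all constants and the toy objective are illustrative assumptions):

      import numpy as np

      def real_coded_ga(fitness, bounds, pop_size=40, gens=80, seed=0):
          """Minimal real-coded GA: tournament selection, blend crossover,
          Gaussian mutation clipped to the search box."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          pop = rng.uniform(lo, hi, (pop_size, len(bounds)))
          for _ in range(gens):
              fit = np.apply_along_axis(fitness, 1, pop)
              # Tournament selection of parents.
              i, j = rng.integers(0, pop_size, (2, pop_size))
              parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
              # Blend (BLX-alpha-like) crossover between consecutive parents.
              alpha = rng.uniform(-0.25, 1.25, parents.shape)
              children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
              # Gaussian mutation, clipped to the search box.
              children += 0.02 * (hi - lo) * rng.standard_normal(children.shape)
              pop = np.clip(children, lo, hi)
          fit = np.apply_along_axis(fitness, 1, pop)
          return pop[np.argmax(fit)], fit.max()

      # Toy stand-in for net FCS efficiency vs. (air stoichiometry, outlet
      # pressure): a smooth surface peaking at moderate values of both.
      def efficiency(x):
          stoich, p_out = x
          return -((stoich - 2.0) ** 2 + 0.5 * (p_out - 1.8) ** 2)

      best, best_f = real_coded_ga(efficiency, bounds=[(1.2, 4.0), (1.0, 3.0)])
      print("optimum (stoichiometry, pressure):", np.round(best, 2))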

  15. Biomass-to-electricity: analysis and optimization of the complete pathway steam explosion--enzymatic hydrolysis--anaerobic digestion with ICE vs SOFC as biogas users.

    PubMed

    Santarelli, M; Barra, S; Sagnelli, F; Zitella, P

    2012-11-01

    The paper deals with the energy analysis and optimization of a complete biomass-to-electricity energy pathway, from raw biomass to the production of renewable electricity. The first step (biomass-to-biogas) is based on a real pilot plant located at Environment Park S.p.A. (Torino, Italy) with three main stages ((1) impregnation; (2) steam explosion; (3) enzymatic hydrolysis), completed by a two-step anaerobic fermentation. For the second step (biogas-to-electricity), the paper considers two technologies: internal combustion engines (ICE) and a stack of solid oxide fuel cells (SOFC). First, the complete pathway was modeled and validated against experimental data. The model was then used for an analysis and optimization of the complete thermo-chemical and biological process, with the objective of maximizing the energy balance at minimum consumption. The comparison between ICE and SOFC shows the better performance of the integrated plants based on SOFC. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Variable Coded Modulation software simulation

    NASA Astrophysics Data System (ADS)

    Sielicki, Thomas A.; Hamkins, Jon; Thorsen, Denise

    This paper reports on the design and performance of a new Variable Coded Modulation (VCM) system. The VCM system comprises eight of NASA's recommended codes from the Consultative Committee for Space Data Systems (CCSDS) standards, including four turbo and four AR4JA/C2 low-density parity-check codes, together with six modulation types (BPSK, QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK). The signaling protocol for the transmission mode is based on a CCSDS recommendation. The coded modulation may be chosen dynamically, block to block, to optimize throughput.
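
    The selection rule at the heart of VCM can be sketched as follows; the mode table and Es/N0 thresholds below are placeholders, not CCSDS link-budget values:

    ```python
    # Illustrative variable coded modulation (VCM) mode table and selection rule:
    # per block, pick the highest-throughput (code rate x bits/symbol) mode whose
    # required Es/N0 is at or below the current channel estimate.
    MODES = [  # (name, code rate, bits/symbol, required Es/N0 [dB]; all assumed)
        ("BPSK  r=1/2",   0.50, 1, 1.0),
        ("QPSK  r=1/2",   0.50, 2, 4.0),
        ("QPSK  r=4/5",   0.80, 2, 7.0),
        ("8-PSK r=4/5",   0.80, 3, 11.0),
        ("16-APSK r=4/5", 0.80, 4, 14.0),
    ]

    def select_mode(esn0_db):
        feasible = [m for m in MODES if m[3] <= esn0_db]
        # fall back to the most robust mode if the channel is very poor
        return max(feasible, key=lambda m: m[1] * m[2]) if feasible else MODES[0]

    for esn0 in (2.0, 8.0, 15.0):
        name, r, b, _ = select_mode(esn0)
        print(f"Es/N0 = {esn0:4.1f} dB -> {name} ({r * b:.2f} info bits/symbol)")
    ```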

  17. Image-Based Single Cell Profiling: High-Throughput Processing of Mother Machine Experiments

    PubMed Central

    Sachs, Christian Carsten; Grünberger, Alexander; Helfrich, Stefan; Probst, Christopher; Wiechert, Wolfgang; Kohlheyer, Dietrich; Nöh, Katharina

    2016-01-01

    Background: Microfluidic lab-on-chip technology combined with live-cell imaging has enabled the observation of single cells in their spatio-temporal context. The mother machine (MM) cultivation system is particularly attractive for the long-term investigation of rod-shaped bacteria, since it facilitates the continuous cultivation and observation of individual cells over many generations in a highly parallelized manner. To date, the lack of fully automated image analysis software limits the practical applicability of the MM as a phenotypic screening tool. Results: We present an image analysis pipeline for the automated processing of MM time-lapse image stacks. The pipeline supports all analysis steps, i.e., image registration, orientation correction, channel/cell detection, cell tracking, and result visualization. Tailored algorithms account for the specialized MM layout to enable a robust automated analysis. Image data generated in a two-day growth study (≈ 90 GB) are analyzed in ≈ 30 min with negligible differences in growth rate between automated and manual evaluation. The proposed methods are implemented in the software molyso (MOther machine AnaLYsis SOftware), which provides a new profiling tool for the unbiased analysis of hitherto inaccessible large-scale MM image stacks. Conclusion: Presented is the software molyso, a ready-to-use open-source software (BSD-licensed) for the unsupervised analysis of MM time-lapse image stacks. molyso source code and user manual are available at https://github.com/modsim/molyso. PMID:27661996

  18. From 1D to 3D: Tunable Sub-10 nm Gaps in Large Area Devices.

    PubMed

    Zhou, Ziwei; Zhao, Zhiyuan; Yu, Ye; Ai, Bin; Möhwald, Helmuth; Chiechi, Ryan C; Yang, Joel K W; Zhang, Gang

    2016-04-20

    Tunable sub-10 nm 1D nanogaps are fabricated by nanoskiving. The electric field in nanogaps of different sizes is investigated theoretically and experimentally, yielding a nonmonotonic dependence and an optimized gap width (5 nm). 2D nanogap arrays are fabricated, in combination with surface patterning techniques, to pack the gaps more densely. Innovatively, 3D multistory nanogaps are built via a stacking procedure, providing higher integration and a much improved electric field. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Optical trapping performance of dielectric-metallic patchy particles

    PubMed Central

    Lawson, Joseph L.; Jenness, Nathan J.; Clark, Robert L.

    2015-01-01

    We demonstrate a series of simulation experiments examining the optical trapping behavior of composite micro-particles consisting of a small metallic patch on a spherical dielectric bead. A full parameter space of patch shapes, based on current state-of-the-art manufacturing techniques, and of optical properties of the metallic film stack is examined. Stable trapping locations and optical trap stiffness are determined as functions of the particle design, and potential particle design optimizations are discussed. A final test examines the ability to incorporate these composite particles with standard optical trap metrology technologies. PMID:26832054

  20. Scalable Rapidly Deployable Convex Optimization for Data Analytics

    DTIC Science & Technology

    Over the period of the contract we have developed the full stack for wide use of convex optimization, in machine learning and many other areas. CVXPY supports SOCPs, SDPs, exponential cone programs, and power cone programs, as well as basic methods for distributed optimization on multiple heterogeneous platforms. We have also done basic research in various application areas, using CVXPY, to demonstrate its usefulness. See attached report for publication information.
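
    For reference, a small second-order cone program in CVXPY's actual modeling syntax (the data here are random placeholders):

    ```python
    # A small second-order cone program (SOCP) in CVXPY, the kind of problem
    # the report says the stack supports; the data are random placeholders.
    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    A = np.random.randn(20, 5)
    b = np.random.randn(20)

    x = cp.Variable(5)
    # Robust least squares: minimize ||Ax - b||_2 subject to a norm ball on x.
    prob = cp.Problem(cp.Minimize(cp.norm(A @ x - b, 2)),
                      [cp.norm(x, 2) <= 1])
    prob.solve()
    print("status:", prob.status, " optimal value: %.4f" % prob.value)
    ```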

  1. Liquid-phase-deposited siloxane-based capping layers for silicon solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veith-Wolf, Boris; Wang, Jianhui; Hannu-Kuure, Milja

    2015-02-02

    We apply non-vacuum processing to deposit dielectric capping layers on top of ultrathin atomic-layer-deposited aluminum oxide (AlOx) films, used for the rear surface passivation of high-efficiency crystalline silicon solar cells. We examine various siloxane-based liquid-phase-deposited (LPD) materials. Our optimized AlOx/LPD stacks show an excellent thermal and chemical stability against aluminum metal paste, as demonstrated by measured surface recombination velocities below 10 cm/s on 1.3 Ωcm p-type silicon wafers after firing in a belt-line furnace with screen-printed aluminum paste on top. Implementation of the optimized LPD layers into an industrial-type screen-printing solar cell process results in energy conversion efficiencies of up to 19.8% on p-type Czochralski silicon.

  2. Atomistic Simulations of Surface Cross-Slip Nucleation in Face-Centered Cubic Nickel and Copper (Postprint)

    DTIC Science & Technology

    2013-02-15

    ... molecular dynamics code, LAMMPS [9], developed at Sandia National Laboratory. The simulation cell is a rectangular parallelepiped, with the z-axis ... with assigned energies within LAMMPS of greater than 4.42 eV (Ni) or 3.52 eV (Cu) (the energy of atoms in the stacking fault region), the partial ... molecular dynamics code LAMMPS, which was developed at Sandia National Laboratory by Dr. Steve Plimpton and co-workers. This work was supported by the ...

  3. Design and operation of interconnectors for solid oxide fuel cell stacks

    NASA Astrophysics Data System (ADS)

    Winkler, W.; Koeppen, J.

    Highly efficient combined cycles with solid oxide fuel cells (SOFC) need an integrated heat exchanger in the stack to reach efficiencies of about 80%, and the stack costs must be lower than 1000 DM/kW. A newly developed welded metallic (Haynes HA 230) interconnector with a free-stretching planar SOFC and an integrated heat exchanger was tested in thermal cycling operation. The design allowed cycling of the SOFC without mechanical damage to the electrolyte in several tests; however, more tests and further design optimization will be necessary. These results could indicate that commercial high-temperature alloys can be used as interconnector material in order to fulfil the cost requirements.

  4. FocusStack and StimServer: a new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data.

    PubMed

    Muir, Dylan R; Kampa, Björn M

    2014-01-01

    Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single-cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply an increasing cost of the computing hardware required for in-memory analysis. Here we present a MATLAB toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with a minimal memory footprint. We also present a MATLAB toolbox, StimServer, for the generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection, and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly accessible source-code repositories.

  5. FocusStack and StimServer: a new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data

    PubMed Central

    Muir, Dylan R.; Kampa, Björn M.

    2015-01-01

    Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single-cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply an increasing cost of the computing hardware required for in-memory analysis. Here we present a MATLAB toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with a minimal memory footprint. We also present a MATLAB toolbox, StimServer, for the generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection, and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly accessible source-code repositories. PMID:25653614

  6. Experimental and numerical analysis of interlocking rib formation at sheet metal blanking

    NASA Astrophysics Data System (ADS)

    Bolka, Špela; Bratuš, Vitoslav; Starman, Bojan; Mole, Nikolaj

    2018-05-01

    Cores for electrical motors are typically produced by blanking laminations and then stacking them together with, for instance, interlocking ribs or welding. Strict geometrical tolerances, both on the lamination and on the stack, combined with complex part geometry and harder steel strip material, call for the use of predictive methods to optimize the process before actual blanking, to reduce costs and speed up the process. One of the major influences on the final stack geometry is the quality of the interlocking ribs. A rib is formed in one step and joined with the rib of the preceding lamination in the next. The quality of the joint determines the firmness of the stack and also influences its final geometry; geometrical and positional accuracy is thus crucial in the rib formation process. In this study, a comprehensive experimental and numerical analysis of interlocking rib formation has been performed. The aim of the analysis is to numerically predict the shape of the rib in order to perform a numerical simulation of the stack formation in the next step of the process. Detailed experimental research has been performed to characterize the parameters influencing rib formation and the geometry of the ribs themselves, using classical and 3D laser microscopy. The formation of the interlocking rib is then simulated using Abaqus/Explicit. The Hill 48 constitutive material model is based on an extensive and novel material characterization process, combining data from in-plane and out-of-plane material tests to perform a 3D analysis of both rib formation and rib joining. The study shows good correlation between the experimental and numerical results.

  7. Designing Thin, Ultrastretchable Electronics with Stacked Circuits and Elastomeric Encapsulation Materials.

    PubMed

    Xu, Renxiao; Lee, Jung Woo; Pan, Taisong; Ma, Siyi; Wang, Jiayi; Han, June Hyun; Ma, Yinji; Rogers, John A; Huang, Yonggang

    2017-01-26

    Many recently developed soft, skin-like electronics with high performance circuits and low modulus encapsulation materials can accommodate large bending, stretching, and twisting deformations. Their compliant mechanics also allows for intimate, nonintrusive integration to the curvilinear surfaces of soft biological tissues. By introducing a stacked circuit construct, the functional density of these systems can be greatly improved, yet their desirable mechanics may be compromised due to the increased overall thickness. To address this issue, the results presented here establish design guidelines for optimizing the deformable properties of stretchable electronics with stacked circuit layers. The effects of three contributing factors (i.e., the silicone inter-layer, the composite encapsulation, and the deformable interconnects) on the stretchability of a multilayer system are explored in detail via combined experimental observation, finite element modeling, and theoretical analysis. Finally, an electronic module with optimized design is demonstrated. This highly deformable system can be repetitively folded, twisted, or stretched without observable influences to its electrical functionality. The ultrasoft, thin nature of the module makes it suitable for conformal biointegration.

  8. Designing Thin, Ultrastretchable Electronics with Stacked Circuits and Elastomeric Encapsulation Materials

    PubMed Central

    Xu, Renxiao; Lee, Jung Woo; Pan, Taisong; Ma, Siyi; Wang, Jiayi; Han, June Hyun; Ma, Yinji

    2017-01-01

    Many recently developed soft, skin-like electronics with high performance circuits and low modulus encapsulation materials can accommodate large bending, stretching, and twisting deformations. Their compliant mechanics also allows for intimate, nonintrusive integration to the curvilinear surfaces of soft biological tissues. By introducing a stacked circuit construct, the functional density of these systems can be greatly improved, yet their desirable mechanics may be compromised due to the increased overall thickness. To address this issue, the results presented here establish design guidelines for optimizing the deformable properties of stretchable electronics with stacked circuit layers. The effects of three contributing factors (i.e., the silicone inter-layer, the composite encapsulation, and the deformable interconnects) on the stretchability of a multilayer system are explored in detail via combined experimental observation, finite element modeling, and theoretical analysis. Finally, an electronic module with optimized design is demonstrated. This highly deformable system can be repetitively folded, twisted, or stretched without observable influences to its electrical functionality. The ultrasoft, thin nature of the module makes it suitable for conformal biointegration. PMID:29046624

  9. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    NASA Astrophysics Data System (ADS)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e., the received quality of the video transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes require a lower source coding rate, so they are able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding means that such nodes can transmit at lower power, which both increases battery life and reduces interference to other nodes. Two optimization criteria are considered: one minimizes the average video distortion across the nodes, and the other minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. The resulting optimization problem thus lies in the field of mixed-integer optimization and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate, and channel coding rate for the nodes of the visual sensor network.

  10. Operational Research: Evaluating Multimodel Implementations for 24/7 Runtime Environments

    NASA Astrophysics Data System (ADS)

    Burkhart, J. F.; Helset, S.; Abdella, Y. S.; Lappegard, G.

    2016-12-01

    We present a new open source framework for operational hydrologic rainfall-runoff modeling. The Statkraft Hydrologic Forecasting Toolbox (Shyft) is unique among existing frameworks in that its two primary goals are to provide: i) modern, professionally developed source code, and ii) a platform that is robust and ready for operational deployment. Developed jointly by Statkraft AS and the University of Oslo, the framework is currently in operation in both private and academic environments. The hydrology presently available in the distribution is simple and proven. Shyft provides a platform for distributed hydrologic modeling in a highly efficient manner. In its current operational deployment at Statkraft, Shyft is used to provide daily 10-day forecasts for critical reservoirs. In a research setting, we have developed a novel implementation of the SNICAR model to assess the impact of aerosol deposition on snow packs. Several well-known rainfall-runoff algorithms are available for use, allowing intercomparison of different approaches based on the available data and the geographical environment. The well-known HBV model is a default option, and other routines with more localized methods for handling snow and evapotranspiration, or simplifications of catchment-scale processes, are included. For the latter, we have implemented the Kirchner response routine. Being developed in Norway, a variety of snow-melt routines, from simplified degree-day models to more advanced energy balance models, may be selected. Ensemble forecasts, multi-model implementations, and statistical post-processing routines enable a robust toolbox for investigating optimal model configurations in an operational setting. The Shyft core is written in modern templated C++ with Python wrappers for easy access to module sub-routines. The code is developed such that the modules that make up a "method stack" are easy to modify and customize, allowing one to create new methods and test them rapidly. Due to the simple architecture and ease of access to the module routines, we see Shyft as an optimal choice for evaluating new hydrologic routines in an environment requiring robust, professionally developed software, and we welcome further community participation.

  11. Mathematical modeling of sample stacking methods in microfluidic systems

    NASA Astrophysics Data System (ADS)

    Horek, Jon

    Gradient focusing methods are a general class of experimental techniques used to simultaneously separate and increase the cross-sectionally averaged concentration of charged particle mixtures. In comparison, Field Amplified Sample Stacking (FASS) techniques first concentrate the collection of molecules before separating them. Together, we denote gradient focusing and FASS methods "sample stacking" and study the dynamics of a specific method, Temperature Gradient Focusing (TGF), in which an axial temperature gradient is applied along a channel filled with weak buffer. Gradients in electroosmotic fluid flow and electrophoretic species velocity create the simultaneous separating and concentrating mechanism mentioned above. In this thesis, we begin with the observation that very little has been done to model the dynamics of gradient focusing, and proceed to solve the fundamental equations of fluid mechanics and scalar transport, assuming the existence of slow axial variations and the Taylor-Aris dispersion coefficient. In doing so, asymptotic methods reduce the equations from 3D to 1D, and we arrive at a simple 1D model which can be used to predict the transient evolution of the cross-sectionally averaged analyte concentration. In the second half of this thesis, we run several numerical focusing experiments with a 3D finite volume code. Comparison of the 1D theory and 3D simulations illustrates not only that the asymptotic theory converges as a certain parameter tends to zero, but also that fairly large axial slip velocity gradients lead to quite small errors in predicted steady variance. Additionally, we observe that the axial asymmetry of the electrophoretic velocity model leads to asymmetric peak shapes, a violation of the symmetric Gaussians predicted by the 1D theory. We conclude with some observations on the effect of Peclet number and gradient strength on the performance of focusing experiments, and describe a method for experimental optimization. Such knowledge is useful for design of lab-on-a-chip devices.
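
    The 1D model described here plausibly takes the generic form of a cross-sectionally averaged advection-diffusion equation with a Taylor-Aris effective dispersion coefficient; the exact coefficients in the thesis may differ:

    ```latex
    % Generic 1D cross-sectionally averaged transport equation with a
    % Taylor-Aris effective dispersion coefficient D_eff; u_eo and u_ep are
    % the axially varying electroosmotic and electrophoretic velocities,
    % and Pe is the Peclet number. The thesis's exact coefficients may differ.
    \frac{\partial \bar{c}}{\partial t}
      + \frac{\partial}{\partial x}\!\left[\big(u_{\mathrm{eo}}(x) + u_{\mathrm{ep}}(x)\big)\,\bar{c}\right]
      = \frac{\partial}{\partial x}\!\left[D_{\mathrm{eff}}(x)\,\frac{\partial \bar{c}}{\partial x}\right],
    \qquad
    D_{\mathrm{eff}} = D\left(1 + \alpha\,\mathrm{Pe}^{2}\right)
    ```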

  12. A Clustering-Based Approach to Enriching Code Foraging Environment.

    PubMed

    Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu

    2016-09-01

    Developers often spend valuable time navigating and seeking relevant code during software maintenance. Currently, there is a lack of theoretical foundations to guide tool design and evaluation for best shaping the code base for developers. This paper contributes a unified code navigation theory in light of optimal food-foraging principles. We further develop a novel framework for automatically assessing foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developers' behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools.

  13. Mechanics of Platelet-Matrix Composites across Scales: Theory, Multiscale Modeling, and 3D Fabrication

    NASA Astrophysics Data System (ADS)

    Sakhavand, Navid

    Many natural and biomimetic composites - such as nacre, silk, and clay-polymer - exhibit a remarkable balance of strength, toughness, and/or stiffness, which calls for a universal measure to quantify this outstanding feature given the platelet-matrix structure and the material characteristics of the constituents. Analogously, there is an urgent need to quantify the mechanics of emerging electronic and photonic systems such as stacked heterostructures, which are composed of strong in-plane bonding networks but weak interplanar bonding matrices. In this regard, the development of a universal composition-structure-property map for natural platelet-matrix composites and stacked heterostructures opens new doors for designing materials with superior mechanical performance. In this dissertation, a multiscale bottom-up approach is adopted to analyze and predict the mechanical properties of platelet-matrix composites. Design guidelines are provided by developing universally valid (across different length scales) diagrams for the science-based engineering of numerous natural and synthetic platelet-matrix composites and stacked heterostructures, significantly broadening the spectrum of strategies for fabricating new composites with specific and optimized mechanical properties. First, molecular dynamics simulations are utilized to unravel the fundamental underlying physics and chemistry of the binding nature at the atomic-level interface of organic-inorganic composites. Polymer-cementitious composites are considered as case studies to understand the bonding mechanism at the nanoscale and open up new venues for potential mechanical enhancement at the macro-scale. Next, mathematical derivations based on elasticity and plasticity theories are presented to describe the pre-crack (intrinsic) mechanical performance of platelet-matrix composites at the microscale. These derivations lead to a unified framework for constructing a series of universal composition-structure-property maps that decode the interplay between the various geometries and inherent material features, encapsulated in a few dimensionless parameters. Finally, the post-crack (extrinsic) mechanical behavior of platelet-matrix composites up to ultimate failure of the material at the macroscale is investigated via combinatorial finite element simulations. The effect of the different composition-structure-property parameters on mechanical property synergies is depicted via 2D and 3D maps. 3D-printed specimens are fabricated and tested against the theoretical predictions. The combination of the presented diagrams and guidelines paves the path toward platelet-matrix composites and stacked heterostructures with superior and optimized mechanical properties.

  14. Towards radiation hard converter material for SiC-based fast neutron detectors

    NASA Astrophysics Data System (ADS)

    Tripathi, S.; Upadhyay, C.; Nagaraj, C. P.; Venkatesan, A.; Devan, K.

    2018-05-01

    In the present work, Geant4 Monte-Carlo simulations have been carried out to study the neutron detection efficiency of various neutron-to-charged-particle (recoil proton) converter materials. The converter material is placed over silicon carbide (SiC) in fast neutron detectors (FNDs) to achieve higher neutron detection efficiency compared to bare SiC FNDs. A hydrogenous converter material such as high-density polyethylene (HDPE) is preferred over other converter materials by virtue of its high elastic scattering cross-section for fast neutron detection at room temperature. Upon interaction with fast neutrons, the hydrogenous converter material generates recoil protons, which liberate electron-hole pairs in the active region of the SiC detector to provide a detector signal. The neutron detection efficiency offered by the HDPE converter is compared with several other hydrogenous materials, viz. lithium hydride (LiH), perylene, and PTCDA. It is found that HDPE, though providing the highest efficiency among the studied materials, cannot withstand high temperature and harsh radiation environments. On the other hand, perylene and PTCDA can sustain harsh environments but yield low efficiency. The analysis reveals that LiH is a better material for neutron-to-charged-particle conversion, with competitive efficiency and the desired radiation hardness. Further, the thickness of LiH has been optimized for various mono-energetic neutron beams and an Am-Be neutron source generating a neutron fluence of 10⁹ neutrons/cm². The optimized thickness of the LiH converter for fast neutron detection is found to be ~500 μm. However, the estimated efficiency for fast neutron detection is only 0.1%, which is inadequate for reliable detection of neutrons. A sensitivity study has also been performed, investigating the effect of the gamma background on the neutron detection efficiency for various energy thresholds of the Low-Level Discriminator (LLD). A stacked detector concept has been explored, juxtaposing several converter-detector layers to improve the efficiency of LiH-SiC-based FNDs. An approximately tenfold efficiency improvement is achieved: 0.93% for a ten-layer stacked configuration vis-à-vis 0.1% for a single converter-detector layer. Finally, stacked detectors have also been simulated for different converter thicknesses, attaining efficiencies as high as ~3.25% with 50 stacked layers.
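
    A back-of-the-envelope check of the reported stacking gain, assuming (simplistically) that each converter-detector layer detects independently with the single-layer efficiency:

    ```python
    # Sanity check on the stacking gain reported in the abstract, assuming each
    # converter-detector layer detects independently with the single-layer
    # efficiency (a simplifying assumption, not the Geant4 model).
    eps_single = 0.001                       # 0.1% single-layer efficiency
    for n in (1, 10, 50):
        eps_stack = 1.0 - (1.0 - eps_single) ** n
        print(f"{n:2d} layers -> {100 * eps_stack:.2f}%")
    # 10 layers gives ~1.0%, consistent in magnitude with the reported 0.93%;
    # 50 layers gives ~4.9%, somewhat above the reported ~3.25%, reflecting
    # attenuation and geometry effects captured only in the full simulation.
    ```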

  15. Expansion of the Genetic Alphabet: A Chemist's Approach to Synthetic Biology.

    PubMed

    Feldman, Aaron W; Romesberg, Floyd E

    2018-02-20

    The information available to any organism is encoded in a four nucleotide, two base pair genetic code. Since its earliest days, the field of synthetic biology has endeavored to impart organisms with novel attributes and functions, and perhaps the most fundamental approach to this goal is the creation of a fifth and sixth nucleotide that pair to form a third, unnatural base pair (UBP) and thus allow for the storage and retrieval of increased information. Achieving this goal, by definition, requires synthetic chemistry to create unnatural nucleotides and a medicinal chemistry-like approach to guide their optimization. With this perspective, almost 20 years ago we began designing unnatural nucleotides with the ultimate goal of developing UBPs that function in vivo, and thus serve as the foundation of semi-synthetic organisms (SSOs) capable of storing and retrieving increased information. From the beginning, our efforts focused on the development of nucleotides that bear predominantly hydrophobic nucleobases and thus that pair not based on the complementary hydrogen bonds that are so prominent among the natural base pairs but rather via hydrophobic and packing interactions. It was envisioned that such a pairing mechanism would provide a basal level of selectivity against pairing with natural nucleotides, which we expected would be the greatest challenge; however, this choice mandated starting with analogs that have little or no homology to their natural counterparts and that, perhaps not surprisingly, performed poorly. Progress toward their optimization was driven by the construction of structure-activity relationships, initially from in vitro steady-state kinetic analysis, then later from pre-steady-state and PCR-based assays, and ultimately from performance in vivo, with the results augmented three times with screens that explored combinations of the unnatural nucleotides that were too numerous to fully characterize individually. The structure-activity relationship data identified multiple features required by the UBP, and perhaps most prominent among them was a substituent ortho to the glycosidic linkage that is capable of both hydrophobic packing and hydrogen bonding, and nucleobases that stably stack with flanking natural nucleobases in lieu of the potentially more stabilizing stacking interactions afforded by cross strand intercalation. Most importantly, after the examination of hundreds of unnatural nucleotides and thousands of candidate UBPs, the efforts ultimately resulted in the identification of a family of UBPs that are well recognized by DNA polymerases when incorporated into DNA and that have been used to create SSOs that store and retrieve increased information. In addition to achieving a longstanding goal of synthetic biology, the results have important implications for our understanding of both the molecules and forces that can underlie biological processes, so long considered the purview of molecules benefiting from eons of evolution, and highlight the promise of applying the approaches and methodologies of synthetic and medical chemistry in the pursuit of synthetic biology.

  16. Study of information transfer optimization for communication satellites

    NASA Technical Reports Server (NTRS)

    Odenwalder, J. P.; Viterbi, A. J.; Jacobs, I. M.; Heller, J. A.

    1973-01-01

    The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.

  17. Investigation of Ruthenium Dissolution in Advanced Membrane Electrode Assemblies for Direct Methanol Based Fuel Cells Stacks

    NASA Technical Reports Server (NTRS)

    Valdez, T. I.; Firdosy, S.; Koel, B. E.; Narayanan, S. R.

    2005-01-01

    This viewgraph presentation gives a detailed review of a Direct Methanol Based Fuel Cell (DMFC) stack and investigates the ruthenium found at the exit of the stack. The topics include: 1) Motivation; 2) Pathways for Cell Degradation; 3) Cell Duration Testing; 4) Duration Testing, MEA Analysis; and 5) Stack Degradation Analysis.

  18. Circular codes revisited: a statistical approach.

    PubMed

    Gonzalez, D L; Giannerini, S; Rosa, R

    2011-04-21

    In 1996 Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has provoked great interest and undergone rapid development. In this paper we discuss some theoretical issues related to the synchronization properties of coding sequences and circular codes, with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach to answer several questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C(3) maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability, but there still exists great variability among sequences. Second, we focus on this code and explore the role played by the proportion of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored so as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes to reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. A 1 kWel thermoelectric stack for geothermal power generation - Modeling and geometrical optimization

    NASA Astrophysics Data System (ADS)

    Suter, C.; Jovanovic, Z.; Steinfeld, A.

    2012-06-01

    A thermoelectric stack composed of arrays of Bi-Te alloy thermoelectric converter (TEC) modules is considered for geothermal heat conversion. The TEC modules consist of Al2O3 plates with a surface of 30×30 mm² and 127 p-type (Bi0.2Sb0.8)2Te3 and n-type Bi2(Te0.96Se0.04)3 thermoelement pairs, each having a cross-section of 1.05×1.05 mm², a figure-of-merit of 1, and a heat-to-electricity conversion efficiency of ~5%. A heat transfer model is formulated to couple conduction in the thermoelements with convection between the Al2O3 plates and the water flow in a counter-flow channel configuration. The calculated open-circuit voltages are compared to those resulting from the mean temperature differences across the TEC modules computed by CFD. The investigated parameters are: hot water inlet and outlet temperatures (373-413 K and 323-363 K, respectively), stack length (300-1500 mm), thermoelement length (1-4 mm), and hot channel height (0.2-2 mm). The heat transfer model is then applied to optimize a 1 kWel stack with hot water inlet at 393 K and outlet at 353 K, for either maximum heat-to-electricity conversion efficiency of 2.9% or minimum size of 0.0044 m³.
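
    As a sanity check on the module-level numbers, the open-circuit voltage follows from the Seebeck relation V_oc = N·α_pn·ΔT; the pair Seebeck coefficient below is a typical Bi-Te value assumed for illustration, not taken from the paper:

    ```python
    # Order-of-magnitude estimate of a TEC module's open-circuit voltage from
    # the Seebeck relation V_oc = N * alpha_pn * dT.
    n_pairs = 127       # thermoelement pairs per module (from the abstract)
    alpha_pn = 400e-6   # effective p-n pair Seebeck coefficient [V/K] (assumed)
    dT = 40.0           # mean temperature difference across the module [K] (assumed)
    v_oc = n_pairs * alpha_pn * dT
    print(f"open-circuit voltage ~ {v_oc:.2f} V per module")   # ~2 V
    ```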

  20. Helium: lifting high-performance stencil kernels from stripped x86 binaries to halide DSL code

    DOE PAGES

    Mendis, Charith; Bosboom, Jeffrey; Wu, Kevin; ...

    2015-06-03

    Highly optimized programs are prone to bit rot, where performance quickly becomes suboptimal in the face of new hardware and compiler techniques. In this paper we show how to automatically lift performance-critical stencil kernels from a stripped x86 binary and generate the corresponding code in the high-level domain-specific language Halide. Using Halide's state-of-the-art optimizations targeting current hardware, we show that new optimized versions of these kernels can replace the originals to rejuvenate the application for newer hardware. The original optimized code for kernels in stripped binaries is nearly impossible to analyze statically. Instead, we rely on dynamic traces to regenerate the kernels. We perform buffer structure reconstruction to identify input, intermediate, and output buffer shapes. Here, we abstract from a forest of concrete dependency trees, which contain absolute memory addresses, to symbolic trees suitable for high-level code generation. This is done by canonicalizing trees, clustering them based on structure, inferring higher-dimensional buffer accesses, and finally solving a set of linear equations based on buffer accesses to lift them up to simple, high-level expressions. Helium can handle highly optimized, complex stencil kernels with input-dependent conditionals. We lift seven kernels from Adobe Photoshop giving a 75% performance improvement, four kernels from IrfanView, leading to 4.97× performance, and one stencil from the miniGMG multigrid benchmark, netting a 4.25× improvement in performance. We manually rejuvenated Photoshop by replacing eleven of Photoshop's filters with our lifted implementations, giving a 1.12× speedup without affecting the user experience.
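
    One step of this lifting pipeline, inferring a higher-dimensional buffer access from trace addresses, amounts to solving a small linear system; a toy version (illustrative, not Helium's actual code):

    ```python
    # Toy version of one Helium step: recover an affine 2D buffer-access
    # function addr = base + sx*x + sy*y from (x, y, address) samples in a
    # dynamic trace, by solving a linear least-squares system.
    import numpy as np

    base, sx, sy = 0x10000, 4, 4096          # ground truth (assumed trace)
    xy = np.array([[x, y] for x in range(4) for y in range(4)])
    addr = base + xy[:, 0] * sx + xy[:, 1] * sy

    A = np.column_stack([np.ones(len(xy)), xy])   # [1, x, y] design matrix
    coef, *_ = np.linalg.lstsq(A, addr, rcond=None)
    print("base=0x%x, x-stride=%d, y-stride=%d"
          % (round(coef[0]), round(coef[1]), round(coef[2])))
    ```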

  1. Atomistically determined phase-field modeling of dislocation dissociation, stacking fault formation, dislocation slip, and reactions in fcc systems

    NASA Astrophysics Data System (ADS)

    Rezaei Mianroodi, Jaber; Svendsen, Bob

    2015-04-01

    The purpose of the current work is the development of a phase field model for dislocation dissociation, slip and stacking fault formation in single crystals amenable to determination via atomistic or ab initio methods in the spirit of computational material design. The current approach is based in particular on periodic microelasticity (Wang and Jin, 2001; Bulatov and Cai, 2006; Wang and Li, 2010) to model the strongly non-local elastic interaction of dislocation lines via their (residual) strain fields. These strain fields depend in turn on phase fields which are used to parameterize the energy stored in dislocation lines and stacking faults. This energy storage is modeled here with the help of the "interface" energy concept and model of Cahn and Hilliard (1958) (see also Allen and Cahn, 1979; Wang and Li, 2010). In particular, the "homogeneous" part of this energy is related to the "rigid" (i.e., purely translational) part of the displacement of atoms across the slip plane, while the "gradient" part accounts for energy storage in those regions near the slip plane where atomic displacements deviate from being rigid, e.g., in the dislocation core. Via the attendant global energy scaling, the interface energy model facilitates an atomistic determination of the entire phase field energy as an optimal approximation of the (exact) atomistic energy; no adjustable parameters remain. For simplicity, an interatomic potential and molecular statics are employed for this purpose here; alternatively, ab initio (i.e., DFT-based) methods can be used. To illustrate the current approach, it is applied to determine the phase field free energy for fcc aluminum and copper. The identified models are then applied to modeling of dislocation dissociation, stacking fault formation, glide and dislocation reactions in these materials. As well, the tensile loading of a dislocation loop is considered. In the process, the current thermodynamic picture is compared with the classical mechanical one as based on the Peach-Köhler force.
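
    The energy described in this abstract plausibly has the generic phase-field form below (a homogeneous lattice-periodic part, a Cahn-Hilliard-type gradient part, and the non-local microelastic interaction); the notation is ours, with all coefficients determined atomistically rather than fitted:

    ```latex
    % Generic phase-field energy: \psi is the "homogeneous" rigid-translation
    % energy (e.g., a gamma-surface fit), the gradient term penalizes the
    % non-rigid core regions, and E_el is the non-local microelastic
    % interaction of the dislocation strain fields.
    E[\boldsymbol{\phi}] \;=\; \int_{V} \Big[\, \psi(\boldsymbol{\phi})
      \;+\; \tfrac{1}{2}\,\nabla\boldsymbol{\phi} : \boldsymbol{\kappa} : \nabla\boldsymbol{\phi} \,\Big]\, dV
      \;+\; E_{\mathrm{el}}[\boldsymbol{\phi}]
    ```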

  2. A novel neutron energy spectrum unfolding code using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-07-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse-height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with standard spectra and with those of the recently published two-step Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code had previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD, and shown to be more accurate than those codes. The results of the SDPSO code match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code.
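
    Stripped of implementation detail, the unfolding task that the PSO searches over can be written as a constrained least-squares problem (any regularization terms used by SDPSO are omitted here):

    ```latex
    % The unfolding problem: find a non-negative spectrum \phi that reproduces
    % the measured pulse-height distribution m through the response matrix R.
    \hat{\phi} \;=\; \underset{\phi \,\ge\, 0}{\arg\min}\;
      \left\lVert R\,\phi - m \right\rVert_{2}^{2}
    ```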

  3. scarlet: Source separation in multi-band images by Constrained Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Melchior, Peter; Moolekamp, Fred; Jerdee, Maximilian; Armstrong, Robert; Sun, Ai-Lei; Bosch, James; Lupton, Robert

    2018-03-01

    SCARLET performs source separation (aka "deblending") on multi-band images. It is geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data. Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. The code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source. Because of its generic utility, this package provides a stand-alone implementation that contains the core components of the source separation algorithm. However, the development of this package is part of the LSST Science Pipeline; the meas_deblender package contains a wrapper to implement the algorithms here for the LSST stack.
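
    A minimal constrained matrix factorization in the spirit of this abstract, factoring a multi-band image into per-source SEDs times morphologies; this generic nonnegative multiplicative-update sketch stands in for scarlet's actual proximal-gradient implementation:

    ```python
    # Minimal constrained matrix factorization: each source is an SED
    # (per-band amplitude) times a morphology image. Generic Lee-Seung
    # multiplicative updates, not scarlet's actual algorithm.
    import numpy as np

    rng = np.random.default_rng(2)
    n_bands, n_pix, n_src = 5, 400, 3
    Y = rng.random((n_bands, n_pix))      # multi-band image, flattened pixels

    A = rng.random((n_bands, n_src))      # SEDs
    S = rng.random((n_src, n_pix))        # morphologies
    for _ in range(300):                  # nonnegativity-preserving updates
        A *= (Y @ S.T) / (A @ S @ S.T + 1e-12)
        S *= (A.T @ Y) / (A.T @ A @ S + 1e-12)

    print("relative residual:", np.linalg.norm(Y - A @ S) / np.linalg.norm(Y))
    ```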

  4. Extension of activation cross-section data of deuteron induced nuclear reactions on cadmium up to 50 MeV

    NASA Astrophysics Data System (ADS)

    Hermanne, A.; Tárkányi, F.; Takács, S.; Ditrói, F.

    2016-10-01

    The excitation functions for 109,110g,111m+g,113m,114m,115mIn, 107,109,115m,115gCd and 105g,106m,110g,111Ag are presented for stacked-foil irradiations of natCd targets in the 49-33 MeV deuteron energy domain. Reduced uncertainty is obtained by determining the incident particle flux and energy scale relative to the re-measured monitor reactions natAl(d,x)22,24Na. The results were compared to our earlier studies on natCd and on enriched 112Cd targets. The merit of the values predicted by the TALYS 1.6 code (resulting from a weighted combination of reaction cross-section data on all stable Cd isotopes, as available in the on-line libraries TENDL-2014 and TENDL-2015) is discussed. The influence on optimal production routes for several radionuclides with practical applications (111In, 114mIn, 115Cd, 109,107Cd, ...) is reviewed.

  5. Regenerative fuel cell study for satellites in GEO orbit

    NASA Technical Reports Server (NTRS)

    Levy, Alexander; Vandine, Leslie L.; Stedman, James K.

    1987-01-01

    Summarized are the results of a 12-month study to identify high performance regenerative hydrogen-oxygen fuel cell concepts for geosynchronous satellite application. Emphasis was placed on concepts with the potential for high energy density (W-hr/lb) and passive means of water and heat management to maximize system reliability. Both polymer membrane and alkaline electrolyte fuel cells were considered, with emphasis on the alkaline cell because of its high performance, advanced state of development, and proven ability to operate in a launch and space environment. Three alkaline system concepts were studied. The first, the integrated design, utilized a configuration in which the fuel cell and electrolysis cells are alternately stacked inside a pressure vessel. Product water is transferred by diffusion during electrolysis and waste heat is conducted through the pressure wall, thus using completely passive means of transfer and control. The second alkaline system, the dedicated design, uses a separate fuel cell and electrolysis stack so that each unit can be optimized in size and weight based on its orbital operating period. The third design was a dual-function stack configuration, in which each cell can operate in both fuel cell and electrolysis mode, thus eliminating the need for two separate stacks and associated equipment. Results indicate that, using near-term technology, energy densities between 46 and 52 W-hr/lb can be achieved at efficiencies of 55 percent. System densities of 115 W-hr/lb are contemplated.

  6. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, Eldad

    2014-03-17

    The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results of the research were applied to the problem of image registration.

  7. Measurements of Deuteron-Induced Activation Cross Sections for IFMIF Accelerator Structural Materials

    NASA Astrophysics Data System (ADS)

    Nakao, Makoto; Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Ishioka, Noriko S.; Nishitani, Takeo

    2005-05-01

    Activation cross sections for deuteron-induced reactions on aluminum, copper, and tungsten were measured by using a stacked-foil method. The stacked foils were irradiated with a deuteron beam at the AVF cyclotron in the TIARA facility, JAERI. We obtained the activation cross sections for 27Al(d,2p)27Mg, 27Al(d,x)24Na, natCu(d,x)62,63Zn, 61,64Cu, and natW(d,x)181-184,186Re, 187W in the 22-40 MeV region. These cross sections were compared with other experimental values and with the data in the ACSELAM library calculated by the ALICE-F code.

  8. Holographic shell model: Stack data structure inside black holes?

    NASA Astrophysics Data System (ADS)

    Davidson, Aharon

    2014-03-01

    Rather than tiling the black hole horizon by Planck-area patches, we suggest that bits of information inhabit, universally and holographically, the entire black core interior, one bit per light-sheet unit interval of order a Planck-area difference. The number of distinguishable (tagged by a binary code) configurations, counted within the context of a discrete holographic shell model, is given by the Catalan series. The area entropy formula is recovered, including Cardy's universal logarithmic correction, and the equipartition of mass per degree of freedom is proven. The black hole information storage resembles, in its counting procedure, the so-called stack data structure.
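
    The counting is easy to reproduce: the Catalan numbers C_n = C(2n, n)/(n+1) grow as 4^n/(n^(3/2)·√π), so ln C_n yields a leading "area" term plus the logarithmic correction mentioned in the abstract:

    ```python
    # Catalan numbers C_n = binom(2n, n)/(n+1), which the abstract uses to
    # count distinguishable bit configurations. Asymptotically
    # ln C_n ~ n ln 4 - (3/2) ln n - (1/2) ln pi: a leading term linear in n
    # plus a logarithmic correction.
    from math import comb, log, pi

    for n in (1, 2, 5, 10, 50):
        c = comb(2 * n, n) // (n + 1)
        asym = n * log(4) - 1.5 * log(n) - 0.5 * log(pi)
        print(f"C_{n} = {c}, ln C_{n} = {log(c):.2f}, asymptote = {asym:.2f}")
    ```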

  9. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization

    PubMed Central

    Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of candidate known authors. It can be applied not only to discover the original author of plain text, such as novels, blogs, emails, and posts, but also to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to resolving authorship disputes or software plagiarism detection. This paper proposes a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, the weights of which are produced by the PSO and BP hybrid algorithm. The effectiveness of the proposed method is evaluated on a collected dataset of 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy, and a comparison with previous work on authorship attribution of Java source code illustrates that the proposed method outperforms the others overall, with acceptable overhead. PMID:29095934

  10. The Optimization of Automatically Generated Compilers.

    DTIC Science & Technology

    1987-01-01

    ... than their procedural counterparts, and are also easier to analyze for storage optimizations; (2) AGs can be algorithmically checked to be non-circular ... providing algorithms to move the storage for many attributes from the structure tree into global stacks and variables ... creating AEs which build and ...

  11. Resource allocation for error resilient video coding over AWGN using optimization approach.

    PubMed

    An, Cheolhong; Nguyen, Truong Q

    2008-12-01

    The number of slices for error resilient video coding is jointly optimized with an 802.11a-like media access control layer and a physical layer using automatic repeat request and rate-compatible punctured convolutional codes over an additive white Gaussian noise channel, as well as with the channel time allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. This model is applied to the joint optimization problem, and the problem is solved by a convex optimization method, namely the primal-dual decomposition method. We compare the performance of a video communication system that uses the optimal number of slices with one that codes a picture as a single slice. Numerical examples show that the end-to-end distortion of the utility functions can be significantly reduced with the optimal number of slices per picture, especially at low signal-to-noise ratio.

  12. Effect of band-aligned double absorber layers on photovoltaic characteristics of chemical bath deposited PbS/CdS thin film solar cells.

    PubMed

    Ho Yeon, Deuk; Chandra Mohanty, Bhaskar; Lee, Seung Min; Soo Cho, Yong

    2015-09-23

    Here we report the highest energy conversion efficiency and good stability of PbS thin-film-based depleted heterojunction solar cells not involving PbS quantum dots. The PbS thin films were grown by the low-cost chemical bath deposition (CBD) process at relatively low temperatures. Compared to quantum dot solar cells, which require critical, multistep, and complex procedures for surface passivation, the present approach, leveraging the facile modulation of the optoelectronic properties of the PbS films by the CBD process, offers a simpler route to the optimization of PbS-based solar cells. Through an architectural modification, wherein two band-aligned junctions are stacked without any intervening layers, an enhancement of conversion efficiency by as much as 30%, from 3.10% to 4.03%, facilitated by absorption of a wider range of the solar spectrum, has been obtained. As an added advantage of the low-band-gap PbS stacked over a wide-gap PbS, the devices show stability over a period of 10 days.

  13. CIP (cleaning-in-place) stability of AlGaN/GaN pH sensors.

    PubMed

    Linkohr, St; Pletschen, W; Schwarz, S U; Anzt, J; Cimalla, V; Ambacher, O

    2013-02-20

    The CIP stability of pH-sensitive ion-sensitive field-effect transistors based on AlGaN/GaN heterostructures was investigated. For epitaxial AlGaN/GaN films with high structural quality, CIP tests did not degrade the sensor surface, and pH sensitivities of 55-58 mV/pH were achieved. Several different passivation schemes based on SiO(x), SiN(x), AlN, and nanocrystalline diamond were compared, with special attention given to compatibility with standard microelectronic device technologies as well as the biocompatibility of the passivation films. The CIP stability was evaluated with a main focus on morphological stability. All stacks containing a SiO₂ or an AlN layer were etched by the NaOH solution in the CIP process. Reliable passivations withstanding the NaOH solution were provided by stacks of ICP-CVD-grown and sputtered SiN(x), as well as by diamond-reinforced passivations. Drift levels of about 0.001 pH/h and stable sensitivity over several CIP cycles were achieved for optimized sensor structures. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Design and simulation of novel flow field plate geometry for proton exchange membrane fuel cells

    NASA Astrophysics Data System (ADS)

    Ruan, Hanxia; Wu, Chaoqun; Liu, Shuliang; Chen, Tao

    2016-10-01

    Bipolar plates are among the most important components of proton exchange membrane fuel cell (PEMFC) stacks, as they supply fuel and oxidant to the membrane-electrode assembly (MEA), remove water, collect the produced current, and provide mechanical support for the single cells in the stack. The flow field design of a bipolar plate greatly affects the performance of a PEMFC: it must uniformly distribute the reactant gases over the MEA and prevent product water flooding. This paper aims at improving fuel cell performance by optimizing flow field designs and flow channel configurations. To achieve this, a novel biomimetic flow channel for flow field designs is proposed, based on Murray's law (see the relation below). Computational fluid dynamics simulations were performed to compare three different designs (parallel, serpentine, and biomimetic channels) in terms of current density distribution, power density distribution, pressure distribution, temperature distribution, and hydrogen mass fraction distribution. It was found that flow field designs with the biomimetic flow channel perform better than those with conventional flow channels under the same operating conditions.
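
    Murray's law, the branching rule assumed here in its standard form:

    ```latex
    % Murray's law: at each bifurcation, the cube of the parent channel radius
    % equals the sum of the cubes of the daughter radii, which minimizes flow
    % resistance for a given channel volume.
    r_{0}^{3} \;=\; \sum_{i=1}^{N} r_{i}^{3}
    ```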

  15. Optimizing image-based patterned defect inspection through FDTD simulations at multiple ultraviolet wavelengths

    NASA Astrophysics Data System (ADS)

    Barnes, Bryan M.; Zhou, Hui; Henn, Mark-Alexander; Sohn, Martin Y.; Silver, Richard M.

    2017-06-01

    The sizes of non-negligible defects in the patterning of a semiconductor device continue to decrease as the dimensions of these devices are reduced. These "killer defects" disrupt the performance of the device and must be adequately controlled during manufacturing, and new solutions are required to improve optics-based defect inspection. To this end, our group has reported [Barnes et al., Proc. SPIE 1014516 (2017)] an initial five-wavelength simulation study evaluating the extensibility of defect inspection by reducing the inspection wavelength from a deep-ultraviolet wavelength to wavelengths in the vacuum ultraviolet and the extreme ultraviolet. In that study, a 47 nm wavelength yielded enhancements in the signal-to-noise ratio (SNR) by a factor of five compared to longer wavelengths, and in the differential intensities by as much as three orders of magnitude compared to 13 nm. This paper briefly reviews these recent findings and investigates possible sources for the disparities between results at the 13 nm and 47 nm wavelengths. Our in-house finite-difference time-domain (FDTD) code is tested in both two and three dimensions to determine how computational conditions contributed to the results. A modified geometry and materials stack is presented that offers a second viewpoint of defect detectability as a function of wavelength, polarization, and defect type. Reapplication of the initial SNR-based defect metric again yields no detection of a defect at λ = 13 nm, but additional image preprocessing now enables the computation of the SNR for λ = 13 nm simulated images and has led to a revised defect metric that allows comparisons at all five wavelengths.
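    A differential-image SNR metric of this kind can be illustrated with a toy numpy sketch; the array sizes, noise level, and injected defect amplitude below are invented, not taken from the study:

        import numpy as np

        def defect_snr(image_with_defect, reference_images):
            # Differential-intensity SNR: peak absolute difference against the
            # mean reference, divided by the pixelwise noise of the ensemble.
            reference = np.mean(reference_images, axis=0)
            noise = np.std(reference_images, axis=0)
            diff = image_with_defect - reference
            return np.abs(diff).max() / (noise.mean() + 1e-12)

        rng = np.random.default_rng(0)
        refs = rng.normal(1.0, 0.01, size=(16, 64, 64))  # simulated defect-free frames
        test = refs[0] + 0.0                             # copy of one frame
        test[32, 32] += 0.1                              # injected "defect" signal
        print(f"SNR ~ {defect_snr(test, refs):.1f}")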

  16. Design of hydrogen vent line for the cryogenic hydrogen system in J-PARC

    NASA Astrophysics Data System (ADS)

    Tatsumoto, Hideki; Aso, Tomokazu; Kato, Takashi; Ohtsu, Kiichi; Hasegawa, Shoichi; Maekawa, Fujio; Futakawa, Masatoshi

    2009-02-01

    As one of the main experimental facilities in J-PARC, the intense spallation neutron source (JSNS), driven by a 1-MW proton beam, selected supercritical hydrogen at a temperature of 20 K and a pressure of 1.5 MPa as its moderator material. The moderators are controlled by a cryogenic hydrogen system that includes a hydrogen relief system, consisting of high- and low-pressure manifold stages, a hydrogen vent line, and a stack, in order to release hydrogen to the outside safely. The hydrogen vent line must be designed to prevent the purge nitrogen gas in the vent line from freezing when the cryogenic hydrogen is released, to prevent moisture from freezing in the stack, which is located outdoors, and to limit the temperature reduction of the piping at the building wall penetration. In this work, temperature change behaviors in the hydrogen vent line were analyzed using the CFD code STAR-CD. Based on the analytical results, we determined the required sizes of the vent line and its layout in the building.

  17. Electron transport in graphene/graphene side-contact junction by plane-wave multiple-scattering method

    DOE PAGES

    Li, Xiang-Guo; Chu, Iek-Heng; Zhang, X. -G.; ...

    2015-05-28

    Electron transport in graphene is along the sheet, but junction devices are often made by stacking different sheets together in a "side-contact" geometry, which causes the current to flow perpendicular to the sheets within the device. Such geometry presents a challenge to first-principles transport methods. We solve this problem by implementing a plane-wave-based multiple-scattering theory for electron transport. This implementation improves the computational efficiency over the existing plane-wave transport code, scales better for parallelization over a large number of nodes, and does not require the current direction to be along a lattice axis. As a first application, we calculate the tunneling current through a side-contact graphene junction formed by two separate graphene sheets whose edges overlap each other. We find that the transport properties of this junction depend strongly on the AA or AB stacking within the overlapping region as well as on the vacuum gap between the two graphene sheets. Finally, these transport behaviors are explained in terms of carbon orbital orientation, hybridization, and delocalization as the geometry is varied.

  18. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural, real scene as we see it in the real world every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed in which the original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair, an approach inspired by the Lifting Scheme (LS). The novelty of our work is that the prediction step has been replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for both lossless and lossy coding. Experimental results show improvements in performance and complexity compared to recently proposed methods.
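    The hybrid prediction step can be sketched as disparity compensation plus a luminance correction; the sketch below uses a single global disparity and scalar gain for simplicity, whereas a real coder estimates them locally:

        import numpy as np

        def predict_right(left, disparity, gain):
            # Prediction step of the lifting pair: disparity-compensate the left
            # view (a global horizontal shift here) and correct its luminance.
            return gain * np.roll(left, disparity, axis=1)

        left = np.tile(np.linspace(0, 255, 64), (64, 1))   # synthetic left view
        right = 0.9 * np.roll(left, 3, axis=1)             # synthetic right view

        # The lifting "detail" band is the prediction residual; it is exactly
        # zero here because the toy pair matches the model, and merely small
        # (hence cheap to code) in practice.
        residual = right - predict_right(left, disparity=3, gain=0.9)
        print(f"mean |residual|: {np.abs(residual).mean():.2e}")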

  19. Proton exchange membrane fuel cells cold startup global strategy for fuel cell plug-in hybrid electric vehicle

    NASA Astrophysics Data System (ADS)

    Henao, Nilson; Kelouwani, Sousso; Agbossou, Kodjo; Dubé, Yves

    2012-12-01

    This paper investigates the Proton Exchange Membrane Fuel Cell (PEMFC) cold startup problem within the specific context of Plug-in Hybrid Electric Vehicles (PHEVs). A global strategy is proposed that aims at providing an efficient method to minimize the energy consumption during the startup of a PEMFC. The overall control system is based on a supervisory architecture in which the Energy Management System (EMS) plays the role of the power flow supervisor. The EMS estimates in advance the time to start the fuel cell (FC), based upon the battery energy usage during the trip. Given this estimation and the amount of additional energy required, the fuel cell temperature management strategy computes the most appropriate time to start heating the stack in order to reduce heat loss through natural convection. As the cell temperature rises, the PEMFC is started and the reaction heat is used as a self-heating power source to further increase the stack temperature. A time-optimal self-heating approach based on the Pontryagin minimum principle is proposed and tested. The experimental results show that the proposed approach is efficient and can be implemented in real time on FC-PHEVs.
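    The bang-bang character that Pontryagin's minimum principle typically yields for such time-optimal heating problems can be illustrated with a toy lumped thermal model; all numbers below (heat capacity, loss coefficient, heater power, temperatures) are assumptions, not the paper's data:

        # Toy cold-start model: apply maximum heater power until the stack
        # crosses the temperature at which the FC's own reaction heat takes
        # over -- the bang-bang structure of the time-optimal solution.
        C, hA = 5e3, 2.0                 # heat capacity (J/K), loss coeff. (W/K), assumed
        T_amb, T_on = -10.0, 0.0         # ambient and FC-start temperatures (degC), assumed
        P_max = 400.0                    # heater power (W), assumed
        dt, t, T = 1.0, 0.0, T_amb
        while T < T_on:
            T += dt / C * (P_max - hA * (T - T_amb))   # lumped energy balance
            t += dt
        print(f"heating time to {T_on} C: {t:.0f} s")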

  20. Stacking multiple connecting functional materials in tandem organic light-emitting diodes

    PubMed Central

    Zhang, Tao; Wang, Deng-Ke; Jiang, Nan; Lu, Zheng-Hong

    2017-01-01

    The tandem device is an important architecture for fabricating high-performance organic light-emitting diodes and organic photovoltaic cells. The key element in making a high-performance tandem device is the connecting materials stack, which plays an important role in electric field distribution, charge generation, and charge injection. For a tandem organic light-emitting diode (OLED) with a simple Liq/Al/MoO3 stack, we discovered that there is significant lateral current spreading, causing light emission over an extremely large area outside the OLED pixel when the Al thickness exceeds 2 nm. This spread light emission, caused by an inductive electric field over one of the device units, limits one's ability to fabricate high-performance tandem devices. To resolve this issue, a new connecting materials stack with a C60 fullerene buffer layer is reported. This new structure permits optimization of the Al metal layer in the connecting stack and thus enables us to fabricate an efficient tandem OLED having a high current efficiency of 155.6 cd/A and a low roll-off (or droop) in current efficiency. PMID:28225028

  1. Stacking multiple connecting functional materials in tandem organic light-emitting diodes

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Wang, Deng-Ke; Jiang, Nan; Lu, Zheng-Hong

    2017-02-01

    The tandem device is an important architecture for fabricating high-performance organic light-emitting diodes and organic photovoltaic cells. The key element in making a high-performance tandem device is the connecting materials stack, which plays an important role in electric field distribution, charge generation, and charge injection. For a tandem organic light-emitting diode (OLED) with a simple Liq/Al/MoO3 stack, we discovered that there is significant lateral current spreading, causing light emission over an extremely large area outside the OLED pixel when the Al thickness exceeds 2 nm. This spread light emission, caused by an inductive electric field over one of the device units, limits one's ability to fabricate high-performance tandem devices. To resolve this issue, a new connecting materials stack with a C60 fullerene buffer layer is reported. This new structure permits optimization of the Al metal layer in the connecting stack and thus enables us to fabricate an efficient tandem OLED having a high current efficiency of 155.6 cd/A and a low roll-off (or droop) in current efficiency.

  2. Refreshable Braille displays using EAP actuators

    NASA Astrophysics Data System (ADS)

    Bar-Cohen, Yoseph

    2010-04-01

    Refreshable Braille can help visually impaired persons benefit from the growing advances in computer technology. The development of such displays in a full-screen form is a great challenge due to the need to pack many actuators in a small area without interference. In recent years, various displays using actuators such as piezoelectric stacks have become commercially available, but most of them are limited to a single line of Braille. Researchers in the field of electroactive polymers (EAP) have investigated methods of using these materials to form full-screen displays. This manuscript reviews the state of the art in producing refreshable Braille displays using EAP-based actuators.

  3. Refreshable Braille Displays Using EAP Actuators

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph

    2010-01-01

    Refreshable Braille can help visually impaired persons benefit from the growing advances in computer technology. The development of such displays in a full-screen form is a great challenge due to the need to pack many actuators in a small area without interference. In recent years, various displays using actuators such as piezoelectric stacks have become commercially available, but most of them are limited to a single line of Braille. Researchers in the field of electroactive polymers (EAP) have investigated methods of using these materials to form full-screen displays. This manuscript reviews the state of the art in producing refreshable Braille displays using EAP-based actuators.

  4. An integration machine for the assembly of the x-ray optic units based on thin slumped glass foils for the IXO mission

    NASA Astrophysics Data System (ADS)

    Civitani, M. M.; Basso, S.; Bavdaz, M.; Citterio, O.; Conconi, P.; Gallieni, D.; Ghigo, M.; Martelli, F.; Pareschi, G.; Parodi, G.; Proserpio, L.; Sironi, G.; Spiga, D.; Tagliaferri, G.; Tintori, M.; Wille, E.; Zambra, A.

    2011-09-01

    The International X-ray Observatory (IXO) is a joint mission concept studied by the ESA, NASA, and JAXA space agencies. The main goal of the mission design is to achieve a large effective area (>2.5 m² at 1 keV) and, at the same time, a good angular resolution (5 arcsec HEW at 1 keV). The Brera Astronomical Observatory (INAF, Italy), under the support of ESA, is developing a method for the realization of the X-Ray Optical Units based on the use of slumped thin glass segments that form densely packed modules in a Wolter type I optical configuration. In order to meet the very challenging integration requirements, an innovative assembly approach for aligning and mounting the IXO mirror segments has been developed. The method is based on the use of an integration mould for each foil: the glass segment is forced to adhere to the integration mould in order to maintain the optimal figure without deformation until the integration of the foil in the stack is completed. In this way an active correction of the major figure errors remaining after slumping is also achieved. Moreover, reinforcing ribs are used to connect the facets to each other and to realize a robust monolithic stack of plates. In this paper we present the design, development, and validation status of a special Integration Machine (IMA) that has been specifically developed to allow the integration of the Plate Pairs into prototypal X-Ray Optical Unit stacks.

  5. [Quality management and strategic consequences of assessing documentation and coding under the German Diagnostic Related Groups system].

    PubMed

    Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M

    2004-10-01

    The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the roles of documentation and coding as factors of economic success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301) and to find operative strategies to improve efficiency as well as strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16% of cases. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow for medical documentation, coding, and data control was developed. A workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.

  6. Is it possible to design a portable power generator based on micro-solid oxide fuel cells? A finite volume analysis

    NASA Astrophysics Data System (ADS)

    Pla, D.; Sánchez-González, A.; Garbayo, I.; Salleras, M.; Morata, A.; Tarancón, A.

    2015-10-01

    The inherently limited capacity of current battery technology is not sufficient to cover the increasing power requirements of widely used portable devices. Among other promising alternatives, recent advances in the field of micro-Solid Oxide Fuel Cells (μ-SOFCs) have converted this disruptive technology into a serious candidate to power the next generations of portable devices. However, the implementation of single cells in real devices, i.e. μ-SOFC stacks coupled to the required balance-of-plant elements like fuel reformers or post-combustors, still remains unexplored. This work addresses this system-level research by proposing a new compact design of a vertically stacked device fuelled with ethanol. The feasibility and design optimization for achieving a thermally self-sustained regime and a rapid, low-power-consuming start-up are studied by finite volume analysis. An optimal thermal insulation strategy is defined to maintain the steady-state operation temperature of the μ-SOFC at 973 K and an external temperature lower than 323 K. A hybrid start-up procedure, based on heaters embedded in the μ-SOFCs and heat released by chemical reactions in the post-combustion unit, is analyzed, allowing start-up times below 1 min and energy consumption under 500 J. These results clearly demonstrate the feasibility of high-temperature μ-SOFC power systems fuelled with hydrocarbons for portable applications, therefore anticipating a new family of mobile and uninterrupted power generators.

  7. Optimal robust control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation with high efficiency, yet only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, the existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with the operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer, and robust controllers. During the model building process, the boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, the air excess ratio, and the stack temperature. The results show that the proposed optimal robust control method can maintain safe SOFC system operation at maximum efficiency under load and uncertainty variations.
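    A minimal particle swarm optimization sketch of the operating-point search; the two-variable toy efficiency surface and its bounds are invented stand-ins for the paper's SOFC model, and the actual method is a two-space PSO variant rather than this plain form:

        import numpy as np

        def efficiency(x):
            # Toy stand-in for SOFC efficiency vs (fuel utilization, air excess
            # ratio); peaks near (0.8, 2.0). Not the paper's model.
            fu, ar = x[..., 0], x[..., 1]
            return -(fu - 0.8) ** 2 - 0.1 * (ar - 2.0) ** 2

        rng = np.random.default_rng(1)
        n = 30
        lo, hi = [0.5, 1.0], [0.95, 4.0]          # assumed operating bounds
        pos = rng.uniform(lo, hi, size=(n, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_f = pos.copy(), efficiency(pos)
        for _ in range(100):
            gbest = pbest[np.argmax(pbest_f)]      # swarm-best operating point
            r1, r2 = rng.random((2, n, 1))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            f = efficiency(pos)
            improved = f > pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        print("optimal operating point ~", pbest[np.argmax(pbest_f)])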

  8. System optimization on coded aperture spectrometer

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Ding, Quanxin; Wang, Helong; Chen, Hongliang; Guo, Chunjie; Zhou, Liwei

    2017-10-01

    The aim is to find a simple multiple-configuration solution, achieve higher refractive efficiency, and reduce the disturbance caused by FOV changes, especially in a two-dimensional spatial expansion. The coded aperture system is designed with a special structure that includes an objective, a coded component, prism reflex system components, a compensatory plate, and an imaging lens. Correlative algorithms and imaging methods are available to ensure that this system can be corrected and optimized adequately. Simulation results show that the system can meet the application requirements in MTF, REA, RMS, and other related criteria. Compared with the conventional design, the system is significantly reduced in volume and weight. The determining factors are therefore the prototype selection and the system configuration.

  9. A Microfabricated Involute-Foil Regenerator for Stirling Engines

    NASA Technical Reports Server (NTRS)

    Tew, Roy; Ibrahim, Mounir; Danila, Daniel; Simon, Terrence; Mantell, Susan; Sun, Liyong; Gedeon, David; Kelly, Kevin; McLean, Jeffrey; Qiu, Songgang

    2007-01-01

    A segmented involute-foil regenerator has been designed, microfabricated, and tested in an oscillating-flow rig with excellent results. During the Phase I effort, several approximations of parallel-plate regenerator geometry were chosen as potential candidates for a new microfabrication concept. Potential manufacturers and processes were surveyed. The selected concept consisted of stacked segmented-involute-foil disks (or annular portions of disks), originally to be microfabricated from stainless steel via the LiGA (lithography, electroplating, and molding) process and EDM. During Phase II, re-planning of the effort led to test plans based on nickel disks, microfabricated via the LiGA process only. A stack of nickel segmented-involute-foil disks was tested in an oscillating-flow test rig. These test results yielded a performance figure of merit (roughly the ratio of heat transfer to pressure drop) of about twice that of the 90 percent random fiber currently used in small (approximately 100 W) Stirling space-power convertors, in the Reynolds number range of interest (50 to 100). A Phase III effort is now underway to fabricate and test a segmented-involute-foil regenerator in a Stirling convertor. Though funding limitations prevent optimization of the Stirling engine geometry for use with this regenerator, the Sage computer code will be used to help evaluate the engine test results. Previous Sage Stirling model projections have indicated that a segmented-involute-foil regenerator is capable of improving the performance of an optimized involute-foil engine by 6 to 9 percent; it is also anticipated that such involute-foil geometries will be more reliable and easier to manufacture with tight-tolerance characteristics than random-fiber or wire-screen regenerators. Beyond the near-term Phase III regenerator fabrication and engine testing, other goals are (1) fabrication from a material suitable for high-temperature Stirling operation (up to 850 C for current engines; up to 1200 C for a potential engine-cooler for a Venus mission), and (2) reduction of the cost of the fabrication process to make it more suitable for terrestrial applications of segmented involute foils. Past attempts have been made to use wrapped foils to approximate the large theoretical figures of merit projected for parallel plates. Such metal wrapped foils have never proved very successful, apparently due to the difficulties of fabricating wrapped foils with uniform gaps and maintaining the gaps under the stress of time-varying temperature gradients during start-up and shut-down, and relatively steady temperature gradients during normal operation. In contrast, stacks of involute-foil disks, with each disk consisting of multiple involute-foil segments held between concentric circular ribs, have relatively robust structures. The oscillating-flow rig tests of the segmented-involute-foil regenerator have demonstrated a shift in regenerator performance strongly in the direction of the theoretical performance of ideal parallel-plate regenerators.

  10. A Microfabricated Involute-Foil Regenerator for Stirling Engines

    NASA Technical Reports Server (NTRS)

    Tew, Roy; Ibrahim, Mounir; Danila, Daniel; Simon, Terry; Mantell, Susan; Sun, Liyong; Gedeon, David; Kelly, Kevin; McLean, Jeffrey; Wood, Gary; hide

    2007-01-01

    A segmented involute-foil regenerator has been designed, microfabricated and tested in an oscillating-flow rig with excellent results. During the Phase I effort, several approximations of parallel-plate regenerator geometry were chosen as potential candidates for a new microfabrication concept. Potential manufacturers and processes were surveyed. The selected concept consisted of stacked segmented-involute-foil disks (or annular portions of disks), originally to be microfabricated from stainless-steel via the LiGA (lithography, electroplating, and molding) process and EDM (electric discharge machining). During Phase II, re-planning of the effort led to test plans based on nickel disks, microfabricated via the LiGA process, only. A stack of nickel segmented-involute-foil disks was tested in an oscillating-flow test rig. These test results yielded a performance figure of merit (roughly the ratio of heat transfer to pressure drop) of about twice that of the 90% random fiber currently used in small 100 W Stirling space-power convertors in the Reynolds Number range of interest (50-100). A Phase III effort is now underway to fabricate and test a segmented-involute-foil regenerator in a Stirling convertor. Though funding limitations prevent optimization of the Stirling engine geometry for use with this regenerator, the Sage computer code will be used to help evaluate the engine test results. Previous Sage Stirling model projections have indicated that a segmented-involute-foil regenerator is capable of improving the performance of an optimized involute-foil engine by 6-9%; it is also anticipated that such involute-foil geometries will be more reliable and easier to manufacture with tight-tolerance characteristics, than random-fiber or wire-screen regenerators. Beyond the near-term Phase III regenerator fabrication and engine testing, other goals are (1) fabrication from a material suitable for high temperature Stirling operation (up to 850 C for current engines; up to 1200 C for a potential engine-cooler for a Venus mission), and (2) reduction of the cost of the fabrication process to make it more suitable for terrestrial applications of segmented involute foils. Past attempts have been made to use wrapped foils to approximate the large theoretical figures of merit projected for parallel plates. Such metal wrapped foils have never proved very successful, apparently due to the difficulties of fabricating wrapped-foils with uniform gaps and maintaining the gaps under the stress of time-varying temperature gradients during start-up and shut-down, and relatively-steady temperature gradients during normal operation. In contrast, stacks of involute-foil disks, with each disk consisting of multiple involute-foil segments held between concentric circular ribs, have relatively robust structures. The oscillating-flow rig tests of the segmented-involute-foil regenerator have demonstrated a shift in regenerator performance strongly in the direction of the theoretical performance of ideal parallel-plate regenerators.

  11. Effects of sugar functional groups, hydrophobicity, and fluorination on carbohydrate-DNA stacking interactions in water.

    PubMed

    Lucas, Ricardo; Peñalver, Pablo; Gómez-Pinto, Irene; Vengut-Climent, Empar; Mtashobya, Lewis; Cousin, Jonathan; Maldonado, Olivia S; Perez, Violaine; Reynes, Virginie; Aviñó, Anna; Eritja, Ramón; González, Carlos; Linclau, Bruno; Morales, Juan C

    2014-03-21

    Carbohydrate-aromatic interactions are highly relevant to many biological processes. Nevertheless, experimental data in aqueous solution relating structure and energetics for sugar-arene stacking interactions are very scarce. Here, we evaluate how structural variations in a monosaccharide, including carboxyl, N-acetyl, fluorine, and methyl groups, affect stacking interactions with aromatic DNA bases. We find small differences in stacking interaction among the natural carbohydrates examined. The presence of fluorine atoms within the pyranose ring slightly increases the interaction with the C-G DNA base pair. Carbohydrate hydrophobicity is the most decisive factor; however, a gradual increase in the hydrophobicity of the carbohydrate does not translate directly into a steady growth in stacking interaction. The energetics correlates better with the amount of apolar surface buried upon sugar stacking on top of the aromatic DNA base pair.

  12. Program optimizations: The interplay between power, performance, and energy

    DOE PAGES

    Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...

    2016-05-16

    Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations, including loop fusion, data structure transformations, and global allocations. A per-component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.
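    Loop fusion, one of the optimizations analyzed, merges separate traversals of the same data into a single pass, trading temporary arrays for locality. A sketch in Python (the paper applies this to compiled LULESH kernels, where the payoff is reduced memory traffic):

        def unfused(x, y):
            tmp = [xi * 2.0 for xi in x]               # pass 1: streams x, writes tmp
            return [t + yi for t, yi in zip(tmp, y)]   # pass 2: streams tmp and y

        def fused(x, y):
            # One pass: both operations applied per element, no temporary
            # array, roughly half the memory traffic of the unfused version.
            return [xi * 2.0 + yi for xi, yi in zip(x, y)]

        x, y = list(range(1000)), list(range(1000))
        assert unfused(x, y) == fused(x, y)            # same result, fewer passes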

  13. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open-source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool for computer-aided design for radiation transport code users in the nuclear world, in particular in the fields of core design and radiation analysis.

  14. Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal

    NASA Astrophysics Data System (ADS)

    Zamudio, Gabriel S.; José, Marco V.

    2018-03-01

    In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
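    The algebraic connectivity that is minimized here is the second-smallest eigenvalue of the graph Laplacian L = D - A. A small numpy sketch on an invented four-node graph (not the SGC phenotypic graph itself):

        import numpy as np

        def algebraic_connectivity(adjacency):
            # Fiedler value: second-smallest eigenvalue of L = D - A.
            A = np.asarray(adjacency, dtype=float)
            L = np.diag(A.sum(axis=1)) - A
            return np.linalg.eigvalsh(L)[1]    # eigvalsh sorts ascending

        # Toy 4-node graph for illustration only.
        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]])
        print(f"lambda_2 = {algebraic_connectivity(A):.3f}")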

  15. Optimization of properties and operating parameters of a passive DMFC mini-stack at ambient temperature

    NASA Astrophysics Data System (ADS)

    Baglio, V.; Stassi, A.; Matera, F. V.; Di Blasi, A.; Antonucci, V.; Aricò, A. S.

    An investigation of the properties and operating parameters of a passive DMFC monopolar mini-stack, such as catalyst loading and methanol concentration, was carried out. From this analysis, it was derived that a proper Pt loading is necessary to achieve the best compromise between electrode thickness and the number of catalytic sites for the anode and cathode reactions to occur at suitable rates. Methanol concentrations ranging from 1 M up to 10 M and an air-breathing operation mode were investigated. A maximum power of 225 mW was obtained at ambient conditions for a three-cell stack with an active single-cell area of 4 cm², corresponding to a power density of about 20 mW cm⁻².

  16. Protograph-Based Raptor-Like Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that, in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible punctured turbo (RCPT) codes did not outperform the convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a low number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, the strength of convolutional codes does not scale with the blocklength for a fixed number of states in their trellis.
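    The optimal decoding referred to is maximum-likelihood decoding via the Viterbi algorithm. A minimal hard-decision sketch for the classic rate-1/2, constraint-length-3 code with generators (7, 5) octal; this code is illustrative and not necessarily one of the report's codes:

        # Generators 7 and 5 (octal); K = 3 gives 2**(K-1) = 4 trellis states.
        G = [0b111, 0b101]
        K = 3

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << (K - 1)) | state               # newest bit on the left
                out += [bin(reg & g).count("1") & 1 for g in G]
                state = reg >> 1                           # shift register advances
            return out

        def viterbi(received):
            n_states, INF = 1 << (K - 1), float("inf")
            metric = [0.0] + [INF] * (n_states - 1)        # start in all-zero state
            paths = [[] for _ in range(n_states)]
            for t in range(0, len(received), 2):
                r = received[t:t + 2]
                new_metric = [INF] * n_states
                new_paths = [None] * n_states
                for s in range(n_states):
                    if metric[s] == INF:
                        continue
                    for b in (0, 1):
                        reg = (b << (K - 1)) | s
                        out = [bin(reg & g).count("1") & 1 for g in G]
                        m = metric[s] + sum(o != ri for o, ri in zip(out, r))
                        ns = reg >> 1
                        if m < new_metric[ns]:             # keep the survivor path
                            new_metric[ns], new_paths[ns] = m, paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[min(range(n_states), key=lambda s: metric[s])]

        msg = [1, 0, 1, 1, 0, 0]            # trailing zeros terminate the trellis
        rx = encode(msg)
        rx[4] ^= 1                          # inject one channel bit error
        print(viterbi(rx) == msg)           # -> True (single error corrected)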

  17. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
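    The stride rule of thumb can be demonstrated directly: for a C-ordered array, row-wise traversal touches memory at unit stride, while column-wise traversal jumps a full row per element. A small timing sketch (absolute times and ratios will vary by processor, which is precisely the paper's point):

        import numpy as np, time

        a = np.random.rand(2048, 2048)          # C-ordered: rows are contiguous

        def traverse(by_rows):
            t0 = time.perf_counter()
            s = 0.0
            if by_rows:
                for i in range(a.shape[0]):     # unit-stride access
                    s += a[i, :].sum()
            else:
                for j in range(a.shape[1]):     # stride = one full row per element
                    s += a[:, j].sum()
            return time.perf_counter() - t0

        print(f"row-wise:    {traverse(True):.3f} s")
        print(f"column-wise: {traverse(False):.3f} s  (slower: poor spatial locality)")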

  18. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  19. Improvement of the cruise performances of a wing by means of aerodynamic optimization. Validation with a Far-Field method

    NASA Astrophysics Data System (ADS)

    Jiménez-Varona, J.; Ponsin Roca, J.

    2015-06-01

    Under a contract with AIRBUS MILITARY (AI-M), an exercise to analyze the potential of optimization techniques to improve wing performance at cruise conditions has been carried out using an in-house design code. The original wing was provided by AI-M, and several constraints were posed for the redesign. To maximize the aerodynamic efficiency at cruise, optimizations were performed using the design techniques developed internally at INTA under a research program (Programa de Termofluidodinámica). The code is a gradient-based optimization code, which uses a classical finite-differences approach for gradient computations. Several techniques for search direction computation are implemented for unconstrained and constrained problems. Techniques for geometry modification are based on different approaches, including perturbation functions for the thickness and/or mean line distributions, and others based on Bézier curve fitting of a certain degree. It is very important to address a real design, which involves several constraints that significantly reduce the feasible design space, and an assessment of the code is needed in order to check its capabilities and possible drawbacks. Lessons learnt will help in the development of future enhancements. In addition, the results were validated using the well-known TAU flow solver and a far-field drag method in order to determine accurately the improvement in terms of drag counts.

  20. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.

    2016-02-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.

  1. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking.

    PubMed

    Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J; Jian, Yifan; Sarunic, Marinko V

    2016-02-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.

  2. Improvements on non-equilibrium and transport Green function techniques: The next-generation TRANSIESTA

    NASA Astrophysics Data System (ADS)

    Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads

    2017-03-01

    We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA, based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties, such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond currents, a generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 10⁶ atoms on workstation computers. The new features of both codes are demonstrated and benchmarked for relevant test systems.

  3. Fusion PIC code performance analysis on the Cori KNL system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, Tuomas S.; Deslippe, Jack; Friesen, Brian

    We study the attainable performance of Particle-In-Cell (PIC) codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance, and we focus optimization efforts there. Particle push kernels operate at high arithmetic intensity and are not likely to be memory-bandwidth or even cache-bandwidth bound on KNL. Therefore, we see only minor benefits from the high-bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.
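    The benefit of vectorizing a particle push can be sketched by replacing a per-particle loop with whole-array updates; the leapfrog scheme and sinusoidal field below are toy stand-ins, not XGC1's gyrokinetic push:

        import numpy as np

        # Leapfrog electric-field push for N particles, vectorized over the
        # particle arrays instead of a per-particle scalar loop.
        n = 1_000_000
        rng = np.random.default_rng(0)
        x, v = rng.random(n), np.zeros(n)
        qm, dt = 1.0, 1e-3                  # charge/mass and time step, assumed

        def efield(x):
            return np.sin(2 * np.pi * x)    # toy field on a periodic domain

        for _ in range(10):
            v += qm * efield(x) * dt        # whole-array (SIMD-friendly) update
            x = (x + v * dt) % 1.0          # periodic boundary
        print(f"mean kinetic energy: {0.5 * np.mean(v**2):.3e}")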

  4. Deep Learning Methods for Improved Decoding of Linear Codes

    NASA Astrophysics Data System (ADS)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low-complexity, close-to-optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close-to-optimal decoder of short BCH codes.
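    The min-sum algorithm mentioned above replaces the belief propagation check-node rule with a sign-product and a minimum magnitude over the other incoming messages; the neural decoder then attaches learned weights to such messages. A sketch of the plain (unweighted) check-node update:

        import numpy as np

        def check_node_update(llrs):
            # Min-sum rule: each outgoing message takes the sign-product and
            # the minimum magnitude over all *other* incoming LLRs.
            llrs = np.asarray(llrs, dtype=float)
            out = np.empty_like(llrs)
            for i in range(len(llrs)):
                others = np.delete(llrs, i)
                out[i] = np.prod(np.sign(others)) * np.abs(others).min()
            return out

        print(check_node_update([2.0, -0.5, 1.5]))   # -> [-0.5  1.5 -0.5]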

  5. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Work performed during the reporting period is summarized. A construction of robustly good trellis codes for use with sequential decoding was developed; these codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate-1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per bit position, were studied, and a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.

  6. Integrated strategic and tactical biomass-biofuel supply chain optimization.

    PubMed

    Lin, Tao; Rodríguez, Luis F; Shastri, Yogendra N; Hansen, Alan C; Ting, K C

    2014-03-01

    To ensure effective biomass feedstock provision for large-scale biofuel production, an integrated biomass supply chain optimization model was developed to minimize annual biomass-ethanol production costs by optimizing both strategic and tactical planning decisions simultaneously. The mixed integer linear programming model optimizes activities ranging from biomass harvesting, packing, in-field transportation, stacking, transportation, preprocessing, and storage to ethanol production and distribution. The numbers, locations, and capacities of facilities, as well as biomass and ethanol distribution patterns, are key strategic decisions, while biomass production, delivery, and operating schedules and inventory monitoring are key tactical decisions. The model was implemented to study the Miscanthus-ethanol supply chain in Illinois. The base case results showed unit Miscanthus-ethanol production costs of $0.72 L⁻¹ of ethanol. Biorefinery-related costs account for 62% of the total costs, followed by biomass procurement costs. Sensitivity analysis showed that a 50% reduction in biomass yield would increase unit production costs by 11%.
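    The strategic/tactical coupling can be sketched in miniature: binary depot-opening decisions (strategic) wrap a shipping subproblem (tactical). The costs, capacities, and demands below are invented, and the exhaustive search with a greedy assignment stands in for the paper's mixed integer linear program:

        from itertools import product

        fixed_cost = [120.0, 150.0]          # annualized depot cost (invented)
        unit_cost  = [[2.0, 4.0],            # depot -> refinery transport $/t
                      [3.0, 1.5]]
        capacity   = [500.0, 500.0]
        demand     = [300.0, 200.0]          # two refineries

        best = None
        for open_ in product([0, 1], repeat=2):          # strategic: which depots open
            if sum(capacity[i] for i in range(2) if open_[i]) < sum(demand):
                continue                                 # infeasible configuration
            cost = sum(fixed_cost[i] for i in range(2) if open_[i])
            for j, d in enumerate(demand):               # tactical: greedy shipping
                i = min((i for i in range(2) if open_[i]),
                        key=lambda i: unit_cost[i][j])
                cost += unit_cost[i][j] * d
            if best is None or cost < best[0]:
                best = (cost, open_)
        print("best cost %.1f with depots open %s" % best)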

  7. π-π stacking tackled with density functional theory

    PubMed Central

    Swart, Marcel; van der Wijst, Tushar; Fonseca Guerra, Célia

    2007-01-01

    Through comparison with ab initio reference data, we have evaluated the performance of various density functionals for describing π-π interactions as a function of the geometry between two stacked benzenes or benzene analogs, between two stacked DNA bases, and between two stacked Watson–Crick pairs. Our main purpose is to find a robust and computationally efficient density functional to be used specifically and only for describing π-π stacking interactions in DNA and other biological molecules in the framework of our recently developed QM/QM approach "QUILD". In line with previous studies, most standard density functionals recover, at best, only part of the favorable stacking interactions. An exception is the new KT1 functional, which correctly yields bound π-stacked structures. Surprisingly, a similarly good performance is achieved with the computationally very robust and efficient local density approximation (LDA). Furthermore, we show that classical electrostatic interactions determine the shape and depth of the π-π stacking potential energy surface. [Figure: additivity approximation for the π-π interaction between two stacked Watson–Crick base pairs in terms of pairwise interactions between individual bases.] Electronic supplementary material: the online version of this article (doi:10.1007/s00894-007-0239-y) contains supplementary material, which is available to authorized users. PMID:17874150

  8. A new DWT/MC/DPCM video compression framework based on EBCOT

    NASA Astrophysics Data System (ADS)

    Mei, L. M.; Wu, H. R.; Tan, D. M.

    2005-07-01

    A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and is still under investigation. The contribution of this work is twofold. Firstly, Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism; this extends the application of EBCOT from still images to video. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can easily be replaced with the PDM in the R-D optimization. Preliminary results are reported and compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that, under the specified conditions, the proposed coder outperforms the benchmarks in terms of rate vs. distortion.
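    EBCOT's optimal truncation is a Lagrangian rate-distortion choice: for each code block, pick the truncation point minimizing D + λR, where λ trades rate against distortion. A sketch with invented (rate, distortion) points for two blocks:

        # Each block lists (rate_bytes, distortion) for successive truncation
        # points; the numbers are invented for illustration.
        blocks = [
            [(0, 100.0), (10, 40.0), (25, 15.0), (60, 5.0)],
            [(0, 80.0),  (8, 50.0),  (30, 10.0), (70, 2.0)],
        ]

        def truncate(blocks, lam):
            # Per-block minimization of the Lagrangian cost D + lam * R.
            return [min(points, key=lambda p: p[1] + lam * p[0])
                    for points in blocks]

        for lam in (0.1, 1.0, 5.0):          # larger lambda favors lower rate
            sel = truncate(blocks, lam)
            print(lam, "->", sel, "total bytes:", sum(r for r, _ in sel))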

  9. Investigation on the use of optimization techniques for helicopter airframe vibrations design studies

    NASA Technical Reports Server (NTRS)

    Sreekanta Murthy, T.

    1992-01-01

    Results of an investigation of formal nonlinear-programming-based numerical optimization techniques for helicopter airframe vibration reduction are summarized. The objective and constraint functions and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. The implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.

  10. Stacking stability of MoS2 bilayer: An ab initio study

    NASA Astrophysics Data System (ADS)

    Tao, Peng; Guo, Huai-Hong; Yang, Teng; Zhang, Zhi-Dong

    2014-10-01

    The study of the stacking stability of bilayer MoS2 is essential since the bilayer has exhibited advantages over single-layer MoS2 in many aspects of nanoelectronic applications. We explored the relative stability and the optimal sliding paths between different stacking orders of bilayer MoS2, and especially the effect of inter-layer stress, by combining first-principles density functional total energy calculations with the climbing-image nudged-elastic-band (CI-NEB) method. Among five typical stacking orders, which can be categorized into two kinds (I: AA, AB and II: AA', AB', A'B), we found that stacking orders in which Mo and S from both layers superpose, such as AA' and AB, are more stable than the others. With smaller computational effort than potential-energy-profile searching, we can study the effect of inter-layer stress on the stacking stability. Under isobaric conditions, the sliding barrier increases by a few eV/(uc GPa) from AA' to AB', compared to 0.1 eV/(uc GPa) from AB to [AB]. Moreover, we found that interlayer compressive stress can help enhance the transport properties of AA'. This study helps explain why inter-layer stress from dielectric gating materials can be an effective means of improving MoS2 for nanoelectronic applications.

  11. Revealing the preferred interlayer orientations and stackings of two-dimensional bilayer gallium selenide crystals

    DOE PAGES

    Li, Xufan; Basile Carrasco, Leonardo A.; Yoon, Mina; ...

    2015-01-21

    Characterizing and controlling the interlayer orientations and stacking order of bilayer two-dimensional (2D) crystals and van der Waals (vdW) heterostructures is crucial to optimizing their electrical and optoelectronic properties. The four polymorphs of layered gallium selenide (GaSe) that result from different layer stacking provide an ideal platform to study the stacking configurations of bilayer 2D crystals. Here, through a controllable vapor-phase deposition method, we selectively grow bilayer GaSe crystals and investigate their two preferred interlayer rotations of 0° and 60°. The commensurate stacking configurations (AA' and AB stacking) in as-grown 2D bilayer GaSe crystals are clearly observed at the atomic scale, and the Ga-terminated edge structures are identified for the first time using atomic-resolution scanning transmission electron microscopy (STEM). Theoretical analysis of the interlayer coupling energetics vs. interlayer rotation angle reveals that the experimentally observed orientations are energetically preferred among the bilayer GaSe crystal polytypes. The combined experimental and theoretical characterization of the GaSe bilayers afforded by these growth studies provides a pathway to reveal the atomistic relationships in interlayer orientations responsible for the electronic and optical properties of bilayer 2D crystals and vdW heterostructures.

  12. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to preserve the manifold structure by hashing directly. In particular, one first needs to build the local linear embedding in the original feature space and then quantize such an embedding to binary codes. Such two-step coding is problematic and suboptimal. Besides, the offline learning is extremely time- and memory-consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationships of the data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Faces, show the superior performance of the proposed DLLH over state-of-the-art approaches.
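    The two-step baseline that DLLH improves on (first a locally linear embedding, then quantization to bits) can be sketched with numpy; the neighborhood size, bit count, and random data below are arbitrary choices for illustration:

        import numpy as np

        def lle_binary_codes(X, k=5, n_bits=8, reg=1e-3):
            # Classic LLE reconstruction weights, then sign-quantization of the
            # embedding. (DLLH instead optimizes codes directly in Hamming space.)
            n = len(X)
            W = np.zeros((n, n))
            for i in range(n):
                d = np.linalg.norm(X - X[i], axis=1)
                nbrs = np.argsort(d)[1:k + 1]        # k nearest neighbors (skip self)
                Z = X[nbrs] - X[i]
                C = Z @ Z.T + reg * np.eye(k)        # regularized local Gram matrix
                w = np.linalg.solve(C, np.ones(k))
                W[i, nbrs] = w / w.sum()             # affine reconstruction weights
            M = (np.eye(n) - W).T @ (np.eye(n) - W)
            vals, vecs = np.linalg.eigh(M)
            Y = vecs[:, 1:n_bits + 1]                # skip the constant eigenvector
            return (Y > 0).astype(np.uint8)          # sign quantization -> bits

        X = np.random.default_rng(0).normal(size=(200, 16))
        codes = lle_binary_codes(X)
        print(codes.shape, codes[:2])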

  13. Nitrogen Doping Enables Covalent-Like π–π Bonding between Graphenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Yong-Hui; Huang, Jingsong; Sheng, Xiaolan

    In neighboring layers of bilayer (and few-layer) graphenes, both AA and AB stacking motifs are known to be separated at a distance corresponding to van der Waals (vdW) interactions. In this Letter, we present for the first time a new aspect of graphene chemistry in terms of a special chemical bonding between the giant graphene "molecules". Through rigorous theoretical calculations, we demonstrate that N-doped graphenes (NGPs) with various doping levels can form an unusual two-dimensional (2D) pi-pi bonding in bilayer NGPs, bringing the neighboring NGPs to significantly reduced interlayer separations. The interlayer binding energies can be enhanced by up to 50% compared to the pristine graphene bilayers, which are characterized by only vdW interactions. Such an unusual chemical bonding arises from the pi-pi overlap across the vdW gap, while the individual layers maintain their in-plane pi-conjugation and are accordingly planar. Moreover, the existence of the resulting interlayer covalent-like bonding is corroborated by electronic structure calculations and crystal orbital overlap population (COOP) analyses. In NGP-based graphite with the optimal doping level, the NGP layers are uniformly stacked and the 3D bulk exhibits metallic characteristics both in-plane and along the stacking direction.

  14. Nitrogen-Doping Enables Covalent-Like pi-pi Bonding between Graphenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Yong-Hui; Huang, Jingsong; Sumpter, Bobby G

    The neighboring layers in bi-layer (and few-layer) graphenes of both AA and AB stacking motifs are known to be separated at a distance corresponding to van der Waals (vdW) interactions. In this Letter, we present for the first time a new aspect of graphene chemistry in terms of a special chemical bonding between the giant graphene "molecules". Through rigorous theoretical calculations, we demonstrate that the N-doped graphenes (NGPs) with various doping levels can form an unusual two-dimensional (2D) pi-pi bonding in bi-layer NGPs bringing the neighboring NGPs to significantly reduced interlayer separations. The interlayer binding energies can be enhanced by up to 50% compared to the pristine graphene bi-layers that are characterized by only vdW interactions. Such an unusual chemical bonding arises from the pi-pi overlap across the vdW gap while the individual layers maintain their in-plane pi-conjugation and are accordingly planar. The existence of the resulting interlayer covalent-like bonding is corroborated by electronic structure calculations and crystal orbital overlap population (COOP) analyses. In NGP-based graphite with the optimal doping level, the NGP layers are uniformly stacked and the 3D bulk exhibits metallic characteristics both in-plane and along the stacking direction.

  15. Nitrogen Doping Enables Covalent-Like π–π Bonding between Graphenes

    DOE PAGES

    Tian, Yong-Hui; Huang, Jingsong; Sheng, Xiaolan; ...

    2015-07-07

    In neighboring layers of bilayer (and few-layer) graphenes, both AA and AB stacking motifs are known to be separated at a distance corresponding to van der Waals (vdW) interactions. In this Letter, we present for the first time a new aspect of graphene chemistry in terms of a special chemical bonding between the giant graphene "molecules". Through rigorous theoretical calculations, we demonstrate that the N-doped graphenes (NGPs) with various doping levels can form an unusual two-dimensional (2D) pi-pi bonding in bilayer NGPs bringing the neighboring NGPs to significantly reduced interlayer separations. The interlayer binding energies can be enhanced by up to 50% compared to the pristine graphene bilayers that are characterized by only vdW interactions. Such an unusual chemical bonding arises from the pi-pi overlap across the vdW gap while the individual layers maintain their in-plane pi-conjugation and are accordingly planar. Moreover, the existence of the resulting interlayer covalent-like bonding is corroborated by electronic structure calculations and crystal orbital overlap population (COOP) analyses. In NGP-based graphite with the optimal doping level, the NGP layers are uniformly stacked and the 3D bulk exhibits metallic characteristics both in-plane and along the stacking direction.

  16. Quencher-Free Fluorescence Method for the Detection of Mercury(II) Based on Polymerase-Aided Photoinduced Electron Transfer Strategy.

    PubMed

    Liu, Haisheng; Ma, Linbin; Ma, Changbei; Du, Junyan; Wang, Meilan; Wang, Kemin

    2016-11-18

    A new quencher-free Hg²⁺ assay was developed based on polymerase-assisted photoinduced electron transfer (PIET). In this approach, a probe is designed with a mercury-ion recognition sequence (MRS), composed of two T-rich functional regions separated by a spacer of random bases at the 3'-end, and a sequence of stacked cytosines at the 5'-end, to which a fluorescein (FAM) is attached. Upon addition of Hg²⁺ to this sensing system, the MRS folds into a hairpin structure at the 3'-end through Hg²⁺-mediated base pairs. DNA polymerase then catalyzes the extension reaction, resulting in the formation of stacked guanines, which instantly quench the fluorescence of FAM through PIET. Under optimal conditions, the limit of detection for Hg²⁺ was estimated to be 5 nM, which is below the US Environmental Protection Agency (EPA) standard limit. In addition, no labeling with a quencher is required, and the present method is simple, fast, and low cost. This cost-effective fluorescence method is therefore expected to hold considerable potential for the detection of Hg²⁺ in real biological and environmental samples.

  17. Mobility of rare earth elements, yttrium and scandium from a phosphogypsum stack: Environmental and economic implications.

    PubMed

    Cánovas, Carlos Ruiz; Macías, Francisco; Pérez López, Rafael; Nieto, José Miguel

    2018-03-15

    This paper investigates the mobility and fluxes of REE, Y, and Sc under weathering conditions in an anomalously metal-rich phosphogypsum stack in SW Spain. The interactions of the phosphogypsum stack with rainfall and with organic matter-rich solutions, representative of the weathering processes arising from its location on salt marshes, were simulated by leaching tests (e.g., EN 12457-2 and TCLP). Despite the high concentrations of REE, Y, and Sc contained in the phosphogypsum stack, their mobility during the leaching tests was very low; <0.66% and 1.8% of the total content of these elements were released in the two tests. Chemical and mineralogical evidence suggests that phosphate minerals may act as sources of REE and Y in the phosphogypsum stack while fluoride minerals may act as sinks, controlling their mobility. REE fractionation processes were identified in the phosphogypsum stack; a depletion of LREE in the saturated zone was observed, probably due to the dissolution of secondary LREE phosphates previously formed during apatite dissolution in the industrial process. Thus, the vadose zone of the stack would preserve the original REE signature of the phosphate rocks. On the other hand, an enrichment of MREE relative to HREE in the edge outflows is observed, owing to the stronger influence of estuarine waters on the leaching of the phosphogypsum stack. Despite the low mobility of REE, Y, and Sc in the phosphogypsum, around 104 kg/yr of REE and 40 kg/yr of Y and Sc are released from the stack to the estuary, which may constitute an environmental concern. The information obtained in this study could be used to optimize extraction methods aimed at recovering REE, Y, and Sc from phosphogypsum, mitigating environmental pollution.

  18. Bragg reflector based gate stack architecture for process integration of excimer laser annealing

    NASA Astrophysics Data System (ADS)

    Fortunato, G.; Mariucci, L.; Cuscunà, M.; Privitera, V.; La Magna, A.; Spinella, C.; Magrì, A.; Camalleri, M.; Salinas, D.; Simon, F.; Svensson, B.; Monakhov, E.

    2006-12-01

    An advanced gate stack structure, which incorporates a Bragg reflector, has been developed for the integration of excimer laser annealing into the power metal-oxide semiconductor (MOS) transistor fabrication process. This advanced gate structure effectively protects the gate stack from melting, thus solving the problem related to protrusion formation. By using this gate stack configuration, power MOS transistors were fabricated with improved electrical characteristics. The Bragg reflector based gate stack architecture can be applied to other device structures, such as scaled MOS transistors, thus extending the possibilities of process integration of excimer laser annealing.

  19. Shared Memory Parallelization of an Implicit ADI-type CFD Code

    NASA Technical Reports Server (NTRS)

    Hauser, Th.; Huang, P. G.

    1999-01-01

    A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressing cache-based computer architectures are described, and performance measurements for the single- and multiprocessor implementations are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared-memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a friction Reynolds number Re_tau = 180 has shown good agreement with existing data.
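
    A toy demonstration of the paper's central point, that memory access order on a cache-based machine controls performance (generic example, unrelated to the LESTool code): summing the same large array with a unit-stride inner loop versus a long-stride inner loop.

        import time
        import numpy as np

        a = np.random.rand(4000, 4000)      # ~128 MB, far larger than any cache level

        def sum_rows(m):                    # inner loop walks contiguous memory
            return sum(row.sum() for row in m)

        def sum_cols(m):                    # inner loop strides by a full row width
            return sum(m[:, j].sum() for j in range(m.shape[1]))

        for f in (sum_rows, sum_cols):
            t0 = time.perf_counter()
            f(a)
            print(f.__name__, time.perf_counter() - t0, "s")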

  20. Optimizations of a Hardware Decoder for Deep-Space Optical Communications

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Nakashima, Michael A.; Moision, Bruce E.; Hamkins, Jon

    2007-01-01

    The National Aeronautics and Space Administration has developed a capacity-approaching modulation and coding scheme that comprises a serial concatenation of an inner accumulate pulse-position modulation (PPM) and an outer convolutional code [serially concatenated PPM (SCPPM)] for deep-space optical communications. Decoding of this code uses the turbo principle. However, due to the nonbinary property of SCPPM, a straightforward application of classical turbo decoding is very inefficient. Here, we present various optimizations applicable to a hardware implementation of the SCPPM decoder. More specifically, we feature a Super Gamma computation to efficiently handle parallel trellis edges, a pipeline-friendly 'max-star top-2' circuit that reduces the max-only approximation penalty, a low-latency cyclic redundancy check circuit for window-based decoders, and a high-speed algorithmic polynomial interleaver that leads to memory savings. Using the featured optimizations, we implement a 6.72-megabits-per-second (Mbps) SCPPM decoder on a single field-programmable gate array (FPGA). Compared to the current data rate of 256 kilobits per second from Mars, the SCPPM coded scheme represents a throughput increase of more than twenty-six-fold. Extension to a 50-Mbps decoder on a board with multiple FPGAs follows naturally. We show through hardware simulations that the SCPPM coded system can operate within 1 dB of the Shannon capacity at nominal operating conditions.
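
    The 'max-star top-2' circuit mentioned above targets the Jacobian logarithm used throughout log-domain turbo decoding; a minimal sketch of the exact operator and the cheaper max-only approximation it improves on (generic formulation, not the FPGA circuit itself):

        import numpy as np

        def max_star(a, b):
            # Exact Jacobian logarithm: max*(a, b) = ln(e^a + e^b)
            #                                      = max(a, b) + ln(1 + e^-|a-b|)
            return np.maximum(a, b) + np.log1p(np.exp(-np.abs(a - b)))

        def max_only(a, b):
            # Hardware-friendly approximation; dropping the correction term
            # is the "max-only approximation penalty" cited above.
            return np.maximum(a, b)

        a, b = 1.2, 1.5
        print(max_star(a, b), max_only(a, b))   # ~2.05 vs 1.5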

  1. MECHANICAL PROPERTY CHARACTERIZATIONS AND PERFORMANCE MODELING OF SOFC SEALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koeppel, Brian J.; Vetrano, John S.; Nguyen, Ba Nghiep

    2008-03-26

    This study provides modeling tools for the design of reliable seals for SOFC stacks. The work consists of 1) experimental testing to determine fundamental properties of SOFC sealing materials, and 2) numerical modeling of stacks and sealing systems. The material tests capture relevant temperature-dependent physical and mechanical data needed by the analytical models such as thermal expansion, strength, fracture toughness, and relaxation behavior for glass-ceramic seals and other materials. Testing has been performed on both homogenous specimens and multiple material assemblies to investigate the effect of interfacial reactions. A viscoelastic continuum damage model for a glass-ceramic seal was developed to capture the nonlinear behavior of this material at high temperatures. This model was implemented in the MSC MARC finite element code and was used for a detailed analysis of a planar SOFC stack under thermal cycling conditions. Realistic thermal loads for the stack were obtained using PNNL’s in-house multiphysics solver. The accumulated seal damage and component stresses were evaluated for multiple thermal loading cycles, and regions of high seal damage susceptible to cracking were identified. Selected test results, numerical model development, and analysis results will be presented.

  2. Prolonging fuel cell stack lifetime based on Pontryagin's Minimum Principle in fuel cell hybrid vehicles and its economic influence evaluation

    NASA Astrophysics Data System (ADS)

    Zheng, C. H.; Xu, G. Q.; Park, Y. I.; Lim, W. S.; Cha, S. W.

    2014-02-01

    The lifetime of fuel cell stacks is currently a major issue, especially for automotive applications. To account for fuel cell stack lifetime while minimizing fuel consumption in fuel cell hybrid vehicles (FCHVs), a Pontryagin's Minimum Principle (PMP)-based power management strategy is proposed in this research. This strategy has the effect of prolonging the lifetime of fuel cell stacks. However, there is a tradeoff between fuel cell stack lifetime and fuel consumption when this strategy is applied to an FCHV, so verifying its positive economic influence is necessary to demonstrate its superiority. In this research, the economic influence of the proposed strategy is assessed through an evaluation cost that depends on the fuel cell stack cost, the hydrogen cost, the fuel cell stack lifetime, and the lifetime-prolonging impact on the fuel cell stack. Simulation results derived from the proposed power management strategy are also used to evaluate the economic influence. As a result, the positive economic influence of the proposed PMP-based power management strategy is demonstrated for both current and future FCHVs.
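
    A minimal numerical sketch of a PMP step (all maps, the costate value, and the rate limit are hypothetical, not the paper's identified models): at each instant the controller picks the stack power minimizing the Hamiltonian, and a rate limit on stack power is one simple way to reduce the load cycling that ages the stack.

        import numpy as np

        P_grid = np.linspace(0.0, 60.0, 121)       # kW, admissible stack powers

        def fuel_rate(p):                          # hypothetical consumption map
            return 0.015 * p + 0.0004 * p**2

        def pmp_step(p_dem, lam, p_prev, dp_max=5.0):
            # Hamiltonian: fuel cost plus costate-weighted battery discharge
            H = fuel_rate(P_grid) + lam * (p_dem - P_grid)
            H[np.abs(P_grid - p_prev) > dp_max] = np.inf   # rate limit on the stack
            return P_grid[np.argmin(H)]

        p, lam = 20.0, 0.05                        # costate tuned offline
        for p_dem in [15, 35, 55, 25]:             # demand profile, kW
            p = pmp_step(p_dem, lam, p)
            print(p_dem, p)
        # With a constant costate the stack drifts toward one efficient operating
        # point while the battery buffers transients, smoothing the stack load.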

  3. Modeling of Failure Mechanisms in Composites With Z-Pins-Damage Validation of Z-Pin Reinforced Co-Cured Composite Laminates

    DTIC Science & Technology

    2011-04-01

    There is a computer implementation of the method just introduced. It uses the Scilab® programming language, and the Young's modulus is calculated as final...laminate without Z-pins, its thickness, lamina stacking sequence and lamina’s engineering elastic constants, the second Scilab® code can be used to find...EL thickness, the second Scilab® code is employed once again; this time, though, a new Young's modulus estimate would be produced. On the other hand

  4. Nonlinear empirical model of gas humidity-related voltage dynamics of a polymer-electrolyte-membrane fuel cell stack

    NASA Astrophysics Data System (ADS)

    Meiler, M.; Andre, D.; Schmid, O.; Hofer, E. P.

    Intelligent energy management is a cost-effective key path to realizing efficient automotive drive trains [R. O'Hayre, S.W. Cha, W. Colella, F.B. Prinz. Fuel Cell Fundamentals, John Wiley & Sons, Hoboken, 2006]. To develop operating strategies for fuel cell drive trains, precise and computationally efficient models of all system components, especially the fuel cell stack, are needed. If these models are further to be used in diagnostic or control applications, some major requirements must be fulfilled. First, the model must predict the mean fuel cell voltage very precisely in all possible operating conditions, even during transients. Second, the model output should be as smooth as possible to support efficient optimization strategies for the complete system. Finally, the model must be computationally efficient. For most applications, a difference between real fuel cell voltage and model output of less than 10 mV and 1000 calculations per second will be sufficient. In general, empirical models based on system identification offer better accuracy and consume fewer computational resources than detailed models derived from theoretical considerations [J. Larminie, A. Dicks. Fuel Cell Systems Explained, John Wiley & Sons, West Sussex, 2003]. In this contribution, the dynamic behaviour of the mean cell voltage of a polymer-electrolyte-membrane fuel cell (PEMFC) stack due to variations in the humidity of the cell's reactant gases is investigated. The overall model structure, a so-called general Hammerstein (or Uryson) model, was introduced recently in [M. Meiler, O. Schmid, M. Schudy, E.P. Hofer. Dynamic fuel cell stack model for real-time simulation based on system identification, J. Power Sources 176 (2007) 523-528]. The mean fuel cell voltage is calculated as the sum of a stationary and a dynamic voltage component. The stationary component of cell voltage is represented by a lookup table and the dynamic component by a parallel-placed nonlinear transfer function. A suitable experimental setup to apply fast variations of gas humidity is introduced and is used to investigate a 10-cell PEMFC stack under various operating conditions. Using methods such as stepwise multiple regression, a good mathematical description with few free parameters is achieved.
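
    A minimal sketch of such a Hammerstein-type structure (all numbers hypothetical, not the identified parameters of the 10-cell stack): a static lookup table maps current density to voltage, and a first-order lag adds the humidity-driven dynamic component.

        import numpy as np

        I_grid = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # A/cm^2
        V_grid = np.array([0.95, 0.82, 0.76, 0.71, 0.66, 0.60])  # V, stationary map

        def simulate(I, rh, dt=0.1, tau=5.0, gain=0.05):
            # Mean cell voltage = static map(I) + dynamic humidity term
            v_dyn, out = 0.0, []
            for i, h in zip(I, rh):
                v_stat = np.interp(i, I_grid, V_grid)            # lookup-table part
                v_dyn += dt / tau * (gain * (h - 0.5) - v_dyn)   # first-order lag
                out.append(v_stat + v_dyn)
            return np.array(out)

        t = np.arange(0.0, 60.0, 0.1)
        v = simulate(np.full_like(t, 0.6), 0.5 + 0.3 * (t > 20))  # humidity step at t = 20 s
        print(v[0], v[-1])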

  5. A Survey on Multimedia-Based Cross-Layer Optimization in Visual Sensor Networks

    PubMed Central

    Costa, Daniel G.; Guedes, Luiz Affonso

    2011-01-01

    Visual sensor networks (VSNs) comprised of battery-operated electronic devices endowed with low-resolution cameras have expanded the applicability of a series of monitoring applications. Those types of sensors are interconnected by ad hoc error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay and packet error rates. In such context, multimedia coding is required for data compression and error-resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSNs applications, disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks. PMID:22163908

  6. Analysis and optimization of solid oxide fuel cell-based auxiliary power units using a generic zero-dimensional fuel cell model

    NASA Astrophysics Data System (ADS)

    Göll, S.; Samsun, R. C.; Peters, R.

    Fuel-cell-based auxiliary power units can help to reduce fuel consumption and emissions in transportation. For this application, the combination of solid oxide fuel cells (SOFCs) with upstream fuel processing by autothermal reforming (ATR) is seen as a highly favorable configuration. Notwithstanding the necessity to improve each single component, an optimized architecture of the fuel cell system as a whole must be achieved. To enable model-based analyses, a system-level approach is proposed in which the fuel cell system is modeled as a multi-stage thermo-chemical process using the "flowsheeting" environment PRO/II™. Therein, the SOFC stack and the ATR are characterized entirely by corresponding thermodynamic processes together with global performance parameters. The developed model is then used to achieve an optimal system layout by comparing different system architectures. A system with anode and cathode off-gas recycling was identified to have the highest electric system efficiency. Taking this system as a basis, the potential for further performance enhancement was evaluated by varying four parameters characterizing different system components. Using methods from the design and analysis of experiments, the effects of these parameters and of their interactions were quantified, leading to an overall optimized system with encouraging performance data.

  7. ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers

    NASA Astrophysics Data System (ADS)

    Torrent, Marc

    2014-03-01

    For several years, a continuous effort has been made to adapt electronic structure codes based on Density-Functional Theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions, which allows it to treat systems of any kind. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT - especially for standard LDA/GGA ground-state and response-function calculations - several strategies have been followed. A full multi-level MPI parallelisation scheme has been implemented, exploiting all possible levels and distributing both computation and memory. It increases the number of distributed processes and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem ("Locally Optimal Blocked Conjugate Gradient"), a blocked Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed-memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (OpenMP/OpenACC) or porting some time-consuming code sections to Graphics Processing Units (GPUs). As no simple performance model exists, the complexity of use has increased; the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically chooses the most favourable one. In parallel, a large effort has been carried out to analyse the performance of the code on petascale architectures, showing which sections of the code have to be improved; they are all related to matrix algebra (diagonalisation, orthogonalisation). The different strategies employed to improve the code's scalability will be described. They are based on the exploration of new diagonalization algorithms, as well as the use of external optimized libraries. Part of this work has been supported by the European PRACE project (Partnership for Advanced Computing in Europe) in the framework of its work package 8.

  8. Quantum-mechanical analysis of the energetic contributions to π stacking in nucleic acids versus rise, twist, and slide.

    PubMed

    Parker, Trent M; Hohenstein, Edward G; Parrish, Robert M; Hud, Nicholas V; Sherrill, C David

    2013-01-30

    Symmetry-adapted perturbation theory (SAPT) is applied to pairs of hydrogen-bonded nucleobases to obtain the energetic components of base stacking (electrostatic, exchange-repulsion, induction/polarization, and London dispersion interactions) and how they vary as a function of the helical parameters Rise, Twist, and Slide. Computed average values of Rise and Twist agree well with experimental data for B-form DNA from the Nucleic Acids Database, even though the model computations omitted the backbone atoms (suggesting that the backbone in B-form DNA is compatible with having the bases adopt their ideal stacking geometries). London dispersion forces are the most important attractive component in base stacking, followed by electrostatic interactions. At values of Rise typical of those in DNA (3.36 Å), the electrostatic contribution is nearly always attractive, providing further evidence for the importance of charge-penetration effects in π-π interactions (a term neglected in classical force fields). Comparison of the computed stacking energies with those from model complexes made of the "parent" nucleobases purine and 2-pyrimidone indicates that chemical substituents in DNA and RNA account for 20-40% of the base-stacking energy. A lack of correspondence between the SAPT results and experiment for Slide in RNA base-pair steps suggests that the backbone plays a larger role in determining stacking geometries in RNA than in B-form DNA. In comparisons of base-pair steps with thymine versus uracil, the thymine methyl group tends to enhance the strength of the stacking interaction through a combination of dispersion and electrostatic interactions.

  9. Is QR code an optimal data container in optical encryption systems from an error-correction coding perspective?

    PubMed

    Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia

    2018-01-01

    Quick response (QR) codes have been employed as data carriers for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR codes is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information-coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR codes in numerically simulated optical cryptosystems.
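
    The vulnerability of block-coded data to nonlocally distributed noise can be made concrete with a toy experiment (parameters hypothetical; this is not the paper's average-channel-capacity metric): a code that corrects up to t symbol errors per n-symbol block survives uniform noise well, while bursty, speckle-like noise of the same average rate concentrates errors in a few blocks and pushes them past t.

        import numpy as np

        rng = np.random.default_rng(1)
        n_blocks, n, t, p = 2000, 255, 16, 0.04      # RS(255)-like block, hypothetical t

        uniform_errs = rng.binomial(n, p, size=n_blocks)             # evenly spread errors
        burst_p = np.clip(rng.exponential(p, size=n_blocks), 0, 1)   # bursty, same mean rate
        burst_errs = rng.binomial(n, burst_p)

        for name, e in [("uniform", uniform_errs), ("bursty", burst_errs)]:
            print(name, "block failure rate:", (e > t).mean())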

  10. Design and Optimization of Composite Automotive Hatchback Using Integrated Material-Structure-Process-Performance Method

    NASA Astrophysics Data System (ADS)

    Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai

    2018-03-01

    The application of polymer composites as a substitute for metal is an effective approach to reducing vehicle weight. However, the final performance of composite structures is determined not only by the material types, structural designs, and manufacturing process, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detailed design of composite components. Material selection is based on the principles of composite mechanics, such as the rule of mixtures for laminates. The design of component geometry, dimensions, and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physical-field simulation. The stiffness and modal constraint conditions were obtained from numerical analysis of the metal benchmark under typical load conditions. The optimal design was found by multi-disciplinary optimization. Finally, the proposed method was validated by an application case of an automotive hatchback using carbon-fiber-reinforced polymer. Compared with the metal benchmark, the weight of the composite hatchback is reduced by 38.8%, while its torsional and bending stiffness increase by 3.75% and 33.23%, respectively, and the first natural frequency increases by 44.78%.
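
    A minimal sketch of the conceptual-design step named above, the rule of mixtures for a unidirectional ply (typical, but here hypothetical, carbon/epoxy values):

        E_f, E_m = 230.0, 3.5    # GPa: carbon fiber and epoxy matrix moduli
        V_f = 0.55               # fiber volume fraction

        E_long = V_f * E_f + (1 - V_f) * E_m            # longitudinal (Voigt) estimate
        E_trans = 1.0 / (V_f / E_f + (1 - V_f) / E_m)   # transverse (Reuss) estimate
        print(f"E_L = {E_long:.1f} GPa, E_T = {E_trans:.1f} GPa")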

  11. A rapid sample screening method for authenticity control of whiskey using capillary electrophoresis with online preconcentration.

    PubMed

    Heller, Melina; Vitali, Luciano; Oliveira, Marcone Augusto Leal; Costa, Ana Carolina O; Micke, Gustavo Amadeu

    2011-07-13

    The present study aimed to develop a capillary electrophoresis methodology for the determination of sinapaldehyde, syringaldehyde, coniferaldehyde, and vanillin in whiskey samples. The main objective was to obtain a screening method to differentiate authentic samples from seized samples suspected of being counterfeit, using the phenolic aldehydes as chemical markers. The optimized background electrolyte was composed of 20 mmol L(-1) sodium tetraborate with 10% MeOH at pH 9.3. The study examined two kinds of sample stacking using long-end injection: normal sample stacking (NSM) and sample stacking with matrix removal (SWMR). In SWMR, the optimized injection time of the samples was 42 s (SWMR42); at this injection time, no matrix effects were observed. Values of r were >0.99 for both methods. The LOD and LOQ were better than 100 and 330 mg mL(-1) for NSM and better than 22 and 73 mg L(-1) for SWMR. The reliability of the CE-UV aldehyde analysis in real samples was compared statistically with an LC-MS/MS methodology, and no significant differences were found between the methodologies at a 95% confidence interval.

  12. Combination of micelle collapse and field-amplified sample stacking in capillary electrophoresis for determination of trimethoprim and sulfamethoxazole in animal-originated foodstuffs.

    PubMed

    Liu, Lihong; Wan, Qian; Xu, Xiaoying; Duan, Shunshan; Yang, Chunli

    2017-03-15

    An on-line preconcentration method combining micelle-to-solvent stacking (MSS) with field-amplified sample stacking (FASS) was employed for the analysis of trimethoprim (TMP) and sulfamethoxazole (SMZ) by capillary zone electrophoresis (CZE). The optimized experimental conditions were as follows: (1) sample matrix, 10.0 mM SDS-5% (v/v) methanol; (2) trapping solution (TS), 35 mM H3PO4-60% acetonitrile (CH3CN); (3) running buffer, 30 mM Na2HPO4 (pH 7.3); (4) sample solution volume, 168 nL; TS volume, 168 nL; and (5) 9 kV separation voltage with UV detection at 214 nm. Under the optimized conditions, the limits of detection (LODs) for SMZ and TMP were 7.7 and 8.5 ng/mL, respectively, 301 and 329 times better than with a typical injection. The contents of TMP and SMZ in animal-originated foodstuffs such as dairy products, eggs, and honey were also analyzed. Recoveries of 80-104% were obtained, with relative standard deviations of 0.5-5.4%.

  13. Software-Defined Architectures for Spectrally Efficient Cognitive Networking in Extreme Environments

    NASA Astrophysics Data System (ADS)

    Sklivanitis, Georgios

    The objective of this dissertation is the design, development, and experimental evaluation of novel algorithms and reconfigurable radio architectures for spectrally efficient cognitive networking in terrestrial, airborne, and underwater environments. Next-generation wireless communication architectures and networking protocols that maximize spectrum utilization efficiency in congested/contested or low-spectral-availability (extreme) communication environments can enable a rich body of applications with unprecedented societal impact. In recent years, underwater wireless networks have attracted significant attention for military and commercial applications including oceanographic data collection, disaster prevention, tactical surveillance, offshore exploration, and pollution monitoring. Unmanned aerial systems that are autonomously networked and fully mobile can assist humans in extreme or difficult-to-reach environments and provide cost-effective wireless connectivity for devices without infrastructure coverage. Cognitive radio (CR) has emerged as a promising technology to maximize spectral efficiency in dynamically changing communication environments by adaptively reconfiguring radio communication parameters. At the same time, the fast-developing technology of software-defined radio (SDR) platforms has enabled hardware realization of cognitive radio algorithms for opportunistic spectrum access. However, existing algorithmic designs and protocols for shared spectrum access do not effectively capture the interdependencies between radio parameters at the physical (PHY), medium-access control (MAC), and network (NET) layers of the network protocol stack. In addition, existing off-the-shelf radio platforms and SDR programmable architectures are far from fulfilling runtime adaptation and reconfiguration across the PHY, MAC, and NET layers. Spectrum allocation in cognitive networks with multi-hop communication requirements depends on the location, network traffic load, and interference profile at each network node. As a result, the development and implementation of algorithms and cross-layer reconfigurable radio platforms that can jointly treat space, time, and frequency as a unified resource to be dynamically optimized according to inter- and intra-network interference constraints is of fundamental importance. In the next chapters, we present novel algorithmic and software/hardware implementation developments toward the deployment of spectrally efficient terrestrial, airborne, and underwater wireless networks. In Chapter 1, we review the state of the art in commercially available SDR platforms, describe their software and hardware capabilities, and classify them based on their ability to enable rapid prototyping and advance experimental research in wireless networks. Chapter 2 discusses system design and implementation details toward real-time evaluation of a software-radio platform for all-spectrum cognitive channelization in the presence of narrowband or wideband primary stations. All-spectrum channelization is achieved by designing maximum signal-to-interference-plus-noise ratio (SINR) waveforms that span the whole continuum of the device-accessible spectrum, while satisfying peak power and interference temperature (IT) constraints for the secondary and primary users, respectively.
In Chapter 3, we introduce the concept of all-spectrum channelization based on max-SINR optimized sparse-binary waveforms, propose optimal and suboptimal waveform design algorithms, and evaluate their SINR and bit-error-rate (BER) performance in an SDR testbed. Chapter 4 considers the problem of channel estimation with minimal pilot signaling in multi-cell multi-user multi-input multi-output (MIMO) systems with very large antenna arrays at the base station, and proposes a least-squares (LS)-type algorithm that iteratively extracts channel and data estimates from a short record of data measurements. Our algorithmic developments toward spectrally efficient cognitive networking through joint optimization of channel-access code-waveforms and routes in a multi-hop network are described in Chapter 5. Algorithmic designs are software-optimized on heterogeneous multi-core general-purpose processor (GPP)-based SDR architectures by leveraging a novel software-radio framework that offers self-optimization and real-time adaptation capabilities at the PHY, MAC, and NET layers of the network protocol stack. Our system design approach is experimentally validated under realistic conditions in a large-scale hybrid ground-air testbed deployment. Chapter 6 reviews the state of the art in software and hardware platforms for underwater wireless networking and proposes a software-defined acoustic modem prototype that enables (i) cognitive reconfiguration of PHY/MAC parameters, and (ii) cross-technology communication adaptation. The proposed modem design is evaluated in terms of effective communication data rate in both water-tank and lake testbed setups. In Chapter 7, we present a novel receiver configuration for code-waveform-based multiple-access underwater communications. The proposed receiver is fully reconfigurable and executes (i) all-spectrum cognitive channelization, and (ii) combined synchronization, channel estimation, and demodulation. Experimental evaluation in terms of SINR and BER shows that all-spectrum channelization is a powerful proposition for underwater communications. At the same time, the proposed receiver design can significantly enhance bandwidth utilization. Finally, in Chapter 8, we focus on challenging practical issues that arise in underwater acoustic sensor network setups where co-located multi-antenna sensor deployment is not feasible due to power, computation, and hardware limitations, and we design, implement, and evaluate an underwater receiver structure that accounts for multiple carrier frequency and timing offsets in virtual (distributed) MIMO underwater systems.

  14. Graphite-based photovoltaic cells

    DOEpatents

    Lagally, Max; Liu, Feng

    2010-12-28

    The present invention uses lithographically patterned graphite stacks as the basic building elements of an efficient and economical photovoltaic cell. The basic design of the graphite-based photovoltaic cells includes a plurality of spatially separated graphite stacks, each comprising a plurality of vertically stacked, semiconducting graphene sheets (carbon nanoribbons) bridging electrically conductive contacts.

  15. A step-by-step solution for embedding user-controlled cines into educational Web pages.

    PubMed

    Cornfeld, Daniel

    2008-03-01

    The objective of this article is to introduce a simple method for embedding user-controlled cines into a Web page using a simple JavaScript. Step-by-step instructions are included and the source code is made available. This technique produces portable Web pages on which the user can scroll through cases as if seated at a PACS workstation. A simple JavaScript allows scrollable image stacks to be included on Web pages, so entire stacks of CT or MR images can be quickly and easily incorporated into online teaching files. The technique has potential uses in case presentations, online didactics, teaching archives, and resident testing.

  16. Investigation of Navier-Stokes Code Verification and Design Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar

    2004-01-01

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization, and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification, a least-squares extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. This dissertation focuses on the finite volume (FV) formulation. The interplay between these concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single-element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite-rate chemistry, and the k-ε turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design, whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas, and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi-objective optimization study is carried out using a geometric mean approach. Following this, sensitivity analyses with the aid of a variance-based non-parametric approach and partial correlation coefficients are conducted using data available from surrogate models of the objectives and the multi-objective optima, to identify the contribution of the design variables to the objective variability and to analyze the variability of the design variables and the objectives. In summary, the present dissertation offers insight into an improved coarse-to-fine grid extrapolation technique for Navier-Stokes computations and also suggests tools for a designer to conduct a design optimization study and related sensitivity analyses for a given design problem.

  17. Optimization of a matched-filter receiver for frequency hopping code acquisition in jamming

    NASA Astrophysics Data System (ADS)

    Pawlowski, P. R.; Polydoros, A.

    A matched-filter receiver for frequency hopping (FH) code acquisition is optimized when either partial-band tone jamming or partial-band Gaussian noise jamming is present. The receiver is matched to a segment of the FH code sequence, sums hard per-channel decisions to form a test, and uses multiple tests to verify acquisition. The length of the matched filter and the number of verification tests are fixed. Optimization then consists of choosing thresholds that maximize performance, based upon the receiver's degree of knowledge about the jammer ('side information'). Four levels of side information are considered, ranging from none to complete; the latter level results in a constant-false-alarm-rate (CFAR) design. At each level, performance sensitivity to threshold choice is analyzed. Robust thresholds are chosen to maximize performance as the jammer varies its power distribution, resulting in simple design rules that aid threshold selection. Performance results are presented, showing that optimum distributions of the jammer power over the total FH bandwidth exist.

  18. The MCUCN simulation code for ultracold neutron physics

    NASA Astrophysics Data System (ADS)

    Zsigmond, G.

    2018-02-01

    Ultracold neutrons (UCN) have very low kinetic energies of 0-300 neV and can therefore be stored in material or magnetic confinements for many hundreds of seconds. This makes them a very useful tool for probing fundamental symmetries of nature (for instance, charge-parity violation, via neutron electric dipole moment experiments) and for providing important parameters for Big Bang nucleosynthesis (neutron lifetime measurements). Improved precision experiments are under construction at new and planned UCN sources around the world. Monte Carlo (MC) simulations play an important role in the optimization of such systems with a large number of parameters, but also in the estimation of systematic effects, in the benchmarking of analysis codes, and as part of the analysis itself. The MCUCN code written at PSI has been extensively used for the optimization of the UCN source optics and in the optimization and analysis of (test) experiments within the nEDM project based at PSI. In this paper we present the main features of MCUCN and interesting benchmark and application examples.

  19. Common data buffer

    NASA Technical Reports Server (NTRS)

    Byrne, F.

    1981-01-01

    Time-shared interface speeds data processing in distributed computer network. Two-level high-speed scanning approach routes information to buffer, a portion of which is reserved for a series of "first-in, first-out" memory stacks. Buffer address structure and memory are protected from noise or failed components by error-correcting code. System is applicable to any computer or processing language.
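
    A minimal software sketch of the buffer's organization (the class and its interface are hypothetical; the original is a hardware design): part of the buffer is a set of first-in, first-out queues, one per destination, filled by the scanner and drained by the processors.

        from collections import deque

        class CommonDataBuffer:
            def __init__(self, destinations):
                self.fifos = {d: deque() for d in destinations}

            def scan_in(self, destination, word):
                self.fifos[destination].append(word)   # scanner routes incoming word

            def read_out(self, destination):
                q = self.fifos[destination]
                return q.popleft() if q else None      # first in, first out

        buf = CommonDataBuffer(["cpu_a", "cpu_b"])
        buf.scan_in("cpu_a", 0x1F)
        buf.scan_in("cpu_a", 0x2E)
        print(hex(buf.read_out("cpu_a")))              # 0x1f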

  20. Pattern Recognition of Momentary Mental Workload Based on Multi-Channel Electrophysiological Data and Ensemble Convolutional Neural Networks.

    PubMed

    Zhang, Jianhua; Li, Sunan; Wang, Rubin

    2017-01-01

    In this paper, we deal with the Mental Workload (MWL) classification problem based on measured physiological data. First, we discuss the optimal depth (i.e., the number of hidden layers) and parameter optimization algorithms for Convolutional Neural Networks (CNN). The base CNNs designed were tested according to five classification performance indices, namely Accuracy, Precision, F-measure, G-mean, and required training time. Then we developed an Ensemble Convolutional Neural Network (ECNN) to enhance the accuracy and robustness of the individual CNN models. For the ECNN design, three model aggregation approaches (weighted averaging, majority voting, and stacking) were examined, and a resampling strategy was used to enhance the diversity of the individual CNN models. The results of the MWL classification performance comparison indicated that, compared with traditional machine learning methods, the proposed ECNN framework can effectively improve MWL classification performance and features entirely automatic feature extraction and MWL classification.
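
    A minimal sketch of two of the three aggregation rules examined (the class-probability arrays are random stand-ins for the CNN outputs):

        import numpy as np

        def weighted_average(probs, w):
            # probs: [n_models, n_samples, n_classes]; w: per-model weights
            return np.tensordot(w, probs, axes=1).argmax(-1)

        def majority_vote(probs):
            votes = probs.argmax(-1)                   # per-model hard labels
            return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

        probs = np.random.default_rng(0).dirichlet(np.ones(3), size=(5, 8))
        print(weighted_average(probs, np.full(5, 0.2)))
        print(majority_vote(probs))
        # Stacking would instead train a meta-classifier on these per-model
        # outputs produced on held-out data.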

  1. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks

    PubMed Central

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-01-01

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitations of sensor nodes. Network coding can increase the network throughput of WSN dramatically owing to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristics inherent in WSN and on L1 optimization, we propose a novel scheme that successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi’s model so that it can correct the propagated errors in network coding, which typically pollute exactly 100% of the received messages. Taking advantage of the social characteristics inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of social characteristics, work in concert and can correct propagated errors affecting even exactly 100% of the messages in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668

  2. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    PubMed

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitations of sensor nodes. Network coding can increase the network throughput of WSN dramatically owing to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristics inherent in WSN and on L1 optimization, we propose a novel scheme that successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which typically pollute exactly 100% of the received messages. Taking advantage of the social characteristics inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of social characteristics, work in concert and can correct propagated errors affecting even exactly 100% of the messages in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
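
    A minimal sketch of the L1 step under standard basis-pursuit assumptions (the random matrix and sparsity pattern are illustrative; the paper's secret channel, trapping matrix, and trust mechanism are not modeled): a sparse error vector e is recovered from a syndrome s = H e by minimizing ||e||_1 subject to H e = s, posed as a linear program.

        import numpy as np
        from scipy.optimize import linprog

        def l1_error_recovery(H, s):
            # Split e = u - v with u, v >= 0; minimize sum(u) + sum(v) = ||e||_1
            m, n = H.shape
            c = np.ones(2 * n)
            res = linprog(c, A_eq=np.hstack([H, -H]), b_eq=s,
                          bounds=[(0, None)] * (2 * n), method="highs")
            return res.x[:n] - res.x[n:]

        rng = np.random.default_rng(0)
        H = rng.standard_normal((40, 80))              # random measurement matrix
        e_true = np.zeros(80)
        e_true[[3, 17, 42]] = [1.5, -2.0, 0.7]         # sparse corruption
        e_hat = l1_error_recovery(H, H @ e_true)
        print(np.allclose(e_hat, e_true, atol=1e-6))   # True: error located and corrected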

  3. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction

    PubMed Central

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance for the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance for medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the High Efficiency Video Coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of the ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform-coefficient adjustment and a quantization parameter (QP) selection process is designed to implement differentiated encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves coding performance, achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy the low-bit-rate compression requirements of modern medical communication systems. PMID:27814367

  4. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    PubMed

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance for the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance for medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the High Efficiency Video Coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of the ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform-coefficient adjustment and a quantization parameter (QP) selection process is designed to implement differentiated encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves coding performance, achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy the low-bit-rate compression requirements of modern medical communication systems.
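
    A toy sketch of ROI-aware QP selection (the offsets are hypothetical, not the paper's rule): coding-tree units inside the diagnostic ROI receive a lower QP (finer quantization) and background units a higher one, trading background fidelity for bit rate.

        import numpy as np

        def qp_map(roi_mask, base_qp=32, roi_delta=-6, bg_delta=4):
            return np.where(roi_mask, base_qp + roi_delta, base_qp + bg_delta)

        mask = np.zeros((4, 6), dtype=bool)   # 4 x 6 grid of coding-tree units
        mask[1:3, 2:5] = True                 # detected diagnostic region
        print(qp_map(mask))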

  5. The effect of code expanding optimizations on instruction cache design

    NASA Technical Reports Server (NTRS)

    Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

    It is shown that code-expanding optimizations have strong and non-intuitive implications for instruction cache design. Three types of code-expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance of small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access, so that a simple load-forwarding scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code-expanding optimizations jointly improve the performance of small caches and have little effect on large caches.
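
    The placement and inlining effects can be illustrated with a toy direct-mapped instruction cache simulator (addresses and cache geometry hypothetical): a loop that alternates between two distant code regions mapping to the same cache lines thrashes, while the same work laid out sequentially, as after inlining, fits and hits.

        def miss_ratio(trace, cache_lines=64, line_size=16):
            # Direct-mapped cache: one tag per line, indexed by block number.
            tags = [None] * cache_lines
            misses = 0
            for addr in trace:
                block = addr // line_size
                idx = block % cache_lines
                if tags[idx] != block:
                    tags[idx] = block
                    misses += 1
            return misses / len(trace)

        # Caller and callee 8 KB apart alias to the same cache lines; the
        # inlined layout keeps the loop body contiguous.
        call_trace = [a for _ in range(100)
                      for a in (*range(0, 64, 4), *range(8192, 8256, 4))]
        inline_trace = [a for _ in range(100) for a in range(0, 128, 4)]
        print(miss_ratio(call_trace), miss_ratio(inline_trace))   # 0.25 vs 0.0025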

  6. Noise activated bistable sensor based on chaotic system with output defined by temporal coding and firing rate

    NASA Astrophysics Data System (ADS)

    Korneta, Wojciech; Gomes, Iacyel

    2017-11-01

    Traditional bistable sensors use an external bias signal to drive the response between states, and their detection strategy is based on the output power spectral density or the residence time difference (RTD) between the two sensor states. Recently, noise-activated nonlinear dynamic sensors driven only by noise and based on the RTD technique have been proposed. Here, we present experimental results of dc voltage measurements by a noise-driven bistable sensor based on an electronic Chua's circuit operating in a chaotic regime where two single-scroll attractors coexist. The output of the sensor is quantified by the proportion of time the sensor stays in one state relative to the total observation time, and by the spike-count rate, with spikes defined by crossings between attractors. The relationship between the stimulus and each observable is obtained for different noise intensities, the usefulness of each coding scheme is discussed, and the optimal noise intensity for detection is indicated. It is shown that the obtained relationship is the same for any observation time when population coding is used. The optimal time window for detection and the optimal number of units in population coding are found. Our results may be useful for the analysis and understanding of neural activity and for designing bistable storage elements at length scales where thermal fluctuations increase drastically and the effect of noise must be taken into consideration.
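
    A toy Langevin version of residence-time coding in a generic double well (not the Chua's-circuit dynamics; all constants hypothetical) shows how the readout encodes a dc stimulus:

        import numpy as np

        def residence_fraction(stimulus, noise=0.6, dt=1e-3, steps=200_000, seed=1):
            # Euler-Maruyama for dx = (x - x**3 + stimulus) dt + noise dW;
            # the readout is the fraction of time spent in the right well (x > 0).
            rng = np.random.default_rng(seed)
            x, in_right = 0.0, 0
            for w in rng.standard_normal(steps):
                x += (x - x**3 + stimulus) * dt + noise * np.sqrt(dt) * w
                in_right += x > 0.0
            return in_right / steps

        for s in (-0.2, 0.0, 0.2):     # dc stimuli to be inferred from the readout
            print(s, residence_fraction(s))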

  7. Charge splitters and charge transport junctions based on guanine quadruplexes

    NASA Astrophysics Data System (ADS)

    Sha, Ruojie; Xiang, Limin; Liu, Chaoren; Balaeff, Alexander; Zhang, Yuqi; Zhang, Peng; Li, Yueqi; Beratan, David N.; Tao, Nongjian; Seeman, Nadrian C.

    2018-04-01

    Self-assembling circuit elements, such as current splitters or combiners at the molecular scale, require the design of building blocks with three or more terminals. A promising material for such building blocks is DNA, wherein multiple strands can self-assemble into multi-ended junctions, and nucleobase stacks can transport charge over long distances. However, nucleobase stacking is often disrupted at junction points, hindering electric charge transport between the two terminals of the junction. Here, we show that a guanine-quadruplex (G4) motif can be used as a connector element for a multi-ended DNA junction. By attaching specific terminal groups to the motif, we demonstrate that charges can enter the structure from one terminal at one end of a three-way G4 motif, and can exit from one of two terminals at the other end with minimal carrier transport attenuation. Moreover, we study four-way G4 junction structures by performing theoretical calculations to assist in the design and optimization of these connectors.

  8. Giant perpendicular exchange bias with antiferromagnetic MnN

    NASA Astrophysics Data System (ADS)

    Zilske, P.; Graulich, D.; Dunz, M.; Meinert, M.

    2017-05-01

    We investigated an out-of-plane exchange bias system that is based on the antiferromagnet MnN. Polycrystalline, highly textured film stacks of Ta/MnN/CoFeB/MgO/Ta were grown on SiOx by (reactive) magnetron sputtering and studied by x-ray diffraction and Kerr magnetometry. Nontrivial modifications of the exchange bias and the perpendicular magnetic anisotropy were observed as functions of both film thicknesses and field cooling temperatures. In optimized film stacks, a giant perpendicular exchange bias of 3600 Oe and a coercive field of 350 Oe were observed at room temperature. The effective interfacial exchange energy is estimated to be J_eff = 0.24 mJ/m^2 and the effective uniaxial anisotropy constant of the antiferromagnet is K_eff = 24 kJ/m^3. The maximum effective perpendicular anisotropy field of the CoFeB layer is H_ani = 3400 Oe. These values are larger than any previously reported values. These results possibly open a route to magnetically stable, exchange-biased perpendicularly magnetized spin valves.

  9. Optimization of chassis reallocation in doublestack container transportation system

    DOT National Transportation Integrated Search

    1995-08-01

    Cost efficiencies associated with double stacking truck containers on flatbed railcars have motivated carriers to increase their involvement in intermodal freight transportation. However, container-on-flatcar (COFC) service in rail-truck environments...

  10. Channeling of electron transport to improve collection efficiency in mesoporous titanium dioxide dye sensitized solar cell stacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fakharuddin, Azhar; Ahmed, Irfan; Yusoff, Mashitah M.

    2014-02-03

    Dye-sensitized solar cell (DSC) modules are generally made by interconnecting large photoelectrode strips with optimized thickness (∼14 μm) and show lower current density (J_SC) compared with their single cells. We found that the key to achieving higher J_SC in large-area devices is an optimized photoelectrode volume (V_D), viz., thickness and area, which facilitates electron channeling towards the working electrode. By imposing constraints on the electronic path in a DSC stack, we achieved a >50% increase in J_SC and a ∼60% increment in photoelectric conversion efficiency in photoelectrodes of similar V_D (∼3.36 × 10^-4 cm^3) without using any metallic grid or special interconnections.

  11. Design and optimization of a portable LQCD Monte Carlo code using OpenACC

    NASA Astrophysics Data System (ADS)

    Bonati, Claudio; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Calore, Enrico; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele

    The present panorama of HPC architectures is extremely heterogeneous, ranging from traditional multi-core CPU processors, supporting a wide class of applications but delivering moderate computing performance, to many-core Graphics Processing Units (GPUs), exploiting aggressive data-parallelism and delivering higher performance for streaming computing applications. In this scenario, code portability (and performance portability) becomes necessary for easy maintainability of applications; this is very relevant in scientific computing, where code changes are very frequent and keeping different code versions aligned is tedious and error-prone. In this work, we present the design and optimization of a state-of-the-art production-level LQCD Monte Carlo application, using the directive-based OpenACC programming model. OpenACC abstracts parallel programming to a descriptive level, relieving programmers from specifying how codes should be mapped onto the target architecture. We describe the implementation of a code fully written in OpenACC, and show that we are able to target several different architectures, including state-of-the-art traditional CPUs and GPUs, with the same code. We also measure performance, evaluating the computing efficiency of our OpenACC code on several architectures, comparing with GPU-specific implementations and showing that a good level of performance portability can be reached.

  12. Systolic array processing of the sequential decoding algorithm

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
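
    The stack algorithm itself reduces to a loop over a metric-ordered priority queue: always pop the best partial path and extend it one branch. A minimal software sketch (Python's binary heap stands in for the hardware systolic priority queue; the encoder and metric callbacks are placeholders to be supplied by the caller):

      # Stack-algorithm sequential decoding over a binary code tree.
      # encode_branch and metric are caller-supplied placeholders.
      import heapq

      def stack_decode(received, encode_branch, metric, max_depth):
          """Return (best_path_bits, metric) after max_depth levels.

          received      : received symbol blocks, one per tree level
          encode_branch : (path_bits, next_bit) -> expected symbols
          metric        : (expected, observed) -> additive metric (higher is better)
          """
          heap = [(-0.0, ())]                    # negate: heapq pops the smallest
          while heap:
              neg_m, path = heapq.heappop(heap)  # best path so far
              if len(path) == max_depth:
                  return path, -neg_m
              for bit in (0, 1):                 # extend by both branches
                  expected = encode_branch(path, bit)
                  m = -neg_m + metric(expected, received[len(path)])
                  heapq.heappush(heap, (-m, path + (bit,)))
          raise RuntimeError("stack exhausted")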

  13. Experimental study of an optimized PSP-OSTBC scheme with m-PPM in ultraviolet scattering channel for optical MIMO system.

    PubMed

    Han, Dahai; Gu, Yanjie; Zhang, Min

    2017-08-10

    An optimized scheme of pulse symmetrical position-orthogonal space-time block codes (PSP-OSTBC) is proposed and applied with m-pulse position modulation (m-PPM), without the use of a complex decoding algorithm, in an optical multi-input multi-output (MIMO) ultraviolet (UV) communication system. The proposed scheme breaks through the limitation of the traditional Alamouti code and is suitable for high-order m-PPM in a UV scattering channel, as verified by both simulation experiments and field tests with specific parameters. The performances of 1×1, 2×1, and 2×2 PSP-OSTBC systems with 4-PPM are compared experimentally as the optimal tradeoff between modulation and coding in practical application. Meanwhile, the feasibility of the proposed scheme for 8-PPM is examined by a simulation experiment as well. The results suggest that the proposed scheme makes the system insensitive to the influence of path loss, with a larger channel capacity, and that a higher diversity gain and coding gain can be achieved with a simple decoding algorithm by exploiting the orthogonality of m-PPM in an optical-MIMO-based ultraviolet scattering channel.

  14. Theoretical Evidence for the Stronger Ability of Thymine to Disperse SWCNT than Cytosine and Adenine: self-stacking of DNA bases vs their cross-stacking with SWCNT

    PubMed Central

    Wang, Yixuan

    2008-01-01

    Self-stacking of the four DNA bases, adenine (A), cytosine (C), guanine (G) and thymine (T), and their cross-stacking with (5,5) as well as (10,0) single-walled carbon nanotubes (SWCNTs) were extensively investigated with a novel hybrid DFT method, MPWB1K/cc-pVDZ. The binding energies were further corrected with the MP2/6-311++G(d,p) method in both gas phase and aqueous solution, where the solvent effects were included with the conductor-like polarized continuum model (CPCM) and UAHF radii. The strongest self-stacking of G and A adopts a displaced anti-parallel configuration, while the un-displaced or "eclipsed" anti-parallel configuration is the most stable for C and T. In the gas phase the self-stacking of nucleobases decreases in the sequence G>A>C>T, while because of quite different solvent effects their self-stacking in aqueous solution exhibits a distinct sequence, A>G>T>C. For a given base, cross-stacking is stronger than self-stacking in both gas phase and aqueous solution. Binding energy for cross-stacking in the gas phase varies as G>A>T>C for both (10,0) and (5,5) SWCNTs, and the binding of the four nucleobases to (10,0) is slightly stronger than to (5,5) SWCNT, by a range of 0.1–0.5 kcal/mol. The cross-stacking in aqueous solution varies differently from that in the gas phase: A>G>T>C for (10,0) SWCNT and G>A>T>C for (5,5) SWCNT. It is suggested that the ability of nucleobases to disperse SWCNT depends on the relative strength (ΔΔE_bin,sol) of self-stacking and cross-stacking with SWCNT in aqueous solution. Of the four investigated nucleobases, thymine (T) exhibits the highest ΔΔE_bin,sol, which explains well the experimental finding that T functionalizes SWCNT more efficiently than C and A. PMID:18946514

  15. Traleika Glacier X-Stack Extension Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fryman, Joshua

    The XStack Extension Project continued along the direction of the XStack program in exploring the software tools and frameworks to support a task-based community runtime towards the goal of Exascale programming. The momentum built as part of the XStack project, with the development of the task-based Open Community Runtime (OCR) and related tools, was carried through during the XStack Extension with the focus areas of easing application development, improving performance and supporting more features. The infrastructure set up for community-driven open-source development continued to be used towards these areas, with continued co-development of runtime and applications. A variety of OCR programming environments were studied, as described in the sections on Revolutionary Programming Environments & Applications, to assist with application development on OCR, and we developed the OCR Translator, a ROSE-based source-to-source compiler that parses high-level annotations in an MPI program to generate equivalent OCR code. Figure 2 compares the number of OCR objects needed to generate the 2D stencil workload using the translator against manual approaches based on an SPMD library or native coding. The rate of increase with the translator, as the number of ranks increases, is consistent with the other approaches. This is explored further in the section on the OCR Translator.

  16. DUKSUP: A Computer Program for High Thrust Launch Vehicle Trajectory Design and Optimization

    NASA Technical Reports Server (NTRS)

    Williams, C. H.; Spurlock, O. F.

    2014-01-01

    From the late 1960's through 1997, the leadership of NASA's Intermediate and Large class unmanned expendable launch vehicle projects resided at the NASA Lewis (now Glenn) Research Center (LeRC). One of LeRC's primary responsibilities --- trajectory design and performance analysis --- was accomplished by an internally-developed analytic three dimensional computer program called DUKSUP. Because of its Calculus of Variations-based optimization routine, this code was generally more capable of finding optimal solutions than its contemporaries. A derivation of optimal control using the Calculus of Variations is summarized including transversality, intermediate, and final conditions. The two point boundary value problem is explained. A brief summary of the code's operation is provided, including iteration via the Newton-Raphson scheme and integration of variational and motion equations via a 4th order Runge-Kutta scheme. Main subroutines are discussed. The history of the LeRC trajectory design efforts in the early 1960's is explained within the context of supporting the Centaur upper stage program. How the code was constructed based on the operation of the Atlas/Centaur launch vehicle, the limits of the computers of that era, the limits of the computer programming languages, and the missions it supported are discussed. The vehicles DUKSUP supported (Atlas/Centaur, Titan/Centaur, and Shuttle/Centaur) are briefly described. The types of missions, including Earth orbital and interplanetary, are described. The roles of flight constraints and their impact on launch operations are detailed (such as jettisoning hardware on heating, Range Safety, ground station tracking, and elliptical parking orbits). The computer main frames on which the code was hosted are described. The applications of the code are detailed, including independent check of contractor analysis, benchmarking, leading edge analysis, and vehicle performance improvement assessments. Several of DUKSUP's many major impacts on launches are discussed including Intelsat, Voyager, Pioneer Venus, HEAO, Galileo, and Cassini.

  17. DUKSUP: A Computer Program for High Thrust Launch Vehicle Trajectory Design and Optimization

    NASA Technical Reports Server (NTRS)

    Spurlock, O. Frank; Williams, Craig H.

    2015-01-01

    From the late 1960s through 1997, the leadership of NASA's Intermediate and Large class unmanned expendable launch vehicle projects resided at the NASA Lewis (now Glenn) Research Center (LeRC). One of LeRC's primary responsibilities --- trajectory design and performance analysis --- was accomplished by an internally-developed analytic three dimensional computer program called DUKSUP. Because of its Calculus of Variations-based optimization routine, this code was generally more capable of finding optimal solutions than its contemporaries. A derivation of optimal control using the Calculus of Variations is summarized including transversality, intermediate, and final conditions. The two point boundary value problem is explained. A brief summary of the code's operation is provided, including iteration via the Newton-Raphson scheme and integration of variational and motion equations via a 4th order Runge-Kutta scheme. Main subroutines are discussed. The history of the LeRC trajectory design efforts in the early 1960s is explained within the context of supporting the Centaur upper stage program. How the code was constructed based on the operation of the Atlas/Centaur launch vehicle, the limits of the computers of that era, the limits of the computer programming languages, and the missions it supported are discussed. The vehicles DUKSUP supported (Atlas/Centaur, Titan/Centaur, and Shuttle/Centaur) are briefly described. The types of missions, including Earth orbital and interplanetary, are described. The roles of flight constraints and their impact on launch operations are detailed (such as jettisoning hardware on heating, Range Safety, ground station tracking, and elliptical parking orbits). The computer main frames on which the code was hosted are described. The applications of the code are detailed, including independent check of contractor analysis, benchmarking, leading edge analysis, and vehicle performance improvement assessments. Several of DUKSUP's many major impacts on launches are discussed including Intelsat, Voyager, Pioneer Venus, HEAO, Galileo, and Cassini.

  18. An integrated optimum design approach for high speed prop rotors

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Mccarthy, Thomas R.

    1995-01-01

    The objective is to develop an optimization procedure for high-speed and civil tilt-rotors by coupling all of the necessary disciplines within a closed-loop optimization procedure. Both simplified and comprehensive analysis codes are used for the aerodynamic analyses. The structural properties are calculated using in-house developed algorithms for both isotropic and composite box beam sections. There are four major objectives of this study. (1) Aerodynamic optimization: The effects of blade aerodynamic characteristics on cruise and hover performance of prop-rotor aircraft are investigated using the classical blade element momentum approach with corrections for the high lift capability of rotors/propellers. (2) Coupled aerodynamic/structures optimization: A multilevel hybrid optimization technique is developed for the design of prop-rotor aircraft. The design problem is decomposed into a level for improved aerodynamics with continuous design variables and a level with discrete variables to investigate composite tailoring. The aerodynamic analysis is based on that developed in objective 1 and the structural analysis is performed using an in-house code which models a composite box beam. The results are compared to both a reference rotor and the optimum rotor found in the purely aerodynamic formulation. (3) Multipoint optimization: The multilevel optimization procedure of objective 2 is extended to a multipoint design problem. Hover, cruise, and take-off are the three flight conditions simultaneously maximized. (4) Coupled rotor/wing optimization: Using the comprehensive rotary wing code CAMRAD, an optimization procedure is developed for the coupled rotor/wing performance in high speed tilt-rotor aircraft. The developed procedure contains design variables which define the rotor and wing planforms.

  19. Numerical method to optimize the polar-azimuthal orientation of infrared superconducting-nanowire single-photon detectors.

    PubMed

    Csete, Mária; Sipos, Áron; Najafi, Faraz; Hu, Xiaolong; Berggren, Karl K

    2011-11-01

    A finite-element method for calculating the illumination dependence of absorption in three-dimensional nanostructures is presented, based on the radio frequency module of the Comsol Multiphysics software package (Comsol AB). This method is capable of numerically determining the optical response and near-field distribution of subwavelength periodic structures as a function of illumination orientation, specified by polar angle, φ, and azimuthal angle, γ. The method was applied to determine the illumination-angle-dependent absorptance in cavity-based superconducting-nanowire single-photon detector (SNSPD) designs. Niobium-nitride stripes based on dimensions of conventional SNSPDs, integrated with a ~quarter-wavelength hydrogen-silsesquioxane-filled nano-optical cavity and covered by a thin gold film acting as a reflector, were illuminated from below by p-polarized light in this study. The numerical results were compared to results from complementary transfer-matrix-method calculations on composite layers made of analogous film stacks. This comparison helped to uncover the optical phenomena contributing to the appearance of extrema in the optical response. This paper presents an approach to optimizing the absorptance of different sensing and detecting devices via simultaneous numerical optimization of the polar and azimuthal illumination angles. © 2011 Optical Society of America

  20. Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.

    PubMed

    Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin

    2005-03-01

    This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ≈ c₀2^(−c₁R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is of O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree-coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree-coding scheme outperforms JPEG2000 by about 1 dB for real images, like cameraman, at low rates of around 0.15 bpp.
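
    The prune half of the prune-and-join strategy is a single bottom-up pass comparing Lagrangian costs J = D + λR at each node. A sketch under assumed node fields (the join step, which merges similar neighboring segments, is omitted):

      # R-D optimal pruning of a binary segmentation tree.
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Node:
          dist: float                  # distortion of best polynomial fit on segment
          rate: float                  # bits needed to code that fit
          left: Optional["Node"] = None
          right: Optional["Node"] = None

      def prune(node: Node, lam: float) -> float:
          """Prune in place; return the subtree's minimal Lagrangian cost."""
          own = node.dist + lam * node.rate
          if node.left is None:        # leaf: nothing to prune
              return own
          split = prune(node.left, lam) + prune(node.right, lam)
          if own <= split:             # one model for the whole segment is cheaper
              node.left = node.right = None
              return own
          return split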

  1. HERCULES: A Pattern Driven Code Transformation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing

    2012-01-01

    New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist separate the two concerns, which improves code maintenance and facilitates performance optimization. The system combines three technologies (code patterns, transformation scripts, and compiler plugins) to provide the scientist with an environment for quickly implementing code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation, and an initial evaluation of HERCULES.

  2. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and with results from previous studies, supporting the utility of linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).
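
    The linked flow-model-plus-parameter-estimation loop can be mimicked in a few lines: a toy linear forward model maps zoned recharge rates to base flows at four gages, and a bounded least-squares solver plays the role of UCODE. All numbers are invented for illustration:

      # Calibrating zoned recharge rates to gaged base flows (toy analogue).
      import numpy as np
      from scipy.optimize import least_squares

      # Contributing area (km^2) of each recharge zone upstream of each gage.
      area = np.array([[30.0,  5.0],
                       [10.0, 25.0],
                       [20.0, 20.0],
                       [ 5.0, 35.0]])
      observed = np.array([9.0, 8.5, 10.0, 9.5])   # median annual base flows

      def residuals(recharge):                      # simulated minus observed
          return area @ recharge - observed

      fit = least_squares(residuals, x0=[0.2, 0.2], bounds=(0.0, 1.0))
      print("recharge rate per zone:", fit.x)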

  3. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    NASA Technical Reports Server (NTRS)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical display of engine motions and of pressures and temperatures is included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified-piston-motion isothermal analysis, one for three adjustable inputs and one for four. Also, two optimization searches for calculated piston motion are presented, for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  4. Fuel management optimization using genetic algorithms and expert knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1996-09-01

    The CIGARO fuel management optimization code based on genetic algorithms is described and tested. The test problem optimized the core lifetime for a pressurized water reactor with a penalty function constraint on the peak normalized power. A bit-string genotype encoded the loading patterns, and genotype bias was reduced with additional bits. Expert knowledge about fuel management was incorporated into the genetic algorithm. Regional crossover exchanged physically adjacent fuel assemblies and improved the optimization slightly. Biasing the initial population toward a known priority table significantly improved the optimization.
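
    The regional crossover idea, exchanging a physically contiguous block of the core map so that adjacent assemblies travel together, is easy to sketch. The rectangular region below stands in for whatever adjacency CIGARO actually used, and any inventory-repair step is omitted:

      # Regional crossover on a 2D core map (lists of lists of equal shape).
      import random

      def regional_crossover(parent_a, parent_b):
          """Child = parent_b, with a random rectangle copied from parent_a."""
          rows, cols = len(parent_a), len(parent_a[0])
          r0, r1 = sorted(random.sample(range(rows + 1), 2))
          c0, c1 = sorted(random.sample(range(cols + 1), 2))
          child = [row[:] for row in parent_b]
          for r in range(r0, r1):
              child[r][c0:c1] = parent_a[r][c0:c1]
          return child

      a = [[1, 1, 1, 1] for _ in range(4)]
      b = [[2, 2, 2, 2] for _ in range(4)]
      print(regional_crossover(a, b))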

  5. Matrix-Free Polynomial-Based Nonlinear Least Squares Optimized Preconditioning and its Application to Discontinuous Galerkin Discretizations of the Euler Equations

    DTIC Science & Technology

    2015-06-01

    Our method constructs a polynomial preconditioner using a nonlinear least squares (NLLS) algorithm. Such a preconditioner can be very attractive in scenarios where one must repeatedly solve a large system of linear equations and has a highly efficient parallel code for applying the underlying fixed linear operator.

  6. Computer code for controller partitioning with IFPC application: A user's manual

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip H.; Yarkhan, Asim

    1994-01-01

    A user's manual for the computer code for partitioning a centralized controller into decentralized subcontrollers with applicability to Integrated Flight/Propulsion Control (IFPC) is presented. Partitioning of a centralized controller into two subcontrollers is described, and the algorithm on which the code is based is discussed. The algorithm uses parameter optimization of a cost function, which is described. The major data structures and functions are described, and specific usage instructions are given. The user is led through an example of an IFPC application.

  7. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects, or so-called video object planes (VOPs), that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects at different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties and psycho-visual characteristics, such that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model, and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of a video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification-model bit allocation and with optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
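
    In its simplest form, priority-driven allocation gives each object a small floor plus a share of the remaining frame budget proportional to its attention-derived priority. A sketch with invented weights (the paper derives its priorities automatically from the visual attention model):

      # Priority-weighted bit allocation across video objects.
      def allocate_bits(budget, priorities, floor=0.05):
          """budget: total bits; priorities: {object: weight > 0}."""
          base = {k: floor * budget / len(priorities) for k in priorities}
          remaining = budget - sum(base.values())
          total_w = sum(priorities.values())
          return {k: base[k] + remaining * w / total_w
                  for k, w in priorities.items()}

      print(allocate_bits(10000, {"speaker": 0.6, "logo": 0.3, "background": 0.1}))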

  8. The WISGSK: A computer code for the prediction of a multistage axial compressor performance with water ingestion

    NASA Technical Reports Server (NTRS)

    Tsuchiya, T.; Murthy, S. N. B.

    1982-01-01

    A computer code is presented for the prediction of off-design axial-flow compressor performance with water ingestion. Four processes were considered to account for the aero-thermo-mechanical interactions during operation with air-water droplet mixture flow: (1) blade performance change, (2) centrifuging of water droplets, (3) heat and mass transfer between the gaseous and the liquid phases, and (4) droplet size redistribution due to break-up. Stage and compressor performance are obtained by a stage-stacking procedure using representative velocity diagrams at rotor inlet and outlet mean radii. The Code has options for performance estimation with (1) gas mixtures and (2) gas-water droplet mixtures, and can therefore take into account the humidity present in ambient conditions. A test case illustrates the method of using the Code. The Code follows closely the methodology and architecture of the NASA-STGSTK Code for the estimation of axial-flow compressor performance with air flow.
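
    The stage-stacking procedure is essentially a fold over stages: the outlet state of one stage becomes the inlet of the next. A dry-air sketch with a placeholder per-stage map (the water-ingestion corrections of the actual Code are omitted):

      # Generic stage stacking: chain per-stage maps through the compressor.
      def stack_stages(inlet, stage_maps):
          """inlet: {'T': K, 'P': kPa}; stage_maps: callables -> (PR, efficiency)."""
          state = dict(inlet)
          for stage_map in stage_maps:
              pr, eff = stage_map(state)                  # stage pressure ratio, eff.
              t_ideal = state["T"] * pr ** (0.4 / 1.4)    # isentropic, gamma = 1.4
              state["T"] += (t_ideal - state["T"]) / eff  # actual temperature rise
              state["P"] *= pr
          return state

      toy_stage = lambda s: (1.3, 0.85)                    # constant toy stage map
      print(stack_stages({"T": 288.0, "P": 101.3}, [toy_stage] * 8))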

  9. Particle-in-cell code library for numerical simulation of the ECR source plasma

    NASA Astrophysics Data System (ADS)

    Shirkov, G.; Alexandrov, V.; Preisendorf, V.; Shevtsov, V.; Filippov, A.; Komissarov, R.; Mironov, V.; Shirkova, E.; Strekalovsky, O.; Tokareva, N.; Tuzikov, A.; Vatulin, V.; Vasina, E.; Fomin, V.; Anisimov, A.; Veselov, R.; Golubev, A.; Grushin, S.; Povyshev, V.; Sadovoi, A.; Donskoi, E.; Nakagawa, T.; Yano, Y.

    2003-05-01

    The project "Numerical simulation and optimization of ion accumulation and production in multicharged ion sources" is funded by the International Science and Technology Center (ISTC). A summary of recent project development and the first version of a computer code library for simulation of electron-cyclotron resonance (ECR) source plasmas based on the particle-in-cell method are presented.

  10. EGG: Empirical Galaxy Generator

    NASA Astrophysics Data System (ADS)

    Schreiber, C.; Elbaz, D.; Pannella, M.; Merlin, E.; Castellano, M.; Fontana, A.; Bourne, N.; Boutsia, K.; Cullen, F.; Dunlop, J.; Ferguson, H. C.; Michałowski, M. J.; Okumura, K.; Santini, P.; Shu, X. W.; Wang, T.; White, C.

    2018-04-01

    The Empirical Galaxy Generator (EGG) generates fake galaxy catalogs and images with realistic positions, morphologies and fluxes from the far-ultraviolet to the far-infrared. The catalogs are generated by egg-gencat and stored in binary FITS tables (column oriented). Another program, egg-2skymaker, is used to convert the generated catalog into ASCII tables suitable for ingestion by SkyMaker (ascl:1010.066) to produce realistic high resolution images (e.g., Hubble-like), while egg-gennoise and egg-genmap can be used to generate the low resolution images (e.g., Herschel-like). These tools can be used to test source extraction codes, or to evaluate the reliability of any map-based science (stacking, dropout identification, etc.).

  11. Measurement of excitation functions in alpha induced reactions on natCu

    NASA Astrophysics Data System (ADS)

    Shahid, Muhammad; Kim, Kwangsoo; Kim, Guinyun; Zaman, Muhammad; Nadeem, Muhammad

    2015-09-01

    The excitation functions of 66,67,68Ga, 62,63,65Zn, 61,64Cu, and 58,60Co radionuclides in the natCu(α, x) reaction were measured in the energy range from 15 to 42 MeV by using a stacked-foil activation method at the MC-50 cyclotron of the Korean Institute of Radiological and Medical Sciences. The measured results were compared with the literature data as well as the theoretical values obtained from the TENDL-2013 and TENDL-2014 libraries based on the TALYS-1.6 code. The integral yields for thick targets of the produced radionuclides were also determined from the measured excitation functions and the stopping power of natural copper.

  12. S-Genius, a universal software platform with versatile inverse problem resolution for scatterometry

    NASA Astrophysics Data System (ADS)

    Fuard, David; Troscompt, Nicolas; El Kalyoubi, Ismael; Soulan, Sébastien; Besacier, Maxime

    2013-05-01

    S-Genius is a new universal scatterometry platform, which gathers all the LTM-CNRS know-how regarding rigorous electromagnetic computation and several inverse-problem solver solutions. This software platform is built to be a user-friendly, light, swift, accurate, user-oriented scatterometry tool, compatible with any ellipsometric measurements to fit and any type of pattern. It aims to combine a set of inverse-problem solver capabilities — via adapted Levenberg-Marquardt optimization, Kriging, and Neural Network solutions — that greatly improve the reliability and the velocity of the solution determination. Furthermore, as the model solution is mainly vulnerable to the materials' optical properties, S-Genius may be coupled with an innovative determination of material refractive indices. This paper focuses in a little more detail on the modified Levenberg-Marquardt optimization, one of the indirect-method solvers, built up in parallel with the overall S-Genius software coding by yours truly. This modified Levenberg-Marquardt optimization corresponds to a Newton algorithm with a damping parameter adapted to the definition domains of the optimized parameters. Currently, S-Genius is technically ready for scientific collaboration: python-powered, multi-platform (Windows/Linux/macOS), multi-core, ready for 2D (infinite features along the direction perpendicular to the incident plane), conical, and 3D-feature computation, compatible with all kinds of input data from any possible ellipsometers (angle- or wavelength-resolved) or reflectometers, and widely used in our laboratory for resist trimming studies, etching feature characterization (such as complex stacks) and nano-imprint lithography measurements, for instance. The work on the kriging solver, the neural network solver and the determination of material refractive indices is done (or about to be) by other LTM members and is about to be integrated into the S-Genius platform.

  13. A Degree Distribution Optimization Algorithm for Image Transmission

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Yang, Junjie

    2016-09-01

    Luby Transform (LT) codes are the first practical implementation of digital fountain codes. The coding behavior of an LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of finite encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme of sparse degrees for LT codes is introduced. Then the probability distribution over the selected degrees is optimized. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause the loss of synchronization between the encoder and the decoder. Therefore the proposed algorithm is designed for the image transmission setting. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising: compared with an LT code with the robust soliton distribution, the proposed algorithm clearly improves the final quality of the recovered images with the same overhead.
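
    For reference, the robust soliton distribution that the proposed algorithm is benchmarked against can be written down directly; k is the number of input symbols and c, delta are the standard LT tuning parameters:

      # Robust soliton degree distribution for LT encoding.
      import math, random

      def robust_soliton(k, c=0.1, delta=0.5):
          s = c * math.log(k / delta) * math.sqrt(k)
          rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
          tau = [0.0] * (k + 1)
          for d in range(1, int(k / s)):
              tau[d] = s / (k * d)
          tau[int(k / s)] = s * math.log(s / delta) / k
          z = sum(rho) + sum(tau)                  # normalization constant
          return [(rho[d] + tau[d]) / z for d in range(k + 1)]

      dist = robust_soliton(100)
      degree = random.choices(range(101), weights=dist)[0]  # degree of one codeword
      print(degree)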

  14. The Geoinformatica free and open source software stack

    NASA Astrophysics Data System (ADS)

    Jolma, A.

    2012-04-01

    The Geoinformatica free and open source software (FOSS) stack is based mainly on three established FOSS components, namely GDAL, GTK+, and Perl. GDAL provides access to a very large selection of geospatial data formats and data sources, a generic geospatial data model, and a large collection of geospatial analytical and processing functionality. GTK+ and the Cairo graphics library provide generic graphics and graphical user interface capabilities. Perl is a programming language for which there is a very large set of FOSS modules for a wide range of purposes and which can be used as an integrative tool for building applications. In the Geoinformatica stack, data storages such as the FOSS RDBMS PostgreSQL, with its geospatial extension PostGIS, can be used below the three above-mentioned components. The top layer of Geoinformatica consists of a C library and several Perl modules. The C library comprises a general purpose raster algebra library, hydrological terrain analysis functions, and visualization code. The Perl modules define a generic visualized geospatial data layer and subclasses for raster and vector data and graphs. The hydrological terrain functions are already rather old and suffer, for example, from the requirement of in-memory rasters. Newer research conducted using the platform includes basic geospatial simulation modeling, visualization of ecological data, linking with a Bayesian network engine for spatial risk assessment in coastal areas, and developing standards-based distributed water resources information systems on the Internet. The Geoinformatica stack constitutes a platform for geospatial research which is targeted towards custom analytical tools, prototyping, and linking with external libraries. Writing custom analytical tools is supported by the Perl language and the large collection of tools available especially in GDAL and the Perl modules. Prototyping is supported by the GTK+ library, the GUI tools, and the support for object-oriented programming in Perl. New feature types, geospatial layer classes, and tools as extensions with specific features can be defined, used, and studied. Linking with external libraries is possible using the Perl foreign function interface tools or generic tools such as SWIG. We are interested in implementing and testing the linking of Geoinformatica with existing or new, more specific hydrological FOSS.

  15. Optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme for Intel Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on a Xeon Phi 7120P by a factor of 1.3x.

  16. Development of a stacked ensemble model for forecasting and analyzing daily average PM2.5 concentrations in Beijing, China.

    PubMed

    Zhai, Binxu; Chen, Jianguo

    2018-04-18

    A stacked ensemble model is developed for forecasting and analyzing the daily average concentrations of fine particulate matter (PM2.5) in Beijing, China. Special feature extraction procedures, including those of simplification, polynomial, transformation and combination, are conducted before modeling to identify potentially significant features based on an exploratory data analysis. Stability feature selection and tree-based feature selection methods are applied to select important variables and evaluate the degrees of feature importance. Single models including LASSO, Adaboost, XGBoost and a multi-layer perceptron optimized by the genetic algorithm (GA-MLP) are established in the level-0 space and are then integrated by support vector regression (SVR) in the level-1 space via stacked generalization. A feature importance analysis reveals that nitrogen dioxide (NO2) and carbon monoxide (CO) concentrations measured in the city of Zhangjiakou are the most important pollution factors for forecasting PM2.5 concentrations. Local extreme wind speeds and maximal wind speeds are found to be the meteorological factors with the greatest effect on the cross-regional transportation of contaminants. Pollutants found in the cities of Zhangjiakou and Chengde have a stronger impact on air quality in Beijing than other surrounding factors. Our model evaluation shows that the ensemble model generally performs better than a single nonlinear forecasting model when applied to new data, with a coefficient of determination (R²) of 0.90 and a root mean squared error (RMSE) of 23.69 μg/m³. For single-pollutant grade recognition, the proposed model performs better when applied to days characterized by good air quality than to days registering high levels of pollution. The overall classification accuracy is 73.93%, with most misclassifications made among adjacent categories. The results demonstrate the interpretability and generalizability of the stacked ensemble model. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
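
    A minimal analogue of the stacking step: level-0 learners are combined by an SVR meta-learner trained on their out-of-fold predictions. The features and targets below are synthetic, and two scikit-learn regressors stand in for the paper's LASSO/Adaboost/XGBoost/GA-MLP ensemble:

      # Stacked generalization with an SVR meta-learner (toy data).
      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 8))            # stand-in pollutant/weather features
      y = 3 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=500)

      level0 = [Lasso(alpha=0.01), GradientBoostingRegressor()]
      # Level-1 design matrix: out-of-fold predictions of each level-0 model.
      Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in level0])
      meta = SVR().fit(Z, y)

      for m in level0:                          # refit level-0 models on all data
          m.fit(X, y)
      z_new = np.column_stack([m.predict(X[:5]) for m in level0])
      print(meta.predict(z_new))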

  17. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  18. CometBoards Users Manual Release 1.0

    NASA Technical Reports Server (NTRS)

    Guptill, James D.; Coroneos, Rula M.; Patnaik, Surya N.; Hopkins, Dale A.; Berke, Lazlo

    1996-01-01

    Several nonlinear mathematical programming algorithms for structural design applications are available at present. These include the sequence of unconstrained minimizations technique, the method of feasible directions, and the sequential quadratic programming technique. The optimality criteria technique and the fully utilized design concept are two other structural design methods. A project was undertaken to bring all these design methods under a common computer environment so that a designer can select any one of these tools that may be suitable for his/her application. To facilitate selection of a design algorithm, to validate and check out the computer code, and to ascertain the relative merits of the design tools, modest finite element structural analysis programs based on the concept of stiffness and integrated force methods have been coupled to each design method. The code that contains both these design and analysis tools, by reading input information from analysis and design data files, can cast the design of a structure as a minimum-weight optimization problem. The code can then solve it with a user-specified optimization technique and a user-specified analysis method. This design code is called CometBoards, which is an acronym for Comparative Evaluation Test Bed of Optimization and Analysis Routines for the Design of Structures. This manual describes for the user a step-by-step procedure for setting up the input data files and executing CometBoards to solve a structural design problem. The manual includes the organization of CometBoards; instructions for preparing input data files; the procedure for submitting a problem; illustrative examples; and several demonstration problems. A set of 29 structural design problems have been solved by using all the optimization methods available in CometBoards. A summary of the optimum results obtained for these problems is appended to this users manual. CometBoards, at present, is available for Posix-based Cray and Convex computers, Iris and Sun workstations, and the VM/CMS system.

  19. Multimodal Freight Distribution to Support Increased Port Operations

    DOT National Transportation Integrated Search

    2016-10-01

    To support improved port operations, three different aspects of multimodal freight distribution are investigated: (i) Efficient load planning for double stack trains at inland ports; (ii) Optimization of a multimodal network for environmental sustain...

  20. Final Technical Report: Affordable, High-Performance, Intermediate Temperature Solid Oxide Fuel Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blackburn, Bryan M.; Bishop, Sean; Gore, Colin

    In this project, we improved the power output and voltage efficiency of our intermediate temperature solid oxide fuel cells (IT-SOFCs) with a focus on ~600 °C operation. At these temperatures and with the increased power density (i.e., fewer cells for same power output), the stack cost should be greatly reduced while extending durability. Most SOFC stacks operate at temperatures greater than 800 °C. This can greatly increase the cost of the system (stacks and BOP) as well as maintenance costs since the most common degradation mechanisms are thermally driven. Our approach uses no platinum group metal (PGM) materials and themore » lower operating temperature allows use of simple stainless steel interconnects and commercial off-the-shelf gaskets in the stack. Furthermore, for combined heating and power (CHP) applications the stack exhaust still provides “high quality” waste heat that can be recovered and used in a chiller or boiler. The anticipated performance, durability, and resulting cost improvements (< $700/kWe) will also move us closer to reaching the full potential of this technology for distributed generation (DG) and residential/commercial CHP. This includes eventual extension to cleaner, more efficient portable generators, auxiliary power units (APUs), and range extenders for transportation. The research added to the understanding of the area investigated by exploring various methods for increasing power density (Watts/square centimeter of active area in each cell) and increasing cell efficiency (increasing the open circuit voltage, or cell voltage with zero external electrical current). The results from this work demonstrated an optimized cell that had greater than 1 W/cm2 at 600 °C and greater than 1.6 W/cm2 at 650 °C. This was demonstrated in large format sizes using both 5 cm by 5 cm and 10 cm by 10 cm cells. Furthermore, this work demonstrated that high stability (no degradation over > 500 hours) can be achieved together with high performance in large format cells as large as 10 cm by 10 cm when operated at ~600 °C. The project culminated in the demonstration of a 12-cell stack using the porous anode-based SOFC technology.« less

  1. The InSAR Scientific Computing Environment

    NASA Technical Reports Server (NTRS)

    Rosen, Paul A.; Gurrola, Eric; Sacco, Gian Franco; Zebker, Howard

    2012-01-01

    We have developed a flexible and extensible Interferometric SAR (InSAR) Scientific Computing Environment (ISCE) for geodetic image processing. ISCE was designed from the ground up as a geophysics community tool for generating stacks of interferograms that lend themselves to various forms of time-series analysis, with attention paid to accuracy, extensibility, and modularity. The framework is python-based, with code elements rigorously componentized by separating input/output operations from the processing engines. This allows greater flexibility and extensibility in the data models, and creates algorithmic code that is less susceptible to unnecessary modification when new data types and sensors are available. In addition, the components support provenance and checkpointing to facilitate reprocessing and algorithm exploration. The algorithms, based on legacy processing codes, have been adapted to assume a common reference track approach for all images acquired from nearby orbits, simplifying and systematizing the geometry for time-series analysis. The framework is designed to easily allow user contributions, and is distributed for free use by researchers. ISCE can process data from the ALOS, ERS, EnviSAT, Cosmo-SkyMed, RadarSAT-1, RadarSAT-2, and TerraSAR-X platforms, starting from Level-0 or Level 1 as provided from the data source, and going as far as Level 3 geocoded deformation products. With its flexible design, it can be extended with raw/meta data parsers to enable it to work with radar data from other platforms.

  2. Comparative study on sample stacking by moving reaction boundary formed with weak acid and weak or strong base in capillary electrophoresis: II. Experiments.

    PubMed

    Zhang, Wei; Fan, Liuyin; Shao, Jing; Li, Si; Li, Shan; Cao, Chengxi

    2011-04-15

    To demonstrate the theoretical method on the stacking of zwitterions with a moving reaction boundary (MRB) in the accompanying paper, the relevant experiments were performed. The experimental results quantitatively show that (1) the MRB velocity, including the comparison between MRB and zwitterionic velocities, is of key importance to the design of MRB stacking; (2) a much longer front alkaline plug without sample should be injected before the sample injection for a complete stacking of the zwitterion if the sample buffer is prepared with a strong base; conversely, no such plug is needed if using a weak base as the sample buffer with proper concentration and pH value; (3) the presence of salt in the MRB system has a dramatic effect on MRB stacking if the sample solution is a strong base, but has no effect if a weak alkali is used in the sample solution; (4) all of the experiments of this paper, including the previous work, quantitatively confirm the theory and predictions shown in the accompanying paper. In addition, the so-called derivative MRB-induced re-stacking and transient FASI-induced re-stacking were also observed during the experiments, and the relevant mechanisms were briefly demonstrated with the results. The theory and its calculation procedures developed in the accompanying paper can be well used for predictions of the MRB stacking of zwitterions in CE. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks.

    PubMed

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-02-01

    Hybrid mobile applications (apps) combine the features of Web applications and "native" mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources-file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies "bridges" that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources-the ability to read and write contacts list, local files, etc.-to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content and explain why they are ineffectual. We then present NoFrak, a capability-based defense against fracking attacks. NoFrak is platform-independent, compatible with any framework and embedded browser, requires no changes to the code of the existing hybrid apps, and does not break their advertising-supported business model.

  4. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the "blocks" of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding, ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both of the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.

  5. Simulation of profile evolution from ramp-up to ramp-down and optimization of tokamak plasma termination with the RAPTOR code

    NASA Astrophysics Data System (ADS)

    Teplukhina, A. A.; Sauter, O.; Felici, F.; Merle, A.; Kim, D.; the TCV Team; the ASDEX Upgrade Team; the EUROfusion MST1 Team

    2017-12-01

    The present work demonstrates the capabilities of the transport code RAPTOR as a fast and reliable simulator of plasma profiles for the entire plasma discharge, i.e. from ramp-up to ramp-down. At this stage, the code focuses on the simulation of electron temperature and poloidal flux profiles using a prescribed equilibrium and some prescribed kinetic profiles. In this work we extend the RAPTOR transport model to include a time-varying plasma equilibrium geometry and verify the changes via comparison with ASTRA code simulations. In addition, a new ad hoc transport model based on constant gradients and suitable for simulations of L-H and H-L mode transitions has been incorporated into the RAPTOR code and validated with rapid simulations of the time evolution of the safety factor and the electron temperature over entire AUG and TCV discharges. An optimization procedure for the plasma termination phase has also been developed during this work. We define the goal of the optimization as ramping down the plasma current as fast as possible while avoiding any disruptions caused by reaching physical or technical limits. Our numerical study of this problem shows that a fast decrease of plasma elongation during the current ramp-down can help to reduce the plasma internal inductance. An early transition from H- to L-mode allows us to reduce the drop in poloidal beta, which is also important for plasma MHD stability and control. This work shows how these complex nonlinear interactions can be optimized automatically using relevant cost functions and constraints. Preliminary experimental results for TCV are demonstrated.

  6. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descent optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial improvement in the complexity of optimization problems that can be efficiently handled.
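
    The same problem shapes can be posed today with SciPy's SLSQP routine, a sequential quadratic programming method in the same spirit as NPSOL: minimize dose subject to a cost equality and simple bounds. The objective and constraint below are toy stand-ins, not the SWAN physics:

      # Dose minimization at constant cost via SQP (toy shield model).
      import numpy as np
      from scipy.optimize import minimize

      def dose(x):                    # thicker shield layers -> exponentially less dose
          return 10.0 * np.exp(-x).sum()

      cost = {"type": "eq",           # fixed budget; unit costs are invented
              "fun": lambda x: 4.0 - np.dot([1.0, 2.0, 0.5], x)}
      res = minimize(dose, x0=[1.0, 1.0, 1.0], method="SLSQP",
                     bounds=[(0.1, 3.0)] * 3, constraints=[cost])
      print(res.x, dose(res.x))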

  7. Time-Reversal Based Range Extension Technique for Ultra-Wideband (UWB) Sensors and Applications in Tactical Communications and Networking

    DTIC Science & Technology

    2007-07-16

    A key issue is to find a proper acquisition strategy and to optimize the algorithm; so far, a two-stage acquisition algorithm based on optical orthogonal codes has been investigated. System parameters, such as the pulse repetition frequency (PRF) and data rate, are programmable. Depending on the propagation environment, either the Barker code or optical orthogonal codes (OOC) are employed.

  8. An Unconventional Inchworm Actuator Based on PZT/ERFs Control Technology

    PubMed Central

    Liu, Guojun; Zhang, Yanyan; Liu, Jianfang; Li, Jianqiao; Tang, Chunxiu; Wang, Tengfei; Yang, Xuhao

    2016-01-01

    An unconventional inchworm actuator for precision positioning based on piezoelectric (PZT) actuation and electrorheological fluids (ERFs) control technology is presented. The actuator consists of an actuation unit (PZT stack pump), a fluid control unit (ERFs valve), and an execution unit (hydraulic actuator). In view of the small deformation of the PZT stack, a new structure is designed for the actuation unit, which integrates the advantages of two modes (namely, diaphragm type and piston type) of changing the pump chamber volume. In order to improve the static shear yield strength of the ERFs, a composite ERFs valve is designed, which adopts a series-parallel plate compound structure. A prototype of the inchworm actuator has been designed and manufactured in the lab. Systematic test results indicate that the displacement resolution of the unconventional inchworm actuator reaches 0.038 μm, and the maximum driving force and velocity are 42 N and 14.8 mm/s, respectively. The optimal working frequency for the maximum driving velocity is 120 Hz. The complete research and development process further confirms the feasibility of developing a new type of high-performance inchworm actuator based on PZT actuation and ERFs control technology, which provides a reference for the future development of new actuator types. PMID:27022234

  10. [Complexity level simulation in the German diagnosis-related groups system: the financial effect of coding of comorbidity diagnostics in urology].

    PubMed

    Wenke, A; Gaber, A; Hertle, L; Roeder, N; Pühse, G

    2012-07-01

    Precise and complete coding of diagnoses and procedures is of value for optimizing revenues within the German diagnosis-related groups (G-DRG) system. The implementation of effective structures for coding is cost-intensive. The aim of this study was to determine whether these higher costs can be recouped through complete acquisition of comorbidities and complications. Calculations were based on DRG data of the Department of Urology, University Hospital of Münster, Germany, covering all patients treated in 2009. The data were regrouped and subjected to a process of simulation (increase and decrease of patient clinical complexity levels, PCCL) with the help of recently developed software. In urology, the PCCL and the resulting profits were found to depend strongly on the quantity and quality of the coding of secondary diagnoses. Departmental budgetary procedures can be optimized when coding is effective. The new simulation tool can be a valuable aid to improving the profits available for distribution. Nevertheless, the calculation of time use and financial needs for this procedure is subject to specific departmental terms and conditions. Completeness of coding of (secondary) diagnoses must be the ultimate administrative goal of patient case documentation in urology.

  11. Acceleration of block-matching algorithms using a custom instruction-based paradigm on a Nios II microprocessor

    NASA Astrophysics Data System (ADS)

    González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco

    2013-12-01

    This contribution focuses on the optimization of matching-based motion estimation algorithms, widely used in video coding standards, using an Altera custom instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is obtained before the optimization to locate code bottlenecks; a custom instruction set is then created and added to the specific design, enhancing the original system. In addition, every possible combination of on-chip memory and SDRAM has been tested to achieve the best performance. The final throughput of the complete designs is reported. This manuscript outlines a low-cost system, mapped using very large scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks, and shows the best combination of on-chip memory and SDRAM for the Nios II processor.
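
    For reference, the kernel that such custom instructions replace is the full-search block-matching inner loop. A plain numpy version of the sum-of-absolute-differences (SAD) search, assuming two 8-bit grayscale frames, might look like this:

        # Full-search block matching with a SAD metric (reference implementation).
        import numpy as np

        def best_match(ref, cur, by, bx, B=16, R=8):
            """Motion vector for the BxB block at (by, bx), searched within +/-R."""
            block = cur[by:by+B, bx:bx+B].astype(np.int32)
            best, best_sad = (0, 0), np.inf
            for dy in range(-R, R + 1):
                for dx in range(-R, R + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= ref.shape[0] - B and 0 <= x <= ref.shape[1] - B:
                        sad = np.abs(ref[y:y+B, x:x+B].astype(np.int32) - block).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            return best, best_sad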

  12. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics, and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy-efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
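
    OpenTuner's actual API is not reproduced here; the sketch below only shows the generic shape of such an auto-tuning loop, randomly sampling the three dimensions named above under a joint time/energy objective. The measure() function is a deterministic placeholder for compiling, running, and metering the kernel.

        # Generic random-search auto-tuner over layout / compiler flags / schedule.
        import random

        SPACE = {"layout":    ["row_major", "col_major", "blocked"],
                 "opt_level": ["-O2", "-O3", "-O3 -funroll-loops"],
                 "schedule":  ["static", "dynamic", "guided"]}

        def measure(cfg):  # placeholder: replace with a real compile-and-run step
            t = {"row_major": 1.2, "col_major": 1.5, "blocked": 1.0}[cfg["layout"]]
            e = {"-O2": 50.0, "-O3": 45.0, "-O3 -funroll-loops": 43.0}[cfg["opt_level"]]
            e *= {"static": 1.0, "dynamic": 0.95, "guided": 0.9}[cfg["schedule"]]
            return t, e  # (seconds, joules)

        def tune(trials=100, w=0.5):
            best_cfg, best = None, float("inf")
            for _ in range(trials):
                cfg = {k: random.choice(v) for k, v in SPACE.items()}
                t, e = measure(cfg)
                score = w * t + (1 - w) * e / 50.0  # joint, roughly normalized
                if score < best:
                    best_cfg, best = cfg, score
            return best_cfg

        print(tune())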

  13. Optimization of 3D Field Design

    NASA Astrophysics Data System (ADS)

    Logan, Nikolas; Zhu, Caoxiang

    2017-10-01

    Recent progress in 3D tokamak modeling is now leveraged to create a conceptual design of new external 3D field coils for the DIII-D tokamak. Using the IPEC dominant mode as a target spectrum, the Finding Optimized Coils Using Space-curves (FOCUS) code optimizes the currents and 3D geometry of multiple coils to maximize the total set's resonant coupling. The optimized coils are individually distorted in space, creating toroidal "arrays" containing a variety of shapes that often wrap around a significant poloidal extent of the machine. The generalized perturbed equilibrium code (GPEC) is used to determine optimally efficient spectra for driving total, core, and edge neoclassical toroidal viscosity (NTV) torque, and these too provide targets for the optimization of 3D coil designs. These conceptual designs represent a fundamentally new approach to 3D coil design for tokamaks, targeting desired plasma physics phenomena. Optimized coil sets based on plasma response theory will be relevant to designs for future reactors and to any active machine. External coils, in particular, must be optimized for reliable and efficient fusion reactor designs. Work supported by the US Department of Energy under DE-AC02-09CH11466.
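
    Although the coil-geometry search is nonlinear, the current-assignment part of such problems reduces to regularized least squares against a target spectrum, as in the toy sketch below (the coupling matrix and target are random stand-ins, not IPEC or FOCUS output):

        # Regularized least-squares fit of coil currents to a target mode spectrum.
        import numpy as np

        rng = np.random.default_rng(0)
        G = rng.normal(size=(12, 6))  # toy coupling: 12 harmonics x 6 coils
        b = rng.normal(size=12)       # toy target (e.g., dominant-mode) spectrum
        lam = 0.1                     # Tikhonov weight limiting coil currents
        I = np.linalg.solve(G.T @ G + lam * np.eye(6), G.T @ b)
        print(I)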

  14. Anode optimization for miniature electronic brachytherapy X-ray sources using Monte Carlo and computational fluid dynamic codes

    PubMed Central

    Khajeh, Masoud; Safigholi, Habib

    2015-01-01

    A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose with the tumor depth. First, a Monte Carlo (MC) optimization was performed on the tungsten target-buffer layer thicknesses versus energy such that minimum X-ray attenuation occurred. A second optimization was performed on the selection of the anode shape, based on the Monte Carlo in-water TG-43U1 anisotropy function. This optimization was carried out to bring the dose anisotropy function closer to unity at any angle from 0° to 170°. Three anode shapes were considered: cylindrical, spherical, and conical. Moreover, the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated with a computational fluid dynamics (CFD) code. The CFD characterization criteria were the minimum temperature on the anode shape and cooling water, and the pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563

  15. Deformation induced microtwins and stacking faults in aluminum single crystal.

    PubMed

    Han, W Z; Cheng, G M; Li, S X; Wu, S D; Zhang, Z F

    2008-09-12

    Microtwins and stacking faults in plastically deformed aluminum single crystal were successfully observed by high-resolution transmission electron microscopy. The occurrence of these microtwins and stacking faults is directly related to the specially designed crystallographic orientation, because they had not previously been observed in pure aluminum single crystals or polycrystals. Based on this new finding, we propose a universal dislocation-based model for judging whether nucleation of deformation twins and stacking faults is favored in various face-centered-cubic metals, in terms of the critical stress for dislocation glide or twinning, by considering intrinsic factors such as stacking fault energy, crystallographic orientation, and grain size. The finding of deformation-induced microtwins and stacking faults in aluminum single crystal and the proposed model should be of interest to a broad community.

  16. A tunable electrochromic fabry-perot filter for adaptive optics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaich, Jonathan David; Kammler, Daniel R.; Ambrosini, Andrea

    2006-10-01

    The potential for electrochromic (EC) materials to be incorporated into a Fabry-Perot (FP) filter to allow modest amounts of tuning was evaluated by both experimental methods and modeling. A combination of chemical vapor deposition (CVD), physical vapor deposition (PVD), and electrochemical methods was used to produce an EC-FP film stack consisting of an EC WO3/Ta2O5/NiOxHy film stack (with indium-tin-oxide electrodes) sandwiched between two Si3N4/SiO2 dielectric reflector stacks. A process to produce a NiOxHy charge storage layer that freed the EC stack from dependence on atmospheric humidity and allowed construction of this complex EC-FP stack was developed. The refractive index (n) and extinction coefficient (k) for each layer in the EC-FP film stack were measured between 300 and 1700 nm. A prototype EC-FP filter was produced that had a transmission at 500 nm of 36% and a FWHM of 10 nm. A general modeling approach was developed that takes into account the desired pass band location, pass band width, required transmission, and EC optical constants in order to estimate the maximum tuning from an EC-FP filter. Modeling shows that minor thickness changes in the prototype stack developed in this project should yield a filter with a transmission at 600 nm of 33% and a FWHM of 9.6 nm, which could be tuned to 598 nm with a FWHM of 12.1 nm and a transmission of 16%. Additional modeling shows that if the EC WO3 absorption centers were optimized, then a shift from 600 nm to 598 nm could be made with a FWHM of 11.3 nm and a transmission of 20%. If (at 600 nm) the FWHM is decreased to 1 nm and transmission maintained at a reasonable level (e.g., 30%), only fractions of a nm of tuning would be possible with the film stack considered in this study. These tradeoffs may improve at other wavelengths or with EC materials different from those considered here. Finally, based on our limited investigation and material set, the severe absorption associated with the refractive index change suggests that incorporating EC materials into phase-correcting spatial light modulators (SLMs) would allow for only negligible phase correction before transmission losses became too severe. However, we would like to emphasize that other EC materials may allow sufficient phase correction with limited absorption, which could make this approach attractive.

  17. Correlation and Stacking of Relative Paleointensity and Oxygen Isotope Data

    NASA Astrophysics Data System (ADS)

    Lurcock, P. C.; Channell, J. E.; Lee, D.

    2012-12-01

    The transformation of a depth-series into a time-series is routinely implemented in the geological sciences. This transformation often involves correlation of a depth-series to an astronomically calibrated time-series. Eyeball tie-points with linear interpolation are still regularly used, although these have the disadvantages of being non-repeatable and not based on firm correlation criteria. Two automated correlation methods are compared: the simulated annealing algorithm (Huybers and Wunsch, 2004) and the Match protocol (Lisiecki and Lisiecki, 2002). Simulated annealing seeks to minimize energy (cross-correlation) as "temperature" is slowly decreased. The Match protocol divides records into intervals, applies penalty functions that constrain accumulation rates, and minimizes the sum of the squares of the differences between two series while maintaining the data sequence in each series. Paired relative paleointensity (RPI) and oxygen isotope records, such as those from IODP Site U1308 and/or reference stacks such as LR04 and PISO, are warped using known warping functions, and then the un-warped and warped time-series are correlated to evaluate the efficiency of the correlation methods. Correlations are performed in tandem to simultaneously optimize RPI and oxygen isotope data. Noise spectra are introduced at differing levels to determine correlation efficiency as noise levels change. A third potential method, known as dynamic time warping, involves minimizing the sum of distances between correlated point pairs across the whole series. A "cost matrix" between the two series is analyzed to find a least-cost path through the matrix. This least-cost path is used to nonlinearly map the time/depth of one record onto the depth/time of another. Dynamic time warping can be expanded to more than two dimensions and used to stack multiple time-series. This procedure can improve on arithmetic stacks, which often lose coherent high-frequency content during the stacking process.
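
    A minimal version of the dynamic time warping step described above, assuming simple absolute-difference costs between samples:

        # Dynamic time warping: fill the cost matrix, then backtrack the
        # least-cost monotone path that maps one series onto the other.
        import numpy as np

        def dtw_path(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(a[i - 1] - b[j - 1])
                    D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            path, (i, j) = [], (n, m)
            while (i, j) != (0, 0):
                path.append((i - 1, j - 1))
                i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                           key=lambda p: D[p])
            return D[n, m], path[::-1]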

  18. Evaluation of Frameworks for HSCT Design Optimization

    NASA Technical Reports Server (NTRS)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  19. Optimized nonorthogonal transforms for image compression.

    PubMed

    Guleryuz, O G; Orchard, M T

    1997-01-01

    The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operation of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.
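
    The energy compaction idea is easy to illustrate: measure the fraction of signal energy captured by the k strongest coefficients. The toy comparison below pits a data-adapted KLT basis against the fixed DCT on synthetic AR(1) data; it illustrates the measure only, not the paper's nonorthogonal transform design.

        # Energy compaction of KLT vs. DCT coefficients on correlated toy data.
        import numpy as np
        from scipy.fftpack import dct

        rng = np.random.default_rng(1)
        n, N = 16, 4000
        x = np.zeros((N, n))
        for t in range(1, n):  # AR(1) rows with correlation 0.9
            x[:, t] = 0.9 * x[:, t - 1] + rng.normal(size=N)

        def compaction(coeffs, k=4):  # energy fraction in k strongest coefficients
            e = np.mean(coeffs ** 2, axis=0)
            return np.sort(e)[::-1][:k].sum() / e.sum()

        _, V = np.linalg.eigh(np.cov(x.T))  # KLT basis from the sample covariance
        print("KLT:", compaction(x @ V), "DCT:", compaction(dct(x, norm="ortho")))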

  20. 150. ARA-III Reactor building (ARA-608) Sections. Show high-bay section, heater ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    150. ARA-III Reactor building (ARA-608) Sections. Show high-bay section, heater stack, and depth of reactor, piping, and heater pits. Aerojet-General 880-area/GCRE-608-A-3. Date: February 1958. INEEL index code no. 063-0608-00-013-102613. - Idaho National Engineering Laboratory, Army Reactors Experimental Area, Scoville, Butte County, ID

  1. 40 CFR 75.57 - General recordkeeping provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., which may use up to 20 load ranges for stack or fuel flow, as specified in the monitoring plan; (5... SO2 concentration using Codes 1-55 in Table 4a of this section. (2) For flow rate during unit....53; (ii) Date and hour; (iii) Hourly average volumetric flow rate (in scfh, rounded to the nearest...

  2. Simple Numerical Simulation of Strain Measurement

    NASA Technical Reports Server (NTRS)

    Tai, H.

    2002-01-01

    By adopting the basic principle of the reflection (and transmission) of a plane polarized electromagnetic wave incident normal to a stack of films of alternating refractive index, a simple numerical code was written to simulate the maximum reflectivity (transmittivity) of a fiber optic Bragg grating corresponding to various non-uniform strain conditions, including the photo-elastic effect in certain cases.
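
    The calculation described is essentially the standard 2x2 transfer-matrix method. A sketch for normal incidence on a lossless, non-magnetic stack, with illustrative layer values:

        # Transfer-matrix reflectivity of a film stack at normal incidence.
        import numpy as np

        def stack_reflectivity(n_layers, d_layers, lam, n_in=1.0, n_out=1.5):
            k0 = 2 * np.pi / lam
            M = np.eye(2, dtype=complex)
            for n, d in zip(n_layers, d_layers):
                phi = k0 * n * d  # phase thickness of the layer
                M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                                  [1j * n * np.sin(phi), np.cos(phi)]])
            num = n_in * M[0, 0] + n_in * n_out * M[0, 1] - M[1, 0] - n_out * M[1, 1]
            den = n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1]
            return abs(num / den) ** 2

        # quarter-wave stack of alternating index, highly reflective at 1550 nm
        n = [2.1, 1.45] * 10
        d = [1550e-9 / (4 * ni) for ni in n]
        print(stack_reflectivity(n, d, 1550e-9))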

  3. Optimization of beam shaping assembly based on D-T neutron generator and dose evaluation for BNCT

    NASA Astrophysics Data System (ADS)

    Naeem, Hamza; Chen, Chaobin; Zheng, Huaqing; Song, Jing

    2017-04-01

    The feasibility of developing an epithermal neutron beam for a boron neutron capture therapy (BNCT) facility based on a high-intensity D-T fusion neutron generator (HINEG) is investigated using the Monte Carlo code SuperMC (Super Monte Carlo simulation program for nuclear and radiation processes). The SuperMC code is used to determine and optimize the final configuration of the beam shaping assembly (BSA). The optimal BSA design is a cylindrical geometry consisting of a natural uranium sphere (14 cm) as a neutron multiplier, AlF3 and TiF3 as moderators (20 cm each), Cd (1 mm) as a thermal neutron filter, Bi (5 cm) as a gamma shield, and Pb as a reflector and collimator to guide neutrons towards the exit window. The epithermal neutron beam flux of the proposed model is 5.73 × 10^9 n/cm^2·s, and the other dosimetric parameters for BNCT reported in IAEA-TECDOC-1223 have been verified. The phantom dose analysis shows that the designed BSA is accurate, efficient, and suitable for BNCT applications. Thus, the Monte Carlo code SuperMC is concluded to be capable of simulating the BSA and the dose calculation for BNCT, and a high epithermal flux can be achieved using the proposed BSA.

  4. Performance optimization of Qbox and WEST on Intel Knights Landing

    NASA Astrophysics Data System (ADS)

    Zheng, Huihuo; Knight, Christopher; Galli, Giulia; Govoni, Marco; Gygi, Francois

    We present the optimization of the electronic structure codes Qbox and WEST targeting the Intel® Xeon Phi™ processor, codenamed Knights Landing (KNL). Qbox is an ab-initio molecular dynamics code based on plane wave density functional theory (DFT), and WEST is a post-DFT code for excited state calculations within many-body perturbation theory. Both Qbox and WEST employ highly scalable algorithms which enable accurate large-scale electronic structure calculations on leadership-class supercomputer platforms beyond 100,000 cores, such as Mira and Theta at the Argonne Leadership Computing Facility. In this work, features of the KNL architecture (e.g., hierarchical memory) are explored to achieve higher performance in key algorithms of the Qbox and WEST codes and to develop a road map for further development targeting next-generation computing architectures. In particular, the optimizations of the Qbox and WEST codes on the KNL platform target efficient large-scale electronic structure calculations of nanostructured materials exhibiting complex structures, and prediction of their electronic and thermal properties for use in solar and thermal energy conversion devices. This work was supported by MICCoM, as part of the Comp. Mats. Sci. Program funded by the U.S. DOE, Office of Sci., BES, MSE Division. This research used resources of the ALCF, which is a DOE Office of Sci. User Facility under Contract DE-AC02-06CH11357.

  5. Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing

    2017-04-20

    The problem of finding the number and optimal positions of relay nodes for restoring the network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard), and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial-time heuristic algorithm, namely, Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on a Minimum Spanning Tree (MST), a Euclidean Steiner Minimal Tree (ESMT), or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes, after which linear programming is applied to choose the optimal relay nodes and compute their connection links with the terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals, as well as any density distribution of terminals. The performance and complexity of RPSNC are analyzed, and the approach is validated through simulation experiments.
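
    The candidate-generation step can be sketched with scipy's Delaunay triangulation, taking triangle centroids as candidate relay positions; the linear-programming selection and the equilibrium refinement that follow are omitted here.

        # Delaunay-based candidate relay generation for partitioned terminals.
        import numpy as np
        from scipy.spatial import Delaunay

        terminals = np.random.default_rng(2).uniform(0, 100, size=(12, 2))
        tri = Delaunay(terminals)
        candidates = terminals[tri.simplices].mean(axis=1)  # one centroid per triangle
        print(len(candidates), "candidate relay positions")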

  6. Codon Optimizing for Increased Membrane Protein Production: A Minimalist Approach.

    PubMed

    Mirzadeh, Kiavash; Toddo, Stephen; Nørholm, Morten H H; Daley, Daniel O

    2016-01-01

    Reengineering a gene with synonymous codons is a popular approach for increasing production levels of recombinant proteins. Here we present a minimalist alternative to this method, which samples synonymous codons only at the second and third positions rather than the entire coding sequence. As demonstrated with two membrane-embedded transporters in Escherichia coli, the method was more effective than optimizing the entire coding sequence. The method we present is PCR based and requires three simple steps: (1) the design of two PCR primers, one of which is degenerate; (2) the amplification of a mini-library by PCR; and (3) screening for high-expressing clones.
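
    The minimalist idea is easy to sketch: hold the coding sequence fixed except for synonymous swaps at codons 2 and 3, the positions a degenerate primer would sample. The mini codon table below is partial and purely illustrative.

        # Sample synonymous variants at codons 2 and 3 of a toy ORF.
        import random

        SYNONYMS = {  # partial synonymous-codon table (standard genetic code)
            "CTG": ["CTG", "CTA", "CTC", "CTT", "TTA", "TTG"],  # Leu
            "AAA": ["AAA", "AAG"],                              # Lys
            "GGC": ["GGC", "GGA", "GGG", "GGT"],                # Gly
        }

        def variants(cds, n=5):
            codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
            out = set()
            while len(out) < n:
                v = codons[:]
                for pos in (1, 2):  # codons 2 and 3 (0-based indices 1 and 2)
                    v[pos] = random.choice(SYNONYMS.get(v[pos], [v[pos]]))
                out.add("".join(v))
            return sorted(out)

        print(variants("ATGCTGAAAGGC"))  # ATG-Leu-Lys-Gly toy sequence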

  7. Stacking interactions and DNA intercalation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dr. Shen; Cooper, Valentino R; Thonhauser, Prof. Timo

    2009-01-01

    The relationship between stacking interactions and the intercalation of proflavine and ellipticine within DNA is investigated using a nonempirical van der Waals density functional for the correlation energy. Our results, employing a binary stack model, highlight fundamental, qualitative differences between base-pair/base-pair interactions and those of the stacked intercalator-base-pair system. The most notable result is the paucity of torque, which so distinctively defines the twist of DNA. Surprisingly, this model, when combined with a constraint on the twist of the surrounding base-pair steps to match the observed unwinding of the sugar-phosphate backbone, was sufficient to explain the experimentally observed proflavine intercalator configuration. Our extensive mapping of the potential energy surface of base-pair-intercalator interactions can provide valuable information for future nonempirical studies of DNA intercalation dynamics.

  8. Determination of ammonium ion using a reagentless amperometric biosensor based on immobilized alanine dehydrogenase.

    PubMed

    Tan, Ling Ling; Musa, Ahmad; Lee, Yook Heng

    2011-01-01

    The use of the enzyme alanine dehydrogenase (AlaDH) for the determination of the ammonium ion (NH4+) usually requires the simultaneous addition of pyruvate substrate and reduced nicotinamide adenine dinucleotide (NADH) to effect the reaction. This addition of reagents is inconvenient when an enzyme biosensor based on AlaDH is used. To resolve the problem, a novel reagentless amperometric biosensor using a stacked methacrylic membrane system coated onto a screen-printed carbon paste electrode (SPE) for NH4+ ion determination is described. A mixture of pyruvate and NADH was immobilized in a low molecular weight poly(2-hydroxyethyl methacrylate) (pHEMA) membrane, which was then deposited over a photocured pHEMA membrane (photoHEMA) containing the alanine dehydrogenase (AlaDH) enzyme. Due to the enzymatic reaction of AlaDH and the pyruvate substrate, NH4+ was consumed in the process, and thus the signal from the electrocatalytic oxidation of NADH at an applied potential of +0.55 V was proportional to the NH4+ ion concentration under optimal conditions. The stacked methacrylate membranes responded rapidly and linearly to changes in NH4+ ion concentration between 10 and 100 mM, with a detection limit of 0.18 mM NH4+ ion. The reproducibility of the amperometric NH4+ biosensor yielded low relative standard deviations of 1.4-4.9%. The stacked membrane biosensor has been successfully applied to the determination of NH4+ ion in spiked river water samples without pretreatment. A good correlation was found between the analytical results for NH4+ obtained from the biosensor and those from the Nessler spectrophotometric method.

  9. One-electron oxidation of individual DNA bases and DNA base stacks.

    PubMed

    Close, David M

    2010-02-04

    In calculations performed with DFT there is a tendency of the purine cation to be delocalized over several bases in the stack. Attempts have been made to see if methods other than DFT can be used to calculate localized cations in stacks of purines, and to relate the calculated hyperfine couplings with known experimental results. To calculate reliable hyperfine couplings it is necessary to have an adequate description of spin polarization which means that electron correlation must be treated properly. UMP2 theory has been shown to be unreliable in estimating spin densities due to overestimates of the doubles correction. Therefore attempts have been made to use quadratic configuration interaction (UQCISD) methods to treat electron correlation. Calculations on the individual DNA bases are presented to show that with UQCISD methods it is possible to calculate hyperfine couplings in good agreement with the experimental results. However these UQCISD calculations are far more time-consuming than DFT calculations. Calculations are then extended to two stacked guanine bases. Preliminary calculations with UMP2 or UQCISD theory on two stacked guanines lead to a cation localized on a single guanine base.

  10. Measurement of cross-sections for the 93Nb(p,n)93mMo and 93Nb(p,pn)92mNb reactions up to ∼20 MeV energy

    NASA Astrophysics Data System (ADS)

    Lawriniang, B.; Ghosh, R.; Badwar, S.; Vansola, V.; Santhi Sheela, Y.; Suryanarayana, S. V.; Naik, H.; Naik, Y. P.; Jyrwa, B.

    2018-05-01

    Excitation functions of the 93Nb(p,n)93mMo and 93Nb(p,pn)92mNb reactions were measured from threshold energies to ∼20 MeV by employing the stacked-foil activation technique in combination with off-line γ-ray spectroscopy at the BARC-TIFR Pelletron facility, Mumbai. For the 20 MeV proton beam, the energy degradation along the stack was calculated using the computer code SRIM 2013. The proton beam intensity was determined via the natCu(p,x)62Zn monitor reaction. The experimental data obtained were compared with the theoretical results from TALYS-1.8 as well as with the literature data available in EXFOR. It was found that for the 93Nb(p,n)93mMo reaction, the present data are in close agreement with some of the recent literature data and the theoretical values based on TALYS-1.8, but are lower than the other literature data. In the case of the 93Nb(p,pn)92mNb reaction, the present data agree very well with the literature data and the theoretical values.
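
    Measurements of this kind typically convert net gamma-peak counts to a cross-section with the standard activation relation; a sketch with placeholder inputs (not values from this experiment):

        # Activation cross-section from counting data (all inputs illustrative).
        import numpy as np

        def xs_from_activity(C, lam, N_t, phi, eff, I_g, t_irr, t_cool, t_meas):
            """C: net peak counts; lam: decay constant [1/s]; N_t: areal target
            density [atoms/cm^2]; phi: beam intensity [protons/s]; eff: detector
            efficiency; I_g: gamma emission probability; times in s. Returns cm^2."""
            growth = 1.0 - np.exp(-lam * t_irr)  # build-up during irradiation
            decay = np.exp(-lam * t_cool)        # loss while cooling
            count = 1.0 - np.exp(-lam * t_meas)  # decay over the counting window
            return C * lam / (N_t * phi * eff * I_g * growth * decay * count)

        sigma = xs_from_activity(C=1.5e5, lam=np.log(2) / (6.85 * 3600),  # 93mMo T1/2
                                 N_t=2.8e20, phi=1.0e11, eff=0.004, I_g=0.99,
                                 t_irr=1800, t_cool=3600, t_meas=1800)
        print(f"{sigma * 1e27:.1f} mb")  # cm^2 -> millibarn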

  11. Classification and Sequential Pattern Analysis for Improving Managerial Efficiency and Providing Better Medical Service in Public Healthcare Centers

    PubMed Central

    Chung, Sukhoon; Rhee, Hyunsill; Suh, Yongmoo

    2010-01-01

    Objectives This study sought to find answers to the following questions: 1) Can we predict whether a patient will revisit a healthcare center? 2) Can we anticipate diseases of patients who revisit the center? Methods For the first question, we applied 5 classification algorithms (decision tree, artificial neural network, logistic regression, Bayesian networks, and Naïve Bayes) and the stacking-bagging method for building classification models. To solve the second question, we performed sequential pattern analysis. Results We determined: 1) In general, the most influential variables which impact whether a patient of a public healthcare center will revisit it or not are personal burden, insurance bill, period of prescription, age, systolic pressure, name of disease, and postal code. 2) The best plain classification model is dependent on the dataset. 3) Based on average of classification accuracy, the proposed stacking-bagging method outperformed all traditional classification models and our sequential pattern analysis revealed 16 sequential patterns. Conclusions Classification models and sequential patterns can help public healthcare centers plan and implement healthcare service programs and businesses that are more appropriate to local residents, encouraging them to revisit public health centers. PMID:21818426
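
    The stacking-bagging combination maps directly onto standard scikit-learn components; the sketch below uses synthetic data, not the study's patient records or feature set.

        # Stacking of bagged heterogeneous base classifiers with a meta-learner.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import BaggingClassifier, StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
        base = [("tree", BaggingClassifier(DecisionTreeClassifier(), n_estimators=25)),
                ("nb", BaggingClassifier(GaussianNB(), n_estimators=25)),
                ("lr", LogisticRegression(max_iter=500))]
        clf = StackingClassifier(estimators=base, final_estimator=LogisticRegression())
        print(cross_val_score(clf, X, y, cv=5).mean())  # revisit / no-revisit accuracy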

  12. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
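
    The Laplacian pyramid mentioned above stores, at each level, the detail lost between an image and its blurred, downsampled version; a compact sketch:

        # Laplacian pyramid: band-pass detail images plus a coarse residual.
        import numpy as np
        from scipy import ndimage

        def laplacian_pyramid(img, levels=4, sigma=1.0):
            pyr, cur = [], img.astype(float)
            for _ in range(levels):
                low = ndimage.gaussian_filter(cur, sigma)[::2, ::2]  # blur + decimate
                up = ndimage.zoom(low, 2.0, order=1)[:cur.shape[0], :cur.shape[1]]
                pyr.append(cur - up)  # band-pass detail at this scale
                cur = low
            pyr.append(cur)           # coarse residual
            return pyr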

  13. Computerized systems analysis and optimization of aircraft engine performance, weight, and life cycle costs

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1979-01-01

    The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance, and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.

  14. An all-digital receiver for satellite audio broadcasting signals using trellis coded quasi-orthogonal code-division multiplexing

    NASA Astrophysics Data System (ADS)

    Braun, Walter; Eglin, Peter; Abello, Ricard

    1993-02-01

    Spread-spectrum code division multiplex is an attractive scheme for the transmission of multiple signals over a satellite transponder. By using orthogonal or quasi-orthogonal spreading codes, the interference between the users can be virtually eliminated. However, the acquisition and tracking of the spreading code phase cannot take advantage of the code orthogonality, since sequential acquisition and delay-locked loop tracking depend on correlation with code phases other than the optimal despreading phase. Hence, synchronization is a critical issue in such a system. Demonstration hardware for the verification of the orthogonal CDM synchronization and data transmission concept is being designed and implemented. The system concept, the synchronization scheme, and the implementation are described. The performance of the system is discussed based on computer simulations.
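
    The orthogonality argument, and why code-phase acquisition is the critical step, can be seen with Walsh-Hadamard spreading codes: despreading at the correct phase separates users exactly, while a one-chip offset destroys the orthogonality. A toy demonstration:

        # Synchronous orthogonal CDM with Walsh-Hadamard codes.
        import numpy as np
        from scipy.linalg import hadamard

        H = hadamard(8)              # 8 mutually orthogonal +/-1 codes of length 8
        bits = np.array([1, -1, 1])  # one symbol per user (users on rows 1..3)
        tx = sum(b * H[i] for i, b in enumerate(bits, start=1))

        print([int(tx @ H[i]) // 8 for i in (1, 2, 3)])  # aligned: recovers [1, -1, 1]
        print(int(tx @ np.roll(H[3], 1)) // 8)  # one-chip offset of user 3's code
                                                # picks up user 2's symbol instead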

  15. Design Oriented Structural Modeling for Airplane Conceptual Design Optimization

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1999-01-01

    The main goal for research conducted with the support of this grant was to develop design-oriented structural optimization methods for the conceptual design of airplanes. Traditionally, in conceptual design, airframe weight is estimated with statistical equations developed over years of fitting airplane weight data in databases of similar existing airplanes. Utilization of such regression equations for the design of new airplanes can be justified only if the new airplanes use structural technology similar to that of the airplanes in those weight databases. If any new structural technology is to be pursued, or any new unconventional configurations designed, the statistical weight equations cannot be used. In such cases, structural weight estimation must be based on rigorous, physics-based structural analysis and optimization of the airframes under consideration. Work under this grant explored airframe design-oriented structural optimization techniques along two lines of research: methods based on "fast" design-oriented finite element technology, and methods based on equivalent-plate/equivalent-shell models of airframes, in which the vehicle is modeled as an assembly of plate and shell components, each simulating a lifting surface or nacelle/fuselage piece. Since response to changes in geometry is essential in the conceptual design of airplanes, as is the capability to optimize the shape itself, research supported by this grant sought to develop efficient techniques for parametrization of airplane shape and sensitivity analysis with respect to shape design variables. Towards the end of the grant period a prototype automated structural analysis code designed to work with the NASA aircraft synthesis conceptual design code ACSYNT was delivered to NASA Ames.

  16. Charge optimized many-body potential for aluminum.

    PubMed

    Choudhary, Kamal; Liang, Tao; Chernatynskiy, Aleksandr; Lu, Zizhe; Goyal, Anuj; Phillpot, Simon R; Sinnott, Susan B

    2015-01-14

    An interatomic potential for Al is developed within the third generation of the charge optimized many-body (COMB3) formalism. The database used for the parameterization of the potential consists of experimental data and the results of first-principles and quantum chemical calculations. The potential exhibits reasonable agreement with the cohesive energy, lattice parameters, elastic constants, bulk and shear moduli, surface energies, stacking fault energies, point defect formation energies, and phase order of metallic Al from experiments and density functional theory. In addition, the predicted phonon dispersion is in good agreement with the experimental data and first-principles calculations. Importantly for the prediction of mechanical behavior, the unstable stacking fault energetics along the [Formula: see text] direction on the (1 1 1) plane are similar to those obtained from first-principles calculations. The polycrystal, when strained, shows responses that are physical, and the overall behavior is consistent with experimental observations.

  17. Opportunities for shear energy scaling in bulk acoustic wave resonators.

    PubMed

    Jose, Sumy; Hueting, Raymond J E

    2014-10-01

    An important energy loss contribution in bulk acoustic wave resonators is formed by so-called shear waves, transversal waves that propagate vertically through the devices with a horizontal motion. In this work, we report for the first time scaling of the shear-confined spots, i.e., spots containing a high concentration of shear wave displacement, controlled by the frame region width at the edge of the resonator. We also demonstrate a novel methodology to arrive at an optimum frame region width for spurious mode suppression and shear wave confinement. This methodology makes use of dispersion curves obtained from finite-element method (FEM) eigenfrequency simulations to arrive at an optimum frame region width. The frame region optimization is demonstrated for solidly mounted resonators employing several shear-wave-optimized reflector stacks. Finally, the FEM simulation results are compared with measurements for resonators with Ta2O5/SiO2 stacks, showing suppression of the spurious modes.

  18. 40 CFR 51.118 - Stack height provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... exceeds good engineering practice or by any other dispersion technique, except as provided in § 51.118(b... based on a good engineering practice stack height that exceeds the height allowed by § 51.100(ii) (1) or... actual stack height of any source. (b) The provisions of § 51.118(a) shall not apply to (1) stack heights...

  19. Subspace-Aware Index Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.

    In this paper, we generalize the well-known index coding problem to exploit structure in the source data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding), as opposed to the traditional index coding problem, which is subspace-unaware. Also, we propose an efficient algorithm based on the alternating minimization approach to obtain near-optimal index codes for both the subspace-aware and -unaware cases. In conclusion, our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.
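
    The subspace idea can be sketched as a low-rank alternating least-squares factorization: once the messages X are factored as X ≈ UV, coding can operate on the small coefficient factor V rather than on the raw data. Toy data below, not the paper's exact formulation:

        # Alternating minimization for a rank-r factorization X ~= U V.
        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 60))  # rank-3 messages
        r = 3
        U = rng.normal(size=(40, r))
        for _ in range(50):
            V = np.linalg.lstsq(U, X, rcond=None)[0]        # fix U, solve for V
            U = np.linalg.lstsq(V.T, X.T, rcond=None)[0].T  # fix V, solve for U
        print(np.linalg.norm(X - U @ V) / np.linalg.norm(X))  # ~0: subspace captured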

  20. Variable weight spectral amplitude coding for multiservice OCDMA networks

    NASA Astrophysics Data System (ADS)

    Seyedzadeh, Saleh; Rahimian, Farzad Pour; Glesk, Ivan; Kakaee, Majid H.

    2017-09-01

    The emergence of heterogeneous data traffic such as voice over IP, video streaming and online gaming have demanded networks with capability of supporting quality of service (QoS) at the physical layer with traffic prioritisation. This paper proposes a new variable-weight code based on spectral amplitude coding for optical code-division multiple-access (OCDMA) networks to support QoS differentiation. The proposed variable-weight multi-service (VW-MS) code relies on basic matrix construction. A mathematical model is developed for performance evaluation of VW-MS OCDMA networks. It is shown that the proposed code provides an optimal code length with minimum cross-correlation value when compared to other codes. Numerical results for a VW-MS OCDMA network designed for triple-play services operating at 0.622 Gb/s, 1.25 Gb/s and 2.5 Gb/s are considered.
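
    The variable-weight idea can be illustrated with a toy binary code matrix in which the row weight sets a user's service level while any two rows overlap in at most one chip; this ad hoc matrix is not the paper's VW-MS construction.

        # Variable-weight spectral codes: weights differ, cross-correlation <= 1.
        import numpy as np

        C = np.array([[1, 1, 1, 1, 0, 0, 0, 0, 0],   # weight 4: premium service
                      [1, 0, 0, 0, 1, 1, 0, 0, 0],   # weight 3
                      [0, 1, 0, 0, 1, 0, 1, 0, 0]])  # weight 3
        G = C @ C.T
        print(np.diag(G))               # per-user code weights
        print(G - np.diag(np.diag(G)))  # cross-correlations, all <= 1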
